Reddit announced today that it’s bringing on the team from Oterlu, a Gothenburg, Sweden-based startup that develops machine learning-powered content moderation tools. The Oterlu team will join Reddit’s Safety team to build native machine learning moderation models that can quickly and accurately detect harmful content across a range of languages, Reddit says, as well as new safety tools for Reddit moderators.
The company says the announcement is part of its ongoing efforts to invest in and grow the internal Safety team that oversees its content policy.