💂 UK's Online Safety Bill Finally Passed by Parliament
More than four years after it was first introduced, the UK's Online Safety Bill has finally passed parliament. Recent changes to the bill will force social media companies to create and execute plans to remove content deemed harmful to children, or to prevent that content from being published in the first place. From Reuters: "If companies do not comply, media regulator Ofcom will be able to issue fines of up to 18 million pounds ($22.3 million) or 10% of their annual global turnover." That's a pretty penny!
❌ More X
Musk-owned X's content moderation shift complicates efforts to win back brands
Following X’s April change to its content moderation policy, now called "Freedom of Speech, Not Reach,” it seems advertisers have been struggling to see the value in spending with X. Before the newest update to X’s content moderation policy, tweets that violated policies were removed. Now that content will be suppressed rather than deleted. As reported on Reuters, CEO Linda Yaccarino cites a 60% decline in advertising revenue.
X Corp. sues California AG over content moderation law
In September 2022, California Governor Gavin Newsom signed bill AB 587 into law, requiring social media companies like X to publish their terms of service and submit regular reports to the state attorney general outlining their content moderation policies and practices. CNN reports that earlier this month, Elon Musk's X Corp. filed a lawsuit claiming these requirements would force the platform to stifle or censor First Amendment-protected speech.
➕ Content Moderation Industry Gains Momentum
Roblox acquires Speechly
Speechly announced it’s been acquired by Roblox last week. The speech-to-text transcription company hopes to improve content moderation capabilities in the popular online game, which saw 65.5 million daily active users in Q2 2023.
Spectrum Labs joins ActiveFence
Earlier this month, ActiveFence acquired Spectrum Labs, adding Spectrum's content moderation capabilities and customers to its roster. TechCrunch notes the timeliness of the acquisition, as more content moderation regulations and bills are being introduced in legislative bodies across the globe.
📲 Niantic Boosts Trust & Safety Transparency
Just yesterday, Niantic announced the launch of its Niantic Safety Center, "a hub where you can find information and resources on building a safe and enjoyable Niantic experience." On its blog, Niantic lays out its approach to trust & safety, from policy development and partnerships to testing and integrating emerging tech tools:
"With regards to emerging technology and innovation, we’re considering safety from the beginning: we’re actively red-teaming several generative AI-driven experiences and features for integration into our products. By evaluating their performance from a safety perspective as part of launch readiness, we hope to ensure we’re thinking and acting responsibly and incorporating diverse perspectives into feature development in these areas. We’re also supporting upcoming features that will create new ways for players to join and celebrate their local community."
🤖 Future of Content Moderation
Is ChatGPT coming for human moderator jobs?
In response to OpenAI’s blog post on AI’s role in content moderation, Technopedia asks “Will ChatGPT Mean An End to Human Moderation Jobs?” Clearly there is a need for a shift in approach to content moderation — it is time consuming and often traumatizing for moderators, especially those working on behalf of social media sites. While both humans and AI will inevitably make mistakes, the potential boost to efficiency of moderation that AI tools create can’t be ignored. Perhaps human intelligence and artificial intelligence can find synergies in the arena of content moderation.
Check out this recent blog post by Rachel M., our Senior Machine Learning Engineer, on ethical frameworks and research to consider when adopting or developing AI tools to bolster your content moderation and trust & safety tool belt.