Hefty fees for non-compliant companies in the UK, X fights to retain its ad revenue & a burgeoning content moderation industry.
TRUST & SAFETY Lately

Welcome to the latest issue of Trust & Safety Lately, your source for all things T&S: acquisitions, new regulations (and the industry responses that follow), and more.

 

Here's what we've got for you this month:

  • 💂 UK Online Safety Bill passes
  • ➕ Acquisitions of note
  • ❌ More moderation movement from X
  • 📲 Niantic launches their T&S hub
  • 🤖 Exploring the future of human and AI moderation
  • Upcoming events -- see you at GamesBeat Next! 

Let's get started! 🔸


Industry News 

 

💂 UK's Online Safety Bill finally passed by Parliament

More than four years after the bill was first introduced, the UK has passed its Online Safety Bill. Recent changes to the bill will force social media companies to create and execute plans to remove content deemed harmful to children, or to prevent that content from being published altogether. From Reuters: “If companies do not comply, media regulator Ofcom will be able to issue fines of up to 18 million pounds ($22.3 million) or 10% of their annual global turnover.” That’s a pretty penny!

 

❌ More X

Musk-owned X's content moderation shift complicates efforts to win back brands

Following X’s April change to its content moderation policy, now called “Freedom of Speech, Not Reach,” advertisers have been struggling to see the value in spending with X. Before this update, tweets that violated policies were removed; now, violating content is suppressed rather than deleted. As reported by Reuters, CEO Linda Yaccarino has cited a 60% decline in advertising revenue.

 

X Corp. sues California AG over content moderation law

In September 2022, California Governor Gavin Newsom signed bill AB 587 into law, which requires social media companies like X to publish their ToS and submit regular reports to the state AG outlining their content moderation policies and practices. CNN reported earlier this month that X owner Elon Musk filed a lawsuit claiming these requirements would force the social media platform to stifle or censor First Amendment-protected speech.

 

➕ Content Moderation Industry Gains Momentum

Roblox acquires Speechly

Last week, Speechly announced that it has been acquired by Roblox. The speech-to-text transcription company hopes to improve content moderation capabilities in the popular online game, which saw 65.5 million daily active users in Q2 2023.

 

Spectrum Labs joins ActiveFence

Earlier this month, ActiveFence acquired Spectrum Labs, adding Spectrum’s content moderation capabilities and customers to its roster. TechCrunch notes the timeliness of the acquisition, as more content regulations and bills are introduced in legislative bodies across the globe.

 

📲 Niantic Boosts Trust & Safety Transparency

Just yesterday, Niantic announced the launch of its Niantic Safety Center, "a hub where you can find information and resources on building a safe and enjoyable Niantic experience." In the accompanying blog post, Niantic lays out its approach to trust & safety, from policy development and partnerships to testing and integrating emerging tech tools:

 

"With regards to emerging technology and innovation, we’re considering safety from the beginning: we’re actively red-teaming several generative AI-driven experiences and features for integration into our products. By evaluating their performance from a safety perspective as part of launch readiness, we hope to ensure we’re thinking and acting responsibly and incorporating diverse perspectives into feature development in these areas. We’re also supporting upcoming features that will create new ways for players to join and celebrate their local community."

 

🤖 Future of Content Moderation

Is ChatGPT coming for human moderator jobs? 

In response to OpenAI’s blog post on AI’s role in content moderation, Techopedia asks “Will ChatGPT Mean An End to Human Moderation Jobs?” Clearly there is a need for a shift in approach to content moderation: it is time-consuming and often traumatizing for moderators, especially those working on behalf of social media sites. While both humans and AI will inevitably make mistakes, the potential efficiency gains that AI moderation tools offer can’t be ignored. Perhaps human intelligence and artificial intelligence can find synergies in the arena of content moderation.

 

Check out this recent blog post by Rachel M., our Senior Machine Learning Engineer, on ethical frameworks and research to consider when developing or adopting AI tools to bolster your content moderation and trust & safety tool belt.


Industry Events

Backend Services Summit

October 24, 2023

Los Angeles, CA

Fancy a day in LA with the brightest folks in backend engineering? The inaugural Backend Services Summit will include a mixture of chats, panels, and roundtables – plus a fun, chill beach vibe. (Oh, and Modulate will be there!)

Register | Meet with us there


Generative AI and the Future of Speech Online

October 4-5, 2023

Virtual

Register


Trust & Safety Professional Association: APAC Summit

October 11, 2023

Singapore

Register

GamesBeat Next

October 24-25, 2023

San Francisco, CA

Register | Meet with us there


Marketplace Risk Global Summit

October 30-November 1, 2023

London, UK

Register


Modulate, 212 Elm St, Suite 300, Somerville, MA 02144, USA

Manage preferences