Welcome back to Trust & Safety Lately, your monthly recap of all things trust & safety! As always, we're sharing the latest updates in online safety, content moderation, regulation, and reports.
In this issue of Trust & Safety Lately we're covering:
🅰️ Activision x Modulate case study
🛠️ Content Policies 101
🌐 The State of Online Hate and Harassment in 2024
🎮 The PC & Console Gaming Report 2024
💬 Let's Talk TikTok
🗣️ Is Moderation Considered Free Speech?
⏳ Stanford Internet Observatory Closes
🏠 Sounding off on Airbnb's Extremism Policies
🏥 Surgeon General Calls for Warning Labels
🤔 Superintelligence but... safe!
Let's get started! 🔸
Data and Reports
Modulate recently released a case study detailing some of the positive impact its ToxMod tech has had in Call of Duty. Read the full case study here.
Some highlights:
🖐🏽 ToxMod helped to improve player participation and engagement by reducing toxicity exposure. During the test period with proactive voice moderation active, Call of Duty Modern Warfare II saw:
+3.9% more new players
+2.4% more players who were previously inactive for 21-59 days
+2.8% more players who were previously inactive for 60+ days
⤵️ Proactive voice moderation helped reduce repeat violations of the Code of Conduct by 8%.
🛡️ ToxMod helped moderators take action against up to 2 million accounts for disruptive voice chat, based on the Call of Duty Code of Conduct.
🛠️ Content Policies 101
The Integrity Institute released its guide to creating effective and actionable content policies for your platform. Whether you're a startup or a big tech org, this white paper outlines a step-by-step approach:
Research and Development
Getting cross-functional buy-in
Training content moderators
Training machine-learning (ML) models
Launching your policy
Quality assurance and post-launch reporting
🌐 The State of Online Hate and Harassment in 2024
The ADL published its annual survey of online hate and harassment, conducted between February and March 2023, which asked respondents about their experiences in the previous 12 months.
22% of Americans experienced severe harassment, defined in this survey as physical threats, sustained harassment, stalking, sexual harassment, doxing, and swatting. People with disabilities, LGBTQ+ people, and Jewish people saw increased rates of harassment online.
One suggestion from the ADL: Invest in trust & safety! Read the full report here.
🎮 The PC & Console Gaming Report 2024
This free report from NewZoo gives a glimpse into the PC and console gaming market. Download the report here.
Unsurprisingly, NewZoo found that average playtime of PC and console games has dropped 26% since its peak in 2021 (what happened in 2020-21 that gave people so much time to play games?).
A small number of titles captured a majority of new game revenue.
Overall, the PC and console gaming market is growing -- NewZoo reports a 3.1% YoY growth in 2023.
Industry News
💬 Let's Talk TikTok
TikTok alleges that the US Congress didn't even bother to look at its extensive documentation on safety risk mitigation before lawmakers charged ahead with a new law, the Protecting Americans from Foreign Adversary Controlled Applications Act, that could outright ban the massive social platform in the US unless it divests from its parent company, ByteDance. Now TikTok (alongside some influencers) is pointing to the First Amendment's free speech protections in a suit filed in the DC Circuit Court in late June. The Verge has the scoop.
Meanwhile, TikTok is expanding its Trust & Safety Team with new job openings. In mid-June, the Fortune Data Sheet newsletter pointed to new career opportunities, including Policy Managers who would be in charge of reviewing some of the platform's most shocking content. That specific job posting is no longer active, but there are plenty of other T&S-related positions available.
🗣️ Is Moderation Considered Free Speech?
Back in February's issue of Trust & Safety Lately, we shared a recap ("SCOTUS on Social Media Content Moderation") of two US state-level court cases, from Florida and Texas, that challenge social media platforms' ability to remove dis- and misinformation, claiming such removals violate free speech. The US Supreme Court has now sent these cases back to the lower courts on a technicality, but six justices effectively published an opinion stating that content moderation decisions are themselves "speech" and therefore protected by the First Amendment. This is good news for platforms with respect to anti-content-moderation laws, but potentially worrying with respect to Section 230.
More on the Supreme Court's latest position on Florida and Texas disputes from Reuters.
⏳ Stanford Internet Observatory Closes
Following many months of conservative backlash against research produced by the Stanford Internet Observatory, the SIO appears to be winding down its operations. Platformer points to three lawsuits by conservative orgs alleging illegal collusion between the Observatory and the federal government to limit free speech. The Verge also covers the story.
Relatedly, the US Supreme Court struck down a conservative-led suit against the Biden administration in a 6-3 vote. The suit alleged that Biden's administration caused harm by urging social media platforms to remove disinformation. From Variety:
"In 2022, Republican attorneys general in Missouri and Louisiana together with five social media users sued over the White House’s outreach to social media platforms requesting the removal of certain disinformation. They alleged that Biden administration officials “coerced” tech platforms to remove content reflecting viewpoints the administration disagreed with."
🏠 Sounding off on Airbnb's Extremism Policies
A whistleblower complaint by former contractor Jess Hernandez came to light last month, alleging the short-term rental property company dissolved much of its team responsible for detecting and removing extremists, hate groups, and organized crime from the platform. As NBC News reports:
"It alleges that the San Francisco-based short-term lodging company shifted its policies in 2023, away from the proactive safety approach co-founder and CEO Brian Chesky said in 2021 should serve as “a role model” for other tech companies, and toward one guided by internal pressure to avoid negative press and the appearance of unequal enforcement against conservatives."
🏥 Surgeon General Calls for Warning Labels
The New York Times published an opinion piece by US Surgeon General Vivek H. Murthy, who is calling for a warning label on all social platforms. Murthy points to social media's negative impact on adolescent mental health. He acknowledges the limits of a simple warning label, calling on legislators, social media companies, schools, and parents to join the movement to protect young people.
But what would a warning label even look like? Where should it appear? How often? The Atlantic unpacks some of these questions.
📹 Reddit and Tumblr: Your favorite video sharing platforms
In Ireland, Reddit and Tumblr will be regulated by Coimisiún na Meán (the Media Commission), the country's highest court declared in June. The Irish Times reports that, in separate filings, both US-based companies argued their platforms should not have been categorized as video-sharing platform services and therefore should not be regulated by An Coimisiún. In both cases, the companies' arguments were shut down.
🤔 Superintelligence but... safe!
Time magazine and others reported on OpenAI co-founder and former chief scientist Ilya Sutskever launching Safe Superintelligence Inc. In an exclusive Bloomberg interview, Sutskever (kind of) clarifies what he means by "safe."
Sutskever is vague about this at the moment, though he does suggest that the new venture will try to achieve safety with engineering breakthroughs baked into the AI system, as opposed to relying on guardrails applied to the technology on the fly. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” he says.
The Bloomberg interview goes on...
“At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”
Industry Events
VentureBeat Transform
July 9-11, 2024
San Francisco, CA
Putting AI to Work at Scale! An exclusive conference for enterprise leaders. Practical GenAI case studies and application stories directly from industry leaders.
TrustCon 2024
Hosted by the Trust & Safety Professionals Association, this year's TrustCon will be here before you know it. You can register now -- keep an eye out for the full lineup of speakers and talks coming later this year.
Devcom
Devcom is the official game developer event of gamescom and Europe's biggest game developer community-driven industry conference. Together, both events represent one of the largest gaming industry conferences in Europe.