Welcome or welcome back to Trust & Safety Lately, your monthly recap of all things trust & safety with an eye on the gaming industry. As we head into the US Presidential Election, Wired looks at the role that social media has played in partisan discourse since the last election four years ago -- not much has changed for the better when it comes to combating misinformation. In fact, NPR reports on a network of X accounts that appears to be campaigning to amplify distrust in the election process.
There's sure to be more reporting on social media and the election in the coming month, but for now we've got:
☢️ Anti-toxicity in Call of Duty: Black Ops 6
🔗 LinkedIn's Transparency Report
💃🏽 ByteDance Turns to AI for Moderation
💸 US Treasury Department (also) Turns to AI
💬 Ofcom: "Enough talk!"
🌏 Google x Roblox for Child Online Safety
☘️ Ireland Adopts Online Safety Code
✅ X Pushes Political Content
Let's get started! 🔸
Data & Reports
☢️ Anti-toxicity in Call of Duty: Black Ops 6
Modulate partnered with Activision in 2023 to bring ToxMod's proactive voice chat moderation into Call of Duty games. Ahead of the official launch of Black Ops 6, the Disruptive Behavior team shared an update on progress in anti-cheat and anti-toxicity:
Since rolling out an improved voice chat enforcement in June 2024, Call of Duty has seen a combined 67% reduction in repeat offenders of voice-chat based offenses in Modern Warfare III and Call of Duty: Warzone. In July 2024, 80% of players that were issued a voice chat enforcement since launch did not re-offend. Exposure to disruptive voice chat continues to fall, dropping by 43% since January 2024.
At launch, Black Ops 6 will expand its voice moderation to French and German, in addition to English, Spanish, and Portuguese.
🔗 LinkedIn's Transparency Report
Even professional networking platforms like LinkedIn have to handle problematic content. With a mix of automated and manual moderation, LinkedIn uses both proactive and reactive tactics to curb the spread of UGC that violates its terms and conditions. Actioned content ranged from hate speech to sexual content, misinformation, and more.
From January to the end of June 2024, LinkedIn reports:
894,433 user reports relating to posted content.
Top report categories for content are Misinformation, Hateful Speech, and Fake Accounts.
The median response time from report to moderation action was 12 minutes.
An estimated 481,674 user reports on content, job postings, and ads were handled by LinkedIn's automated system.
💃🏽 ByteDance Turns to AI for Moderation
TikTok parent ByteDance is leaning further into AI for content moderation, cutting human moderation roles in the process.
“We’re making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation,” a TikTok spokesperson told TechCrunch. “We expect to invest $2 billion globally in trust and safety in 2024 alone and are continuing to improve the efficacy of our efforts, with 80% of violative content now removed by automated technologies.”
Most of the affected content moderation roles were held by staff in Malaysia, where Reuters also notes an uptick earlier in 2024 in requests for harmful content to be removed from online platforms.
💸 US Treasury Department (also) Turns to AI
In a mid-October press release, the Treasury announced that it recovered more than $4 billion in fraud and improper payments in the last fiscal year thanks to machine learning. The department is using AI models to flag high-risk transactions that warrant further investigation and to find other efficiencies, including detecting fraudulent checks.
💬 Ofcom: "Enough talk!"
The UK's Ofcom released a reminder (really, a warning) that new guidance and requirements for online platforms to take action are coming in December. Companies will have three months from December to complete a risk assessment. In the coming year, platforms that serve UK audiences will be expected to provide details and data on their content moderation, age verification, and child safety risks.
"The time for talk is over. From December, tech firms will be legally required to start taking action, meaning 2025 will be a pivotal year in creating a safer life online."
- Dame Melanie Dawes, Ofcom’s Chief Executive
🌏 Google x Roblox for Child Online Safety
Google and Roblox have teamed up to launch a new child safety campaign called "Be Internet Awesome World." Be Internet Awesome is a long-running Google effort to teach netizens internet safety basics: how to identify scams, how to create a strong password, and more. The new Roblox world puts a game-based spin on these safety and wellbeing lessons, meeting kids and teens where they already are, on a platform they're familiar with -- neat!
☘️ Ireland Adopts Online Safety Code
From TechCrunch: Irish media watchdog Coimisiún na Meán has published a set of requirements for online platforms operating out of the country, affecting the likes of TikTok, Instagram, YouTube, and Facebook, to name a few. Platforms will need to publish terms of service that explicitly ban users from uploading CSAM, terrorist or extremist content, and content promoting self-harm or bullying.
Spokesman Adam Hurley notes that the new Code is meant to better define harmful -- not just illegal -- content, something the EU's Digital Services Act doesn't quite dig into. From the TechCrunch story:
“One of the thoughts behind the Online Safety Code is dealing with content which is more harmful rather than illegal,” Hurley told us, adding: “What we’ve done is broaden the scope to harmful content that they must prohibit uploading of and then act on reports against those terms and conditions.”
✅ X Pushes Political Content
Looking for a new fall recipe? Hoping to find cute pictures of cows, or maybe just catch up on the latest news with the Boston Celtics? X makes it more difficult to find content you're interested in and instead pushes political content right onto your feed, according to a recent investigation by The Wall Street Journal.
Investigators found that for new X accounts that indicated interest only in non-political topics, the "For You" feed still contained an overwhelming amount of partisan content. This contradicts Elon Musk's previous claims that the platform is politically neutral.
Industry Events
Marketplace Risk Global Summit
November 12-14, 2024
London, UK
G-STAR 2024
November 14-17, 2024
Busan, South Korea
Trust & Safety Festival
November 19-20, 2024
Amsterdam, The Netherlands
Family Online Safety Institute Annual Conference 2024
December 9, 2024
Washington DC