Opening dialogue on hate and extremism in gaming
TRUST & SAFETY Lately

Welcome back to Trust & Safety Lately, your monthly recap of all things trust & safety!

 

Before we dive into April's T&S news: our friends at Take This Inc. recently published a peer-reviewed research paper, From Avoidance to Action: A Call for Open Dialogue on Hate, Harassment, and Extremism in the Gaming Industry, which asked industry professionals about the blockers and solutions when it comes to addressing extremism and other bad behavior in and around gaming.

 

On to the latest issue of Trust & Safety Lately -- we're covering:

  • 🔧 Fixing the Online Child Exploitation Reporting System
  • 👻 Snapchat Joins DHS's "Know2Protect" Program
  • 💕 2024 Norton Cyber Safety Insights Report – Online Dating
  • 🔍 Déjà vu? US Privacy Reform (again)
  • 📜 Senate KOSA vs. House KOSA
  • 🛡️ Ofcom Explores Using AI to Protect Young Users
  • 🎵 Spotify's Approach to Trust & Safety
  • 🤖 AI for Content Moderation
  • 🎮 Blocking emotes in Fortnite = W?

Let's get into it! 🔸


Data and Reports

 

🔧 Fixing the Online Child Exploitation Reporting System

The Stanford Internet Observatory interviewed 66 respondents across industry, law enforcement, and civil society on the effectiveness (or lack thereof) of the current CSAM reporting mechanisms that notify the National Center for Missing and Exploited Children (NCMEC) of potential child exploitation online.

 

  • Unsurprisingly, law enforcement officials struggle with the sheer volume of reports received through NCMEC's CyberTipLine, as well as with triaging inbound reports.
  • Report quality is often lacking.
  • Legal restrictions on machine learning-assisted triage of incoming reports, scant funding, and staffing challenges make improvements to NCMEC's CyberTipLine process slow.
  • NCMEC is subject to Fourth Amendment rules, preventing the organization from concretely listing best practices.

The report includes several suggestions for how to improve the NCMEC reporting process. Read the full report here. 

👻 Snapchat Joins DHS's "Know2Protect" Program

The Department of Homeland Security announced the launch of a new awareness and education initiative aimed at preventing the sexual exploitation of children online. 

 

Among the founding partners is Snap Inc., which also outlined the results of a survey of over 1,000 US-based teens and young adults in its announcement:

 

"...more than two-thirds (68%) reporting that they had shared intimate imagery online or experienced “grooming” or “catfishing” behaviors."

 

"Among those who shared intimate imagery, or experienced grooming or catfishing behaviors, nine in 10 (90%) said the other person lied about their identity."

 

"Males were more susceptible to being sextorted than females (51% vs. 42%)"

 

"...a noteworthy percentage of teens and young adults (41%) who experienced one of these three risks kept it to themselves. Just 37% reported grooming to the online platform, law enforcement, and/or a hotline."

💕 2024 Norton Cyber Safety Insights Report – Online Dating

Norton released a survey of over 13,000 adults in 13 countries, uncovering insights into online dating safety, scams, and harassment. 

  • One quarter of survey respondents had been the target of an online dating scam.
  • One third had experienced catfishing.
  • A majority of respondents said they would use AI to generate conversation starters or pickup lines.

Industry News 

 

๐Ÿ” Deja vu? US Privacy Reform (again) 

Lawmakers are hopeful that online privacy reform can finally come to fruition -- this time, they say, it'll be different!

 

Up for discussion by the Subcommittee on Innovation, Data, and Commerce (IDC) is a draft version of the American Privacy Rights Act. In a mid-April hearing, legislators pointed to international pressure to enact privacy rights, especially for children, as other countries and regions like the EU have had consumer privacy protections for years now. While largely receiving bipartisan support, the current proposal is seeing initial pushback in the Senate, with Texas Republican Senator Ted Cruz saying the proposal "gives unprecedented power to the FTC to become referees of internet speech and DEI compliance."

 

More from The Verge. 

📜 Senate KOSA vs. House KOSA

Ahead of the same Congressional hearing in mid-April, Tech Policy Press shared a rundown of key differences between the two versions of KOSA under consideration in the US Senate and US House of Representatives:

  • The House version mirrors the EU Digital Services Act's approach to tiered enforcement, setting different standards for companies depending on revenue and user base size.
  • The House version also more precisely defines the clinical term "compulsive usage," referring to social media platform design tactics that encourage repetitive behaviors resulting in mental distress.
  • The Senate version's language is intentionally broader, in hopes of covering anonymous messaging and video call sharing platforms.

 

๐Ÿ›ก๏ธ Ofcom Explores Using AI to Protect Young Users

Ofcom, the UK regulatory body in charge of enforcing the Online Safety Act, announced it will launch research into how AI tools are being used to mitigate harm against underage internet users. Ofcom will likely release a report outlining the state of the AI content moderation space, along with recommendations for industry verticals.

 

Read more on TechCrunch.

 

🎵 Spotify's Approach to Trust & Safety

 

Sarah Hoyle, Head of Trust & Safety at Spotify, shares a status update on safety features and initiatives:

  • Search warnings and in-app messaging for users searching for content related to suicide, self-harm, and disordered eating.
  • Users can skip music tagged by creators or rights holders as “explicit.”
  • Partnerships with Ditch the Label and the Jed Foundation, plus the establishment of an internal Safety Leadership group and a Safety Advisory Council.

 

🤖 AI for Content Moderation

Former Twitter trust & safety exec Alex Popken chatted with ABC News about the future of content moderation using AI. In the exclusive interview, Popken shares a familiar-sounding approach:

 

"Effective content moderation is a combination of humans and machines. AI, which has been used in moderation for years, solves for scale. And so you have machine learning models that are trained on different policies and can detect content."

 

Meanwhile, Roblox shares an update on its multimodal approach to player safety, which addresses voice, text, and 3D avatars to moderate content at scale. Another Roblox blog post distinguishes between safety and civility and announces the launch of the company's new civility microsite.

 

🎮 Blocking emotes in Fortnite = W?

IGN reports on Fortnite publisher Epic Games allowing players to block certain in-game emotes that are being used in less-than-friendly ways. 


Industry Events


GamesBeat Summit 2024

May 20-21, 2024

Los Angeles, CA

 

GamesBeat Summit brings together leaders across the spectrum of the video game industry ecosystem including entrepreneurs, talent, anchor corporations, support organizations, investment capital, and innovation enablers. Join Modulate for a panel talk with industry experts offering insights into content moderation, using AI to bolster safety strategies, and more! 

Register Here

TSPA Summit | EMEA

May 17, 2024

Dublin, Ireland

 

This year's regional TSPA Summit in Dublin focuses on "The Power of Collaboration." Join a panel discussion including Modulate CEO & Co-founder Mike Pappas alongside Farah Lalani of Teleperformance and Leslie Heryford of EA for a roundtable talk on moderation practices to create safer gaming environments. 

Register Here

TrustCon 2024

July 22-24, 2024

San Francisco, CA

 

Hosted by the Trust & Safety Professionals Association, this year's TrustCon will be here before you know it. You can register now -- keep an eye out for the full lineup of speakers and talks coming later this year. 

Register Here

Trust & Safety Research Conference

September 26-27, 2024

Stanford, CA

 

Just announced: the third annual Trust & Safety Research Conference is back! The conference brings together trust and safety researchers and practitioners from academia, industry, civil society, and government. Submit proposals by April 30; registration opens in June 2024.

Read More


Modulate, 212 Elm St, Suite 300, Somerville, MA 02144, USA

Manage preferences