Trust & Safety Lately

Welcome back to Trust & Safety Lately! We're sending out this month's newsletter a little early (think of it as an early new year gift?). Speaking of the new year, Modulate CEO and Co-founder Mike Pappas shared his recap of trust & safety highlights in the gaming sector for 2023, plus some predictions for 2024. Check it out on Forbes.

 

In this issue of Trust & Safety Lately: 

  • ⚠️ Moderator Mental Health in Online Dating

  • 🤖 A New, Sweeping AI Act from the EU

  • ☁️ X, Meta, Bluesky, Oh my!

  • 🚨 Extremism Has Entered the Chat

  • 💡 Getting Inspired with The Digital Wellness Lab

  • 📹 Free Webinar on Navigating Child Safety in Gaming Communities

There's a lot to get through so let's get started! 🔸


Data and Reports

 

⚠️ Moderator Mental Health in Online Dating

Dating apps like Grindr and Bumble are environments ripe for abuse, from sexual harassment to child exploitation and blackmail. While companies like Match Group, which owns Hinge and Tinder, have robust content moderation and trust & safety teams in place, the mental toll on the moderators who review and decipher user reports is distressing, to say the least. 

 

In late November, The Bureau of Investigative Journalism published a report on the mental and emotional toll placed on content moderators who work to keep dating app platforms safe. The report outlines moderators' accounts of inadequate training, a lack of mental health support (and in some cases retribution for requesting it), chronic understaffing, and serious backlogs of user reports awaiting review. To give a sense of how grueling the work can be for moderators at Hinge: 

 

"According to an onboarding document seen by TBIJ, moderators were initially expected to process 35 profiles per hour, or a decision every 100 seconds. In July 2020, workers complained that this had been significantly increased, to 52 profiles per hour."

 

What tools and training can we give to support content moderators? We're so glad you asked! Check out the latest podcast episode of Player: Engage featuring Modulate's Chris James and Keywords Studios' Sharon Fisher as they delve into exactly that topic. 


Industry News 

 

 

🤖 A New, Sweeping AI Act from the EU

As Wired reports, the law is not the first of its kind (China passed AI regulations in August), but the EU's AI Act is much wider-ranging, even covering the collection and use of biometric data by law enforcement. Not surprisingly, the Act also includes strict transparency requirements for companies offering AI services and products. Via Wired:

 

"Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years."

 

There's a lot to unpack here and we found this write-up to be particularly helpful in breaking down this new law. 

 

☁️ X, Meta, Bluesky, Oh my!

As always, there's been a lot of movement in the social media space this month. Strap in...

 

X ended its contract with the Irish outsourcing company CPL, which had provided content monitoring services for X in France, Germany and South Korea. More on this in The Irish Times. 

 

Things are looking up (get it?) at Bluesky, as the company launches new safety tools like automated content moderation and so-called moderation lists, or ban/mute lists. TechCrunch has the rundown. 

 

In early December, CNN reported on Meta's oversight board launching an internal review of the decision to remove two videos related to the Israel-Hamas War. Just under two weeks later, the oversight board ruled that Meta should reinstate both posts. So what happened inside Meta's content moderation system in the first place? As The Associated Press reports: 

 

"In a briefing on the cases, the board said Meta confirmed it had temporarily lowered thresholds for automated tools to detect and remove potentially violating content."

 

As we head into the 2024 elections, non-profit media watchdog Free Press found that 17 trust & safety or content moderation policies had been rolled back, or in some cases eliminated, at Alphabet, Meta, and X. Free Press also points to over 40,000 layoffs that directly impact those companies' ability to prevent the spread of misinformation on their platforms. The Guardian has more. 

 

 

🚨 Extremism Has Entered the Chat

The Washington Post documents Discord's "problematic pockets" of bad actors, pointing to a handful of recent scandals involving planned violence, extremism, and other illegal activity, including the 2022-2023 leak of confidential documents by former US airman Jack Teixeira. While Discord data is not end-to-end encrypted, and so could theoretically be scanned for illegal content or content that violates its terms of service, the company generally opts not to do so, leaning on its privacy-first approach. That said, when Discord does become aware of harmful content, it tends to take quick action.

 

Only two days after The Post published its report, ABC News published a story on Discord tipping off law enforcement to potential planned mass violence, leading to an arrest. 

 

Across the globe, former New Zealand Prime Minister Jacinda Ardern chats with Axios in an exclusive interview about the Christchurch Call to Action, a joint initiative launched by New Zealand and France in response to the 2019 Christchurch mass killing. Governments and companies alike can join the Christchurch Call; notably, OpenAI and Anthropic have committed to the cause. 

 

💡 Getting Inspired with The Digital Wellness Lab

The Digital Wellness Lab at Boston Children’s Hospital announced the early signatories of the Inspired Internet Pledge, which include Pinterest, TikTok, and Modulate. The Pledge is a commitment by tech and media companies to make the internet a safer and healthier place for everyone, especially young people. Read the full announcement here. 


Industry Events

Webinar Recording

Play Safe, Play Smart: Navigating Child Safety in Gaming Communities

Watch the Recording Here

Modulate hosted a special webinar this month on keeping children safe in games. Alongside Lauren Tharp from The Tech Coalition, CEO & Co-founder Mike Pappas discussed the nuances of child safety, including gender differences that shape content moderation and trust & safety policies, and more. Tune into the recording! 

UK Ofcom: Online Safety webinar series

Part 2 - An introduction to illegal content risk assessments

January 16, 5-6 am EST

Register Here

 

UK Ofcom: Online Safety webinar series

Part 3 - An introduction to Ofcom’s draft Codes of Practice for illegal harm

January 18, 6-7 am EST

Register Here


Modulate, 212 Elm St, Suite 300, Somerville, MA 02144, USA

Manage preferences