Free webinar on the ins and outs of finding and implementing AI moderation
TRUST & SAFETY Lately

Welcome back to Trust & Safety Lately, your monthly recap of all things trust and safety with a lens toward the gaming industry! It's hard to believe we haven't sent a newsletter since last year 😏

A lot's happened in the first month of 2024 already, but before we dive in, check out this article from Teleperformance on the World Economic Forum's website about tackling toxicity in gaming communities and opportunities for change.

 

In this month's issue: 

  • 🚫 Anti-Toxicity Efforts in Call of Duty Are Paying Off
  • ✂️ Cutting Corners in Trust & Safety
  • ⚠️ New Chinese Gamer Safety Laws
  • ⚖️ Content Moderation on the SCOTUS Stand
  • ❌ All About X
  • 🌟 Criminalizing Cyberbullying in Brazil
  • 📹 Free Webinar on Implementing AI Moderation

Let's get into it! 🔸


Data and Reports

 

🚫 Making Progress in the Fight Against Toxicity in Call of Duty

Activision released its anti-toxicity progress report this month, outlining changes to the Call of Duty community Code of Conduct and, perhaps more importantly, the results of its voice chat moderation efforts. Some standout stats:

"~50% reduction in players exposed to severe instances of disruptive voice chat since Modern Warfare III’s launch."

"Examining month-over-month data, Call of Duty saw an 8% reduction of repeat offenders since the rollout of in-game voice chat moderation."

As it turns out, taking a proactive approach to content moderation can have a big impact on user retention and player experience -- who knew! 

✂️ When Cutting Trust & Safety Corners Leads to Cut Revenue 

It's no secret that social media platforms rely on advertiser dollars to generate profit. What happens when advertisers flee your site, unwilling to take on the risk of their ads inadvertently showing up next to CSAM or extremist misinformation?

Earlier this month, TechPolicy.press recapped 2023 and the huge fines social platforms are being hit with for failing to handle misinformation on their sites:

"On X, advertising revenue has decreased 55% or more since it was acquired and its trust and safety experts were laid off." 

"42 State Attorneys General allege that Meta violated the Children’s Online Privacy Protection Act, and traumatized victims of the mass shooting in Buffalo last May are suing YouTube and Reddit for radicalizing the shooter."

These lawsuits and fines leave us wondering: is it really worth it to go without a trust & safety function? We think you already know the answer. 


Industry News 

 

⚠️ New Gamer Safety Laws in China Lead to Stock Scramble

Via Reuters: Just before the new year, Chinese regulators proposed new rules banning certain in-game incentives, like rewards for logging in daily or benefits for spending money on consecutive days. Following the announcement, both Reuters and Forbes reported on Tencent and NetEase share values taking a nosedive.

 

In similar player-incentive news: one week later, Nexon was hit with a historic $8.9 million fine by the Korea Fair Trade Commission for failing to notify MapleStory players of changes to the odds of winning in-game loot box prizes. Read more via GamesIndustry.biz.

 

⚖️ Content Moderation on the SCOTUS Stand

 

Coming soon: the United States Supreme Court will hear oral arguments in NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC. At issue are Texas and Florida laws that restrict a platform's ability to remove harmful or inaccurate content. In other words, should social platforms be able to remove content, or should they be forced to keep it accessible even if it may be incorrect? Stay tuned for follow-up coming out of the planned February 26 SCOTUS session. Axios reports.

 

From Mike Pappas, Modulate CEO & Co-founder:

 

“The Florida and Texas bills might be better thought of as anti-content-moderation bills, so it will be very interesting to see takeaways here -- this decision basically determines whether states have the right to *ban* platforms from moderating things.”

 

❌ All About X

 

New year, same old story (maybe): in early January, AP News reported on the Australian eSafety Commission's warning that X has cut 30% of its trust and safety staff, including a sizable reduction in safety engineers, from 279 globally to just 55. Then, just last week, Elon Musk announced X's commitment to hiring 100 moderators to staff the company's new Austin-based content moderation center. These new staffers will focus on detecting and preventing the spread of CSAM on the platform. The Verge has more.


Meanwhile, X's former Head of Trust & Safety has moved on to greener pastures: Ella Irwin is heading up trust and safety efforts at Stability AI, an open-source generative AI company headquartered in London. NBC News reports.

 

🌟 Cyberbullying Criminalized in Brazil

 

President Luiz Inácio Lula da Silva signed into law a new bill criminalizing cyberbullying, punishable by fines or two to four years in prison. The law defines cyberbullying as any form of systematic harassment carried out via digital means, including in games, on social media, and through apps. This is certainly a different approach to building online safety: penalize the individual offender rather than hold the platform accountable for failing to ensure a safe environment. eSports.gg has more.


Industry Events


Modulate is heading back to the annual Game Developers Conference this year! We've got two talk sessions lined up about emerging tech and how data can influence the way you approach community safety -- won't you stop by? 

Using AI to Create Safer Online Spaces

GDC Core Concepts Panel

 

Learn about ways AI can be used in safety, with specific insights on: how humans and AI work together; the biggest safety risks from AI; the most exciting ways AI can improve online safety; and what it takes to create inclusive spaces in today's complex and divided world.

Read More

Voice Chat Unlocked: Novel Insights from an Untapped Source

GDC Core Concepts Panel

 

Gain exclusive insights from extensive research and case studies conducted by Modulate together with its partners and customers. Discover actionable strategies for nudging players towards pro-social behaviors and creating positive and safe voice chat environments, backed by research and real-world experiments.

Read More

The Internet We Deserve with Larry Lessig


On February 6 at 2 p.m., hear from Larry Lessig, Roy L. Furman Professor of Law and Leadership at Harvard Law School, founder of Stanford Law School's Center for Internet and Society and of Equal Citizens, and a founding board member of Creative Commons. This hybrid lecture series explores an aspect of the Internet that provides fresh perspective and much-needed context: its history.

Register Here

How AI-driven Content Moderation is Improving Player Experiences


On February 21 at 2 p.m. ET, join experts from Modulate and Schell Games for a free webinar with GameDeveloper.com to learn the ins and outs of finding the right content moderation platform to protect your users -- including the technical challenges of implementing AI-driven moderation in a VR game meant for younger audiences.

Register Here

Modulate, 212 Elm St, Suite 300, Somerville, MA 02144, USA
