As Wired reports, this law is not the first of its kind, since China passed AI regulations in August, but the EU's AI Act is much wider-ranging, even covering the collection and use of biometric data by law enforcement. Not surprisingly, the Act also includes strict transparency requirements for companies offering AI services and products. Via Wired:
"Companies that don’t comply with the rules can be fined up to 7 percent of their global turnover. The bans on prohibited AI will take effect in six months, the transparency requirements in 12 months, and the full set of rules in around two years."
There's a lot to unpack here and we found this write-up to be particularly helpful in breaking down this new law.
☁️ X, Meta, Bluesky, Oh my!
As always, there's been a lot of movement in the social media space this month. Strap in...
X ended its contract with the Irish outsourcing company CPL, which had provided content monitoring services for X in France, Germany and South Korea. More on this in The Irish Times.
Things are looking up (get it?) at Bluesky, as the company launches new safety tools, including automated content moderation and so-called moderation lists, which are essentially shared ban/mute lists. TechCrunch has the rundown.
In early December, CNN reported on Meta's oversight board launching an internal review of the decision to remove two videos related to the Israel-Hamas War. Just under two weeks later, the oversight board ruled that Meta should reinstate both posts. So what happened inside Meta's content moderation system in the first place? As The Associated Press reports:
"In a briefing on the cases, the board said Meta confirmed it had temporarily lowered thresholds for automated tools to detect and remove potentially violating content."
As we head into the 2024 elections, the non-profit media watchdog Free Press found that 17 trust and safety or content moderation policies had been rolled back, or in some cases eliminated, at Alphabet, Meta, and X. Free Press also points to over 40,000 layoffs that directly impact those companies' ability to prevent the spread of misinformation on their platforms. The Guardian has more.
🚨 Extremism Has Entered the Chat
The Washington Post documents Discord's "problematic pockets" of bad actors, pointing to a handful of recent scandals involving planned violence, extremism, and other illegal activities, including the 2022-2023 leak of confidential documents by US airman Jack Teixeira. While Discord data is not end-to-end encrypted and so could theoretically be scanned for illegal content or content that violates its terms of service, the company generally opts not to do so, leaning on its privacy-first approach. That said, when Discord does become aware of harmful content, it tends to act quickly.
Only two days after The Post published its report, ABC News published a story on Discord tipping off law enforcement to potential planned mass violence, leading to an arrest.
Across the globe, former New Zealand Prime Minister Jacinda Ardern chats with Axios in an exclusive interview on the Christchurch Call to Action, a joint response by New Zealand and France to the 2019 Christchurch mass killing in New Zealand. Governments and companies can join the Christchurch Call; notably, OpenAI and Anthropic have committed to the cause.
💡 Getting Inspired with The Digital Wellness Lab
The Digital Wellness Lab at Boston Children’s Hospital announced the early signatories of the Inspired Internet Pledge, which include Pinterest, TikTok, and Modulate. The Pledge is a commitment by tech and media companies to make the internet a safer and healthier place for everyone, especially young people. Read the full announcement here.