
Meta, TikTok and others agree to teen safety ratings

Attendees visit the Meta booth at the Game Developers Conference 2023 in San Francisco on March 22, 2023.
(Jeff Chiu / Associated Press)


Meta, TikTok and Snap will be rated on their teen safety efforts amid rising concern about whether the world’s largest social media platforms are doing enough to protect the mental health of young people.

The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources.

TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.


“These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments,” Antigone Davis, vice president and global head of safety at Meta, said in a statement.

TikTok and Snap executives also expressed their commitment to online safety.

Parents, lawmakers and advocacy groups have criticized online platforms for years over whether they adequately protect the safety of billions of users. Despite having rules barring certain kinds of posts, the platforms have struggled to moderate harmful content about self-harm, eating disorders, drugs and more.

More than 60 families are suing Snap, arguing the Santa Monica-based company is responsible for drug sales to teens that are facilitated through its app. Snap denies the allegations.

Meanwhile, technology continues to play a bigger role in people’s lives.

The rise of artificial intelligence-powered chatbots has heightened mental health concerns as some teens are turning to technology for companionship. Companies have also faced a flurry of lawsuits over online safety.


This week, a closely watched trial kicked off in Los Angeles over whether tech companies such as Instagram and YouTube can be held liable for allegedly promoting a harmful product and addicting users to their platforms.

TikTok and Snap, the parent company of disappearing-messages app Snapchat, settled for undisclosed sums to avoid the trial.

In opening statements, one of the lawyers representing the California woman who alleges she became addicted to YouTube and Instagram as a child said the products were designed to be addictive.

Tech companies have denied the allegations made in the lawsuit and say internal documents are being twisted to portray them as villainous when there are other factors, such as childhood trauma, leading to the mental health issues of some of their users.

Meta Chief Executive Mark Zuckerberg is expected to testify at the Los Angeles trial. Another trial over a lawsuit that alleges Meta failed to protect children from sexual exploitation and violated New Mexico’s consumer protection laws also kicked off this week.

Jury selection was scheduled to begin in L.A. County Superior Court on Tuesday in the first of a series of closely watched lawsuits seeking to prove social apps inflict harm on children, but a last-minute agreement by TikTok left uncertainty around how the case would proceed.

The new ratings were announced on Tuesday, Safer Internet Day, a global campaign that promotes using technology responsibly, especially among young people. Companies such as Google marked the day by outlining some of their safety work, including parental controls that set time limits for scrolling through short videos.


The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they’re not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven’t been completed yet.

“By creating a shared framework for accountability, S.O.S. helps move us toward online spaces that better support mental health and well-being,” Kenneth Cole, the fashion designer who founded the Mental Health Coalition, said in a statement.

A website for S.O.S. states that technology companies didn’t influence the development of the new standards and they aren’t funding the project. The Mental Health Coalition, though, has teamed up with Meta in the past on other initiatives. Meta and Google are also listed as “creative partners” on the coalition’s website.

The coalition, which is based in New York, didn’t immediately respond to an email asking about its funding.

Companies already publish their online rules and data on content moderation. Those that choose to participate in the project voluntarily hand over documents on their policies, tools and product features.

