
YouTube will use spotty AI to keep targeted ads off videos for kids


YouTube will stop selling personalized ads on videos aimed at children, the video streaming giant agreed Wednesday as part of a settlement with the Federal Trade Commission. But the company’s plan relies on technology that has struggled to make nuanced decisions in the past.

The Google unit will use artificial intelligence to identify which videos are aimed at kids, then prevent those videos from being paired with targeted ads.

Politicians and consumers have heard that plan before. YouTube has used AI for years to find and take down unwanted content, including pornography, terrorist propaganda and extreme violence. Other tech companies, such as Twitter Inc. and Facebook Inc., have also said AI is the answer to their problems — issues such as online harassment and election meddling by foreign states.


Alphabet Inc.’s Google is one of the most accomplished AI companies, but with so much online content, the technology sometimes falls short, as it did when thousands of videos of the March terrorist attack on two New Zealand mosques were uploaded to YouTube.

AI isn’t the first line of defense. YouTube is telling video creators to self-report whether their content is aimed at kids. But creators rely heavily on ad revenue, so they may have little incentive to flag their clips as child-directed. Some are already describing their productions as “family-based play” or “co-play” rather than videos specifically for children. That suggests AI will play a major role in policing the new rules and flagging videos that fall into a gray zone between children’s content and everything else.

“If creators intentionally fail to properly classify their content, we will take appropriate action,” a Google spokeswoman said.

Under the settlement announced Wednesday, YouTube will pay $136 million to resolve FTC allegations that it collected children’s personal data without their parents’ consent. It will pay an additional $34 million to New York state to resolve similar allegations brought by the state’s attorney general.

YouTube relies on machine learning, a type of AI software that improves as it crunches more data and requires less hand-coding from engineers. Google is a leader in the field, but it’s unclear how well the technology will work when applied to the reams of kids content on YouTube.
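To make the idea concrete, here is a minimal sketch of how a made-for-kids classifier might work, assuming a simple text model trained on video titles. The titles, labels and library choice are invented for illustration; YouTube has not disclosed how its actual system works.

```python
# A toy made-for-kids classifier sketch (NOT YouTube's actual system).
# All titles, labels, and the model choice are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: titles labeled 1 (made for kids) or 0 (not).
titles = [
    "Learn colors with toy trucks",            # kids
    "Nursery rhymes sing-along for toddlers",  # kids
    "Surprise egg unboxing fun",               # kids
    "Quarterly earnings call highlights",      # not kids
    "Advanced woodworking: dovetail joints",   # not kids
    "Late night talk show monologue",          # not kids
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF turns each title into a word-frequency vector; logistic
# regression learns which words correlate with the kids label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

# Score a deliberately ambiguous, gray-zone title.
prob = model.predict_proba(["Family-based play with slime"])[0][1]
print(f"P(made for kids) = {prob:.2f}")
```

A production system would draw on far richer signals (thumbnails, audio, viewing patterns), but the basic pattern of learning from labeled examples is the same.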

When YouTube launched a kids-specific app in 2015, it used software to pick the right videos from the billions of clips on the main YouTube site. Less than three months after launch, child and consumer advocacy groups found inappropriate content on the app, including explicit sexual language and jokes about pedophilia.


“The AI has been improving its ability to identify content, and, although I’m sure there will be mistakes here and there, I believe it will adequately identify content intended for children,” said Melissa Hunter, founder of the Family Video Network, which has a channel on YouTube. “As scammers develop new methods to trick the AI … YouTube engineers will update the classifiers to overcome those tricks. Nothing is foolproof, but I think it is up to the task.”

Brenda Bisner, an executive at Kidoodle.TV, a rival streaming service, is less convinced. She said the U.S. government should have forced YouTube to eliminate all kids videos from its website and banned YouTube from schools.

“Anyone who makes kids content shouldn’t be on YouTube,” she added. “It’s been proven time and again that it’s not safe.”

This April, YouTube software mistook a live video of the Notre Dame cathedral fire for a clip of the 9/11 terrorist attacks. “Our systems sometimes make the wrong call,” a YouTube spokesman said at the time. YouTube’s algorithms can also be tricked by making slight tweaks to a video, such as changing the color of some pixels or flipping it on its side, especially if the content is new to the service. That’s how many of the New Zealand mosque shooting videos got through YouTube’s digital defenses.
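The weakness is easy to demonstrate with a toy version of the fingerprinting idea. The sketch below assumes a simple “average hash” over a synthetic frame (it is not YouTube’s actual system) and shows how flipping a frame yields a very different fingerprint, even though a human would call the two images identical.

```python
# Toy illustration of why small tweaks defeat naive fingerprint matching.
# This is NOT YouTube's system; it is a crude "average hash" on fake data.
import numpy as np

def average_hash(frame: np.ndarray) -> np.ndarray:
    """Downsample to an 8x8 grid and threshold by the mean: a 64-bit fingerprint."""
    h, w = frame.shape
    trimmed = frame[: h - h % 8, : w - w % 8]  # trim to a multiple of 8
    blocks = trimmed.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

rng = np.random.default_rng(0)
frame = rng.random((64, 64))        # stand-in for one video frame

original = average_hash(frame)
flipped = average_hash(np.fliplr(frame))   # mirror the frame left-right

# Hamming distance: how many of the 64 fingerprint bits differ.
distance = int((original != flipped).sum())
print(f"{distance} of {original.size} bits differ")  # typically far from zero
```

Real matching systems are more robust than this, but as the mosque shooting uploads showed, determined uploaders keep finding transformations that slip through.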

YouTube is taking other steps to try to protect children from such blunders in the future. It plans to promote the YouTube Kids app more aggressively and recently limited which channels can be part of this children’s video service.

Targeted ads sell for more than untargeted ones, so the change will cost YouTube owner Google money. But YouTube’s solution is far less expensive than other potential remedies, such as doing away with all types of ads on children’s videos.


Research firm Loup Ventures estimates YouTube’s total revenue will be $10 billion to $15 billion this year, with $500 million to $750 million of that coming from children’s media. Eliminating targeted ads on videos for kids will dent total revenue by 1% at most, said Doug Clinton, a Loup Ventures analyst.

“Bottom line: YouTube will still serve ads alongside kids content, but with less data, and the data probably doesn’t add as much of a premium to the inventory as one might think,” he said.
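Clinton’s estimate is easy to sanity-check. The back-of-envelope sketch below uses Loup Ventures’ published ranges; the 25% targeting premium is an invented assumption, since no figure was disclosed.

```python
# Back-of-envelope check of the "1% at most" estimate.
# The targeting premium (how much more a personalized ad sells for than a
# contextual one) is an ASSUMED value; Loup Ventures did not publish one.
total_revenue = (10e9, 15e9)    # Loup Ventures' YouTube revenue range
kids_revenue = (0.5e9, 0.75e9)  # Loup Ventures' kids-media revenue range
premium = 0.25                  # ASSUMPTION: targeted ads sell for 25% more

# Pair the low ends and the high ends of both ranges.
for total, kids in zip(total_revenue, kids_revenue):
    # YouTube keeps serving contextual ads on kids videos, so it loses
    # only the premium slice of kids revenue: kids * premium / (1 + premium).
    lost = kids * premium / (1 + premium)
    print(f"Loss ${lost / 1e6:.0f}M on ${total / 1e9:.0f}B total "
          f"= {lost / total:.1%} of revenue")
```

Under that assumed premium, both ends of the range work out to a roughly 1% hit to total revenue, consistent with Clinton’s “1% at most.”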

The change may take a larger bite out of the cottage industry of creators who have grown thriving businesses by making videos for kids and posting them to YouTube, taking a cut of the ad money. YouTube said it expects a “significant business impact” for these kinds of channels and is setting aside $100 million to help fund “thoughtful, original” kids content.

“The fund is a big deal,” said Chris Williams, chief executive of kids media company Pocket.Watch. “It clearly shows that YouTube is going to try and soften the blow.”

The Associated Press contributed to this report.
