Inside YouTube’s struggles to shut down video of the New Zealand shooting

Neal Mohan, YouTube’s chief product officer, was at the helm when videos of last week’s mass shooting in New Zealand began to flood the platform.
(Isaac Brekken / Variety/REX/Shutterstock)
Washington Post

As a grisly video recorded by the alleged perpetrator of Friday’s bloody massacres at two New Zealand mosques played out on YouTube and other social media, Neal Mohan, 3,700 miles away in San Bruno, Calif., had the sinking realization that his company was going to be overmatched — again.

Mohan, YouTube’s chief product officer, had assembled his war room — a group of senior executives known internally as “incident commanders” who jump into crises, such as when footage of a suicide or shooting spreads online.

The team worked through the night, trying to identify and remove tens of thousands of videos — many repackaged or recut versions of the original footage that showed the horrific murders. As soon as the group took down one, another would appear, as quickly as one per second in the hours after the shooting, Mohan said.

As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips had been altered in ways that outsmarted the company’s detection systems.

“This was a tragedy that was almost designed for the purpose of going viral,” Mohan said in an interview that offered YouTube’s first detailed account of how the crisis unfolded inside the world’s largest video site. “We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that — especially in the case of more viral videos like this one — there’s more work to be done.”

The uploads came more rapidly and in far greater volume than during previous mass shootings, Mohan said. Video, mainly from victims’ points of view, spread online from the shootings at a concert in Las Vegas in October 2017 and at a Pittsburgh synagogue last October. But neither incident included a livestream recorded by the perpetrator. In New Zealand, the shooter apparently wore a body-mounted camera as he fired into crowds of worshipers.

Each public tragedy that has played out on YouTube has exposed a profound flaw in its design that allows hate and conspiracies to flourish online. YouTube is one of the crown jewels of Google’s stable of massively profitable and popular online services, but for many hours, it could not stop the flood of users who uploaded and re-uploaded the footage showing the mass murder of Muslims. About 24 hours later — after round-the-clock toil — company officials felt the problem was increasingly controlled, but acknowledged that the broader challenges were far from resolved.

“Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume” of videos, Mohan said. “Frankly, I would have liked to get a handle on this earlier.”

The company — which has come under increasing fire for allowing Russians to interfere in the 2016 U.S. presidential election through its site and for being slow to catch inappropriate content — has worked behind the scenes for more than a year to improve its systems for detecting and removing problematic videos. It has hired thousands of human content moderators and has built new software that can direct viewers to more authoritative news sources more quickly during times of crisis. But YouTube’s struggles during and after the New Zealand shooting have brought into sharp relief the limits of the computerized systems and operations that Silicon Valley companies have developed to manage the massive volumes of user-generated content on their sprawling services.

In this case, humans determined to beat the company’s detection tools won the day — to the horror of people watching around the world.

YouTube was not alone in struggling to control the fallout Friday and over the weekend. The rapid online dissemination of videos of the terrorist attack — as well as a manifesto, apparently written by the shooter, that railed against Muslims and immigrants — seemed shrewdly planned to reach as many people online as possible.

The attack at one of the two mosques was livestreamed by the alleged shooter on Facebook, and it was almost instantaneously uploaded to other video sites, most prominently YouTube. The shooter appealed to online communities, particularly supporters of YouTube star PewDiePie, to share the video. (PewDiePie, whose real name is Felix Arvid Ulf Kjellberg, swiftly disavowed him.)

Many of the uploaders made small modifications to the video, such as adding watermarks or logos to the footage or altering the size of the clips, to defeat YouTube’s ability to detect and remove it. Some even turned the people in the footage into animations, as if a video game were playing out. For many hours, video of the attack could be easily found using such basic search terms as “New Zealand.”

Facebook said it removed 1.5 million videos depicting images from the shooting in the first 24 hours after it happened — with 1.2 million of those blocked by software at the moment of upload. Reddit, Twitter and other platforms also scrambled to limit the spread of content related to the attack. YouTube declined to say how many videos it removed.

YouTube has been under fire over the past two years for spreading Russian disinformation, violent extremism, hateful conspiracy theories and inappropriate children’s content. Just in the past month, there have been scandals over pedophiles using YouTube’s comment system to highlight sexualized images of children and, separately, a Florida pediatrician’s discovery that tips on how to commit suicide had been spliced into children’s videos on YouTube and its children-focused app, YouTube Kids.

Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe and that as Silicon Valley companies compete for business, they often portray their systems as more powerful than they actually are. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.

“They’re kind of caught in a bind when something like this happens because they need to explain that their AI is really fallible,” Domingos said. “The AI is really not entirely up to the job.”

Other experts believe that the continuous spread of horrific content cannot be weeded out completely by social media companies when the core feature of their products enables people to post content publicly without prior review. Even if the companies hired tens of thousands more moderators, the decisions these humans make are prone to subjective error — and AI will never be able to make the subtle judgment calls needed in many cases.

Former YouTube engineer Guillaume Chaslot, who left the company in 2013 and now runs the watchdog group AlgoTransparency, said YouTube has not made the systemic fixes necessary to make its platform safe — and probably won’t without more public pressure.

“Unless users stop using YouTube, they have no real incentive to make big changes,” he said. “It’s still whack-a-mole fixes, and the problems come back every time.”

Political pressure is growing. Sen. Mark R. Warner (D-Va.) singled out YouTube in a sharply worded statement Friday. And both Democrats and Republicans have called on social media companies to be more aggressive in policing their platforms to better control the spread of extremist, hateful ideologies and the violence they sometimes provoke.

YouTube executives say they began addressing content problems more aggressively in late 2017 and early 2018. Around that time, Mohan tapped one of his most trusted deputies, Jennifer O’Connor, to help reorganize the company’s approach to trust and safety and to build a playbook for emerging problems. The teams created an “intel desk” and identified incident commanders who could leap into action during crises. The intel desk examines emerging trends not only on YouTube but also on other popular sites, such as Reddit.

The company announced it was hiring up to 10,000 content moderators across all of Google to review problematic videos and other content flagged by users or by AI software.

Executives also shored up YouTube’s software tools, particularly in response to breaking news incidents. They quietly built software, called a “breaking news shelf” and a “top news shelf,” that is triggered when a major news incident occurs and people are going to YouTube to find information, either by searching for it or by coming across it on the homepage. The breaking news shelf uses signals from Google News and other sources to show content from more authoritative sources, such as mainstream media organizations, sometimes bypassing the content that everyday users upload. Engineers also built a “developing news card,” which pops up on top of the main screen to give people information about a crisis even before they search. More recently, the company said it made changes to its recommendation algorithms, the popular content-suggestion software that is the way most users discover new videos.
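The reranking idea behind the breaking news shelf can be sketched roughly as follows. This is an illustrative toy, not YouTube’s actual system: the function name, the `authoritative` flag, and the trending-topic set are all invented for the example.

```python
def apply_news_shelf(query: str, trending_topics: set, results: list) -> list:
    """Hypothetical sketch: when a search query matches a breaking-news
    topic (e.g., signaled by Google News), rank results from authoritative
    outlets ahead of ordinary user uploads."""
    if query.lower() not in trending_topics:
        return results  # no crisis match: leave normal ranking untouched
    # Stable sort: authoritative sources first, original order preserved otherwise.
    return sorted(results, key=lambda r: not r["authoritative"])

results = [
    {"id": "user-upload-1", "authoritative": False},
    {"id": "news-org-1", "authoritative": True},
]
# During a breaking-news event, authoritative coverage leads the results.
shelf = apply_news_shelf("new zealand", {"new zealand"}, results)
```

The key design choice the article describes is that the trigger is external (news signals), so everyday user uploads are bypassed only while an incident is active.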

The breaking news software worked as designed during the school shooting in Parkland, Fla., in February of last year, O’Connor said in an interview. But over the following days, another unexpected development emerged: Survivors of the school shooting began to be harassed online. Some videos alleging that these students were “crisis actors” and not true victims became extremely popular on YouTube. Though the site had banned harassment since mid-2017, YouTube moderators were still learning how to apply its policies, O’Connor said, acknowledging that mistakes were made.

Like the Parkland shooting, the New Zealand shooting presented another set of challenges that stressed the company’s systems, Mohan said.

When the original video was uploaded Thursday evening, Mohan said, the company’s breaking news shelf kicked in, as did the developing news cards, which ran as banners for all YouTube users to see. Basic searches directed viewers to authoritative sources, and the autocomplete feature was not suggesting inappropriate words as it had during other incidents.

Engineers also immediately “hashed” the video, meaning that artificial intelligence software would be able to recognize uploads of carbon copies, along with some permutations, and could delete them automatically. Hashing techniques are widely used to prevent abuses of movie copyrights and to stop the re-uploading of identical videos of child pornography or those featuring terrorist recruitment.

But in this case, the hashing system was no match for the tens of thousands of permutations of video being uploaded about the shooting in real time, Mohan said. While hashing technology can recognize simple variations — such as if a video is sliced in half — it cannot anticipate animations or two- to three-second snippets of content, particularly if the video is altered in some way.
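The limitation Mohan describes shows up even in a toy sketch of exact hash matching. The example below is illustrative only — production matching systems use perceptual fingerprints far more robust than a plain cryptographic digest — but it captures why trivially altered re-uploads slip past an exact-match blocklist.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    # An exact cryptographic hash: byte-identical uploads share a digest.
    return hashlib.sha256(video_bytes).hexdigest()

# Hash the original footage once, then check every new upload against it.
blocked = {fingerprint(b"<original footage bytes>")}

def is_known_copy(upload: bytes) -> bool:
    return fingerprint(upload) in blocked

carbon_copy = b"<original footage bytes>"
# Any edit at all -- a watermark, a recut, a resize -- changes the digest entirely.
altered = b"<original footage bytes>" + b"watermark"
```

The carbon copy is caught, but the altered upload produces a completely different digest, which is why matching systems need fuzzier, perception-based comparisons to catch permutations.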

“Like any piece of machine learning software, our matching technology continues to get better, but frankly, it’s a work in progress,” Mohan said.

Moreover, many news organizations chose not to use the name of the alleged shooter, so people who uploaded videos about the shooting used different keywords and captions to describe their posts, presenting a challenge to the company’s detection systems and its ability to surface safe and trustworthy content. Mohan said he agreed with the editorial decision not to name shooters, but the name of a shooter is one of the most common search terms people use and a big clue for AI software.

The night of the shooting, Mohan worried that the company wasn’t moving quickly enough to address the problems. He made the unusual decision to suspend a core part of the company’s operations process: the use of human moderators.

Under normal circumstances, software flags problematic content and routes it to human moderators. The reviewers then watch the video and make a decision.

But that system wasn’t working well enough during the crisis, so Mohan and other senior executives decided to bypass it in favor of software that could detect the most violent portions of the video. That meant the AI was in the driver’s seat to make a final and immediate call, enabling the company to block content far more quickly.

But the decision came with a huge trade-off, Mohan said: Many videos that were not problematic got swept up in the automatic deletions.

“We made the call to basically err on the side of machine intelligence as opposed to waiting for human review,” he said. Publishers whose videos were erroneously deleted can file an appeal with the company, he added.
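The operational change described above — letting the classifier’s verdict stand without human confirmation — can be sketched roughly as follows. The function, scores, and threshold are invented for illustration.

```python
def triage(video_id: str, violence_score: float, crisis_mode: bool,
           threshold: float = 0.9) -> str:
    """Route a machine-flagged video. In normal operation a human moderator
    confirms removals; in crisis mode the model's call is final, trading
    false positives for removal speed."""
    if violence_score < threshold:
        return "keep"
    return "remove" if crisis_mode else "human_review"

# Normal operation: a high-scoring video waits in the moderation queue.
normal = triage("v1", violence_score=0.95, crisis_mode=False)
# Crisis mode: the same video is removed immediately.
crisis = triage("v1", violence_score=0.95, crisis_mode=True)
```

The trade-off the article reports follows directly: skipping the human step removes violating videos faster, but sweeps up borderline content that a reviewer would have kept.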

By mid-Friday, Mohan still wasn’t satisfied with the results. He made another decision: to disable the company’s tool that allows people to search for “recent uploads.”

As of Monday, both the recent-uploads search and the use of human moderators were still suspended. YouTube said they would remain disabled until the crisis subsided.

The company acknowledges that this is not a final fix.

