The killing of 49 people at two mosques in Christchurch, New Zealand, was engineered to be viewed and shared on the world’s largest technology platforms, taking full advantage of Silicon Valley’s laissez-faire approach to content moderation.
It began with a racist manifesto uploaded to the document-sharing service Scribd and linked on Twitter for anyone to read.
There was helmet-mounted camera footage of the attack, synced to a Facebook Live account, and a link to the stream shared within a hate-filled online community.
There was footage of the massacre, served up in real time to an audience that had found the link.
Facebook deleted the user’s account only after local police alerted the company to the active shooting documented on the stream.
But by then, others had already posted the video to YouTube. That site, owned by Google parent company Alphabet, has scrambled to delete new uploads of the video. Twitter has said it’s doing the same.
Soon, the clips circulated on Reddit, a sprawling online message board service owned by Condé Nast parent company Advance Publications. Reddit removed message boards titled WatchPeopleDie and Gore, which showcased the video along with other clips of people being killed or injured. Those message boards had been operating on the site for seven and nine years, respectively.
Hours after the attack, users were posting links to the original livestream in the YouTube comments below mainstream news organizations’ coverage of the shooting.
On one account, a user who self-identified as a 15-year-old spoke over a black screen, saying that the platform wouldn’t allow him to post the footage directly to the site, but a link to the video was in the description.
The link led to the full 17-minute livestream of the mass shooting. It was hosted on Google Drive.
The unfiltering of the world was long hailed as a utopian goal of the internet, a way to dismantle the gates guarded by the bureaucracies of print and broadcast media. Blogs covering niche news and catering to underserved communities could proliferate. Amateur brilliance that would never be allowed to air on even the smallest cable channels could be seen by millions. Dissidents could share information that would otherwise be censored.
But that vision overlooked the toxic spores that the gatekeepers had kept at bay.
The United Nations has implicated Facebook in fanning the flames of hate against Rohingya Muslims in Myanmar, who were subject to an ethnic cleansing campaign by the country’s military. YouTube has allowed child pornography and exploitation videos to reach millions, and its recommendation algorithms have been singled out as promoting violent white supremacy by suggesting increasingly radical channels to viewers. Twitter is infamous for its coordinated harassment campaigns, often inspired by virulent misogyny and bigotry.
“There are so few incentives for these platforms to act in a way that’s responsible,” said Mary Anne Franks, a law professor at the University of Miami and president of the Cyber Civil Rights Initiative, which advocates for legislation to address online abuse. “We’ve allowed companies like Facebook to escape categorization, saying ‘we’re not a media company or entertainment company,’ and allowed them to escape regulation.”
In response, the tech giants have called it impossible to vet the billions of hours of content that pass through their platforms despite the efforts of employees and contractors hired to sift out the worst posts flagged by users or automatic detection systems.
People who share the footage of the Christchurch shootings “are likely to be committing an offense” since “the video is likely to be objectionable content under New Zealand law,” the New Zealand Department of Internal Affairs said in a statement Friday.
“The content of the video is disturbing and will be harmful for people to see. This is a very real tragedy with real victims and we strongly encourage people to not share or view the video.”
But the tech companies that host the footage are largely shielded from legal liability in the U.S. by a 1996 telecommunications law that absolves them of responsibility for content posted on their platforms. The law has empowered companies, which generate billions in profit each year, by placing the onus of moderation on their users.
“If you have a product that’s potentially dangerous, then it’s your responsibility as an industry to make the appropriate judgement calls before putting it out in the world,” Franks said. “Social media companies have avoided any real confrontation with the fact that their product is toxic and out of control.”
The risks of live broadcasting have been present since the invention of radio, and the media has developed safeguards in response.
It was illegal for radio shows to allow live callers to be broadcast on air until 1952, when an Allentown, Pa., station got around the law by inventing a tape delay system that allowed for some editorial control.
In 1998, a Long Beach man exploited the live broadcast model to get his message out by parking at the interchange of the 110 and 105 freeways and pointing a shotgun at passing cars. Alarmed drivers called the police — soon, L.A.’s car chase choppers were on the scene, reporting live.
He fired shots to keep the police at a distance and unfurled a banner on the pavement: "HMO's are in it for the money!! Live free, love safe or die."
Then, to conclude what the Los Angeles Times called “one of the most graphic and bizarre events ever to unfold on a Los Angeles freeway,” he detonated a Molotov cocktail in the cab of his truck and shot himself in the head on live TV.
In response to public outcry over the grisly footage, which in some cases had interrupted afternoon cartoons, TV networks began instituting tape delays more broadly in live coverage and approaching coverage of visibly disturbed subjects with greater caution.
The tape delay system isn’t perfect. In 2012, a glitch in the system led Fox News to accidentally live broadcast the suicide of a man fleeing from the police. “That didn’t belong on TV,” anchor Shepard Smith said in a shaken apology to viewers. “We took every precaution we knew how to take to keep that from being on TV, and I personally apologize to you that that happened. . . . I’m sorry.”
When Facebook Live launched in 2016, Facebook chief executive Mark Zuckerberg laid out a different mindset behind the feature to reporters at BuzzFeed News.