
Facebook and Google pledged to stop fake news. So why did they promote Las Vegas-shooting hoaxes?


Accuracy matters in the moments after a tragedy. Facts can help catch the suspects, save lives and prevent a panic.

But in the aftermath of the deadly mass shooting in Las Vegas on Sunday, the world’s two biggest gateways for information, Google and Facebook, did nothing to quell criticism that they amplify fake news, steering readers toward hoaxes and misinformation gathering momentum on fringe sites.

For the record, 5:30 p.m. Oct. 3, 2017: An earlier version of this story incorrectly said Google’s algorithms are designed to favor stories that get the most shares and comments.

Under its “top stories” module, Google surfaced conspiracy-laden links from 4chan, home to some of the internet’s most ardent trolls. It also promoted a now-deleted story from Gateway Pundit and served videos of dubious origin on YouTube.


The posts all had something in common: They identified the wrong assailant.

Law enforcement officials have named Stephen Paddock as the lone suspect and have so far pinpointed no motive. But the erroneous articles pointed to a different man, labeling him a left-wing, anti-Trump activist.

Meanwhile, Facebook’s Crisis Response page, a hub for users to stay informed and mobilize during disasters, perpetuated the same rumors by linking to sites such as Alt-Right News and End Time Headlines, according to Fast Company.

“This is the same as yelling fire in a crowded theater,” Gabriel Kahn, a professor at the USC Annenberg School for Communication and Journalism, said of Google’s and Facebook’s response. “This isn’t about free speech.”

The missteps underscore how, despite promises and efforts to rectify the problem of fake news with fact checkers and other tools after the 2016 presidential election, misinformation continues to undermine the credibility of Silicon Valley’s biggest companies.

Google and Facebook tweaked their results Monday to give users links to more reputable sources, acknowledging their algorithms were not prepared for the onslaught of bogus information.

“This should not have appeared for any queries, and we’ll continue to make improvements to prevent this from happening in the future,” a Google spokesperson said about the 4chan link, which surfaced only if users searched for the wrongly identified shooter’s name and not the attack in general.


Facebook did not respond to a request for comment but told Fast Company it regretted the link to Alt-Right News.

“We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused,” the social network said.

Both Google and Facebook, along with Twitter, are under growing pressure to better manage their algorithms as more details emerge about how Russia used their platforms to sow discord and interfere in the presidential election.

The platforms have immense influence on what gets seen and read. More than two-thirds of Americans report getting at least some of their news from social media, according to the Pew Research Center. A separate global study published by Edelman last year found that more people trusted search engines (63%) for news and information than traditional media such as newspapers and television (58%).

Facebook’s algorithms are designed to favor the kinds of stories and posts that get the most shares and comments. Promoting those posts drives up engagement, and with it advertising revenue.
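As a rough illustration of that engagement-first logic, consider the minimal sketch below. The weights, field names and scoring function are hypothetical, invented for this example; nothing here reflects Facebook’s actual code. The point is structural: accuracy never enters the score.

```python
# Hypothetical engagement-weighted feed ranking. Weights and fields
# are invented for illustration; this is not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    comments: int
    likes: int

def engagement_score(post: Post) -> float:
    # Assumed weights: shares and comments count more than likes
    # because they signal deeper interaction. Accuracy is not a
    # factor anywhere in this score.
    return 3.0 * post.shares + 2.0 * post.comments + 1.0 * post.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement posts surface first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Verified report", shares=120, comments=80, likes=500),
    Post("Viral hoax", shares=900, comments=400, likes=300),
])
for post in feed:
    print(f"{engagement_score(post):8.1f}  {post.title}")
# The hoax scores 3800.0 and outranks the report at 1020.0.
```

In a ranking like this, a viral hoax with heavy shares and comments beats a verified report, which is precisely the dynamic critics describe.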

But that strategy also helped inflame the spread of fake news during the campaign season — intensifying calls for the platforms to behave more like media companies by vetting the content they promote.


That would require more human management, something tech companies are loath to do given that they owe their very existence to replacing human activity with software.

Still, Facebook has tried to strike a balance. In March, it rolled out a third-party fact-checking program with PolitiFact, FactCheck.org, Snopes.com, ABC News and the Associated Press. Those partnerships, however, did not stop inaccurate reports from landing on Facebook’s Crisis Response page.

Putting people in charge of content can help tech companies avoid controversy. Snapchat, the disappearing messaging app, maintains strict control over news shared on its platform by employing staffers, including journalists, to curate and fact-check its stories. Granted, Snapchat attracts far fewer users — and far less content — than Facebook or Google.

Facebook has begun boosting its human oversight team. On Monday, the Menlo Park, Calif., social network pledged to hire more than 1,000 employees to vet its advertisements for propaganda.

The changes come amid growing frustration in Washington as lawmakers push Facebook, Google and Twitter to be more forthcoming in the investigation into Russian election meddling.

Facebook on Monday gave congressional committees more than 3,000 ads purchased during the 2016 election campaign by a firm with ties to Russian intelligence. In a blog post, the company said an estimated 10 million people in the U.S. saw the ads. Last week, Twitter briefed Congress on the number of fake accounts run by Russian operatives. And Google said it would conduct an internal investigation on Russian interference. (In a separate move to placate news organizations, the search giant said Monday it will tweak policies to help publishers reach more readers.)


Still, skepticism abounds that companies beholden to shareholders are equipped to protect the public from misinformation and to recognize the threat their platforms pose to democratic societies. Calls are growing to regulate the companies more strictly, in part because, as platforms rather than publishers, they aren’t liable for most of the content they distribute.

“These algorithms were designed with intent and the intent is to reap financial reward,” USC’s Kahn said. “They’re very effective, but there’s also collateral damage as a result of designing platforms that way.

“It’s not good enough to say, ‘Hey, we’re neutral. We’re simply an algorithm and a platform.’ They have a major responsibility that they still have not fully come to terms with.”


david.pierson@latimes.com

Follow me @dhpierson on Twitter

