Editorial: Having Google and Facebook censor content is no way to stop fake news

Facebook CEO Mark Zuckerberg delivers the keynote address at a conference in San Francisco on April 12.

(Eric Risberg / Associated Press)

As if the avalanche of misleading political advertisements weren’t bad enough, Internet users were subjected this year to a surge in supposed news stories about the presidential race that were even less truthful — in fact, much less truthful — than the 30-second spots aired by Donald Trump and Hillary Clinton. And yet millions of readers happily recirculated this “fake news” as if it were independent and credible.

These included items claiming that Pope Francis had endorsed Trump, that emails released by WikiLeaks proved Hillary Clinton sold weapons to Islamic State and that an FBI agent investigating Clinton’s emails was killed in a mysterious murder-suicide.

This sort of thing goes beyond the complaint on the right that the mainstream media favored Clinton by publishing far more stories critical of Trump, or the complaint from the left that Trump received uncritical saturation coverage. The sites in question were circulating stories that they knew weren’t true, at times aided by bots that created huge fake audiences on social media.

The problem is obvious: When surveys by Pew Research Center find that 62% of U.S. adults get at least some of their news from social media, and 20% of social-media users say things they read online have changed their views on an issue or candidate, the electorate is all the more vulnerable to a calculated campaign of manipulation and disinformation. According to BuzzFeed, the 20 most popular fake-news stories in the last three months of the campaign were shared more often on Facebook than the top 20 stories from leading mainstream news sites. Nearly 90% of the fake-news stories were pro-Trump or anti-Clinton, BuzzFeed noted.

Some observers argue that the public’s receptivity to fake news is a sign that we live in a “post-factual” society, with people who are mainly interested in information that comports with their preexisting notions. That’s not a new phenomenon, however; people have been dividing into information tribes with competing echo chambers at least as long as there have been blatantly partisan talk radio stations and cable news channels. Nor is it new when politicians — and particularly Republicans — try to drive people away from established news sources by accusing them of bias.

What’s different now is the extent of the polarization, and how vulnerable the system is to manipulation. With a vast and growing number of information sources, people don’t automatically discredit a website or a publisher just because they’ve never heard of it. And the news pipeline is no longer controlled by a small number of local broadcasters and newspapers; even the smallest publisher has free access to the global distribution services supplied by search engines and social media networks. Although those are welcome developments for free speech and diverse viewpoints, they also are a boon to those whose mission is not just to offer an alternative to the mainstream media, but to peddle flat-out falsehoods.

Here’s another crucial difference: Unlike cable news networks, newspapers or local broadcasters, the most powerful distributors of content today — Facebook and Google search — do not want to be seen as media companies. They style themselves as open platforms, not content gatekeepers.

But that description is not quite accurate; both use technology and, in some cases, human editors to shape what they present to the public. And neither is willing to explain exactly how they do so, other than to say their goal is to deliver the most relevant results possible to individual users. As a consequence, Facebook prioritizes the material that users see in their “news feeds,” elevating some of their friends’ posts over others. Similarly, Google ranks some search results higher than others based on a secret set of signals it has tuned its algorithms to detect online.

Executives at both companies said this week that they are working on the problem of fake news, and already have measures in place to cut off the flow of advertising dollars to fake-news sites. Yet Facebook arguably made the problem worse this year, when it dumped the editors who helped curate its list of “trending” stories in favor of a more algorithm-driven approach. The change was made after former editors said their colleagues had favored mainstream news sources over alternative and conservative sites. But the new approach allowed bogus news items to taint the trending list.

It’s tempting to say that these platform operators should own up to the role they play in informing (or misleading) the public and do more to weed out bad information. But as Facebook CEO Mark Zuckerberg noted, telling the difference between fact, opinion and fiction is a tricky task. Good reporters can get things wrong, after all.

The fact that Facebook and Google are so powerful is reason enough not to want them to become censors. That doesn’t mean they can’t do more to identify and flag content that readers should question. They can also make it harder for publishers to game the system in ways that increase the audience for their content. Ultimately, though, their technologies rely on their users to elevate credible material over fantasies. And as long as people cling to the latter over the former, they’re holding the door open to disinformation campaigns and manipulation.
