Facebook is introducing new policies to stem the flow of fake news, including partnering with third-party fact-checkers to flag fabricated articles, in a tacit admission that humans still play a vital role in policing the world's biggest social network.
The company said Thursday it would work with organizations that have signed the International Fact-Checking Network's code of principles, a program established by Poynter, a media institute in St. Petersburg, Fla. Signatories in the U.S. include Snopes, PolitiFact, the Associated Press, ABC News and the Washington Post Fact Checker.
Facebook said it would also tweak its News Feed ranking algorithm to de-emphasize fake news and make it easier for users to report instances of fake news.
"We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully," Adam Mosseri, Facebook's vice president for News Feed, said in a blog post. "We've focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations."
Facebook has come under intense pressure to tackle its problem with fake news, which spread unchecked on its platform in the run-up to the November presidential election. At one point, the top fake news articles were generating more engagement on Facebook than the top stories from reputable news organizations, according to BuzzFeed.
At first, Facebook Chief Executive Mark Zuckerberg appeared to downplay the problem, dismissing the notion that fake news may have influenced the election as a "pretty crazy idea."
Within a few days, however, Zuckerberg said the company took misinformation very seriously and later booted fake news sites from Facebook's advertising network — though such sites continue to profit using other advertising networks.
Facebook said it would not pay the fact-checkers, whose ranks will consist of five organizations in the U.S. at first before possibly expanding.
The move toward human moderation is something of a reversal for Facebook. Less than four months ago, Facebook replaced the human editors responsible for curating its Trending News feature with an algorithm after critics complained the editors suppressed conservative viewpoints.
By partnering with human fact-checkers, Facebook is signaling it "can't simply reduce things to algorithms and wash their hands," said Gabriel Kahn, a professor of journalism at USC, who believes Thursday's moves were the first concrete steps the tech giant has taken to combat fake news.
"This company cannot expect to reap rewards for being a central marketplace for news without accepting responsibility," Kahn added. "They have a huge responsibility here."
There's no guarantee that most users will accept Facebook's third-party fact-checking given the nation's hyper-partisan climate and the continued attacks on news organizations (and fact-checkers) — not the least of which are coming from President-elect Donald Trump.
"Reports by @CNN that I will be working on The Apprentice during my Presidency, even part time, are ridiculous & untrue - FAKE NEWS!" Trump tweeted Saturday. The day before, top Trump advisor Kellyanne Conway told the news network that the president-elect would continue working on the reality TV show.
The fact-checking organizations will review articles flagged by users who click an arrow on the upper right corner of posts and report them as "fake news" stories.
If the organizations determine a post is fake, Facebook will flag it as "disputed" and include a link to an explanation of the fact-checkers' findings.
"It will still be possible to share these stories, but you will see a warning that the story has been disputed as you share," Mosseri said. "Once a story is flagged, it can't be made into an ad and promoted, either."
It's unclear whether fake posts will be identified before they have a chance to go viral. It's also unclear how newsrooms will allocate their resources for the partnership with Facebook.
The Associated Press said the work would be an extension of its existing nonpartisan fact-checking efforts.
Although it's a good start, the policy doesn't go far enough, said Christopher Ali, an assistant professor in the department of media studies at the University of Virginia.
"If this content is vetted by users and vetted by trusted news sources … after all of this deliberation [Facebook] is just putting a warning up, I wonder if it's going to make a difference," he said.
The new policy is not aimed at articles offering debatable opinions, but stories that are demonstrably false, such as reports that an FBI agent investigating Hillary Clinton's private email server was found dead or that Pope Francis endorsed Donald Trump.
That's an important distinction, given fears on all sides of the political spectrum that Facebook's fact-checking could lead to censorship. Facebook, as a private company, is not obligated to honor 1st Amendment rights to free speech.
But with 1.8 billion monthly users, anything that gains traction on Facebook's platform stands a chance of going viral.
A Pew Research Center poll released Thursday found that about 1 in 4 Americans said they have shared fake political news online. Nearly two-thirds of respondents said they believe fake news has left Americans confused about basic facts.
Though experts say it's impossible to determine if fake news swayed the election, a Pew poll over the summer found that 20% of social media users changed their views on a political or social issue because of something they read on social media.
3:45 p.m.: This article was updated with additional details about how Facebook will flag and identify fake news, which organizations will participate in fact-checking, results of a Pew Research Center poll and comments from journalism experts.