
Facebook bans fake news from its advertising network — but not its News Feed

Fake news isn't disappearing from Facebook anytime soon.

Despite Facebook’s move this week to ban phony news sites from using its advertising network, the company’s attempt to quell criticism that it influenced the outcome of the presidential election will do little to thwart the spread of such articles on its platform. That’s because the strategy mistakes the social network’s role in the false news ecosystem, experts say.

Fake news organizations, like real news organizations, mainly generate revenue by running ads on their own sites. Rather than sell ads themselves, many turn to marketing services, including the largest, Google AdSense, to surround their articles with ads.

But there's no money in the business unless there are enough readers. That's where Facebook comes in. Though the Menlo Park, Calif., tech giant operates its own advertising service, its more vital purpose for fake news sites is its ability to steer traffic to their stories.

Operating under monikers such as the Denver Guardian and American News, these ersatz news organizations have no name recognition and must rely on social media to find an audience. Once Facebook’s algorithm picks up on the rising popularity of their content (such as a fictional post about actor Denzel Washington supporting Donald Trump), it spreads to other users’ news feeds, generating likes, comments and clicks. And with each click comes additional advertising revenue.

Though fake news sites bank on Facebook’s traffic, few rely on Facebook’s advertising network to serve ads — one of the chief reasons why reactions were mixed Tuesday about its attempt to curtail the spread of misinformation. Experts were more optimistic about Google’s move to ban fake news from its advertising platform Monday since it affected the offending sites directly. 

“It’s a step in the right direction. However, Facebook generates traffic and Google monetizes it,” said Filippo Menczer, a professor of computer science and an expert on fake news at Indiana University. “For Facebook to do this with advertising, it’s not clear how that would help. You never really see sponsored posts from fake news sites on Facebook.”

Publishers of false news articles can also use competing advertising services to circumvent bans by Facebook and Google — ensuring ad dollars will keep flowing so long as social media platforms keep steering eyeballs their way.

“That’s why this is not going to have any impact at all,” said Antonio Garcia-Martinez, a former Facebook employee and author of “Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley.” “This is a purely cosmetic move.”

Facebook, and Google to a lesser extent, have faced a backlash for allowing the spread of phony news articles that could have swayed people’s views of the candidates during the presidential campaign season. 

The move to restrict fake news sites from using Facebook’s advertising tools comes days after Facebook Chief Executive Mark Zuckerberg said it was a “crazy idea” to think the social network could have influenced the election. Facebook characterized its shift as a clarification of existing policies.

Pew Research Center findings show social media can have an impact, however. A survey conducted by the group over the summer found that 20% of social media users changed their views on a political or social issue because of something they read on social media.

Fake news sites have reportedly enriched themselves by creating content that has spread virally on Facebook and Google. BuzzFeed, for example, reported on teens in Macedonia responsible for making hundreds of politically charged make-believe articles for American audiences and reaping the ad dollars that ensued.

Google, meanwhile, featured a story at the top of its search results Sunday claiming that Donald Trump won the popular vote. He did not.

As technology companies rather than media companies, the two Silicon Valley giants have long argued they are not responsible for the content their users publish. That viewpoint is protected by Section 230 of the Communications Decency Act, which prevents tech platforms like Facebook from being sued for libel or defamation over content posted by their users. That has led to a hands-off approach that mitigates legal risks.

But it’s a defense that has become more tenuous in the court of public opinion now that the $360-billion company has emerged as the de facto leader in media distribution. Forty-four percent of Americans get their news from Facebook, according to Pew, whereas only 2 in 10 U.S. adults get news from print newspapers.

Some critics now say Facebook needs to accept that it has morphed into a media company and should start acting like one by vetting its content.

“I don’t know if their position is tenable anymore,” said Gautam Hans, a clinical fellow at the University of Michigan Law School and expert on the Communications Decency Act. “They can keep saying they’re this and not that, but everyone knows what they are.”

Hans believes that Facebook has the means to remove more fake stories from news feeds, citing its success in restricting nudity and images of beheadings at the hands of terrorists. News sources could also be ranked or tagged to help consumers judge their validity, much as Google ranks search results, using criteria such as user ratings, spam reports and traffic so that reliable sources appear more prominently.

Of course, Facebook had a similar process for curating its trending news feature with trained editors before abruptly firing them this year after conservatives complained that they omitted right-wing news sites.

Jennifer Stromer-Galley, an information studies professor at Syracuse University, said Facebook could implement something called a nudge, which alerts users with a pop-up that a story has been debunked or discredited. But that too is problematic because it’s unclear whether the majority of users would believe that the story was wrong. That’s especially true now that Facebook communities are commonly made up of like-minded people.  

“At the end of the day, the problem is one of confirmation bias, which is our natural human tendency to look for information that confirms what we believe and ignore information that goes against what we believe,” Stromer-Galley said. “We fall for fake news because something about it confirms our beliefs about the world and because we are in a news-grazing rather than news-reading culture.”

Times staff writer Paresh Dave contributed to this story.

david.pierson@latimes.com

Follow me @dhpierson on Twitter



UPDATES:

5:20 p.m.: This article was updated with additional reporting including comments from Indiana University computer science professor Filippo Menczer, former Facebook employee Antonio Garcia-Martinez, University of Michigan Law School clinical fellow Gautam Hans and Syracuse University information studies professor Jennifer Stromer-Galley.

This article was originally published at 10:40 a.m.

Copyright © 2017, Los Angeles Times