Facebook says Russia-linked posts may have reached 126 million users


Facebook Inc. says a Russian group posted more than 80,000 times on its service during and after the 2016 U.S. presidential election, potentially reaching as many as 126 million users.

The company plans to disclose these numbers to the Senate Judiciary Committee on Tuesday, according to a person familiar with the testimony. The person declined to be named because the committee has not officially released the testimony. Facebook, Twitter Inc. and Google, a division of Alphabet Inc., are scheduled to testify at three hearings Tuesday and Wednesday.

Twitter plans to tell the same committee that it has uncovered and shut down 2,752 accounts linked to the same group, Russia’s Internet Research Agency, which is known for promoting pro-Russian government positions.

That number is nearly 14 times as large as the number of accounts Twitter handed over to congressional committees three weeks ago, according to a person familiar with the matter who was not authorized to speak publicly about the new findings before Tuesday’s hearing.

Colin Stretch, Facebook’s general counsel, plans to tell the Judiciary Committee panel that 120 pages set up by Russia’s Internet Research Agency posted the material from January 2015 to August 2017. Facebook estimates that roughly 29 million people were served content from the agency directly in their news feeds over that period.

Some of those people received the posts because they liked one of the agency’s pages, or because a Facebook friend liked or commented on a post. Others shared the Russia-linked posts, helping them spread widely.

U.S. intelligence agencies said in January that the Internet Research Agency probably was financed by a close ally of Russian President Vladimir Putin “with ties to Russian intelligence.”

Stretch’s prepared testimony makes clear that many of the 126 million people reached this way may never have seen the posts. They may not have logged in while a post was visible, or they may have scrolled past it. The company says agency posts accounted for fewer than 1 of every 23,000 posts on Facebook.

These “organic” posts that appeared in users’ news feeds are distinct from more than 3,000 advertisements linked to the agency that Facebook has already turned over to congressional committees. The ads — many of which focused on divisive social issues — pointed people to the agency’s pages, where they could then like or share its material.

In the testimony, Stretch says that the discovery of Russian interference has “opened a new battleground for our company, our industry and our society,” and that Facebook is determined that it not happen again. “What these actors did goes against everything Facebook stands for,” Stretch says.

The Menlo Park, Calif., company has said it will take steps to fix the problem, announcing last week that it will verify political ad buyers in federal elections and build transparency tools requiring every advertiser to be associated with a page. Twitter has also said it will require election-related ads for candidates to disclose who is paying for them and how they are targeted, and it announced last month that it will ban ads from RT and Sputnik, two state-sponsored Russian news outlets.

Stretch says Facebook was aware of — and reported to law enforcement — threats from actors with ties to Russia before last year’s election. He says that includes activity from a cluster of accounts that the company assessed belonged to a group called “APT28” that has been linked to Russian military intelligence. He says the company “warned the targets who were at highest risk.”

In the hearings this week with the Judiciary Committee panel and the House and Senate intelligence committees, the three companies are expected to face questions about what evidence of Russian interference they found on their services — and, probably, why they didn’t find it earlier. They will almost certainly do what they can to convince lawmakers that they can fix the problem on their own, without the need for regulation.

The companies have been under constant pressure from Congress since it was first revealed this year that Russians had infiltrated some of their platforms. Facebook has spent more than $8.4 million lobbying the government this year, according to federal disclosure forms.

Facebook and Twitter — though not Google — have publicly outlined steps they are taking to give the public more information about who buys and who sees political advertising on their respective sites. The moves are meant to bring the companies more in line with what is required of print and broadcast advertisers.

A bill unveiled this month would require social media companies to keep public files of election ads and require companies to “make reasonable efforts” to make sure that foreign individuals or entities are not buying political ads in order to influence Americans.

The issue goes far beyond ads. Fake news, fake events, propaganda and other misinformation spread far and wide on the platforms in 2016 without the need for paid advertisements. But regulating online speech would be more difficult for U.S. lawmakers.

In addition, analysts and online speech advocates have warned that policing election ads on the internet is not the same as doing so in print newspapers or on TV. Automated advertising platforms enable virtually anyone with an online account and a credit card to place an ad with little or no oversight from the companies.

Facebook has said it is building machine learning tools to address this issue, but didn’t provide details.