The president may be fond of complaining about “fake” news, but the truth is that journalism drives the national conversation, and science has proven it.
A new study published Thursday in the journal Science demonstrates that even small news outlets can have a substantial impact on the issues Americans talk about and when they talk about them. That’s especially true when these news outlets work together.
“Journalists have a job that affects American democracy,” said study leader Gary King, director of Harvard University’s Institute for Quantitative Social Science. “People talk about this a lot, but now we actually have evidence of it.”
King and his co-authors found that if three small- to medium-sized news outlets publish stories on the same topic simultaneously, they can cause the volume of social media posts on that issue to increase by an average of nearly 20% in a single day.
“This is a big impact, especially given the size of the outlets we worked with,” said Ariel White, a political scientist at MIT who worked on the study.
The news industry has lots of ways to monitor how many people are reading an individual article online, what devices they are reading it on, and how they came to find those stories in the first place.
What’s traditionally been harder to measure is whether the articles inspire readers to talk about a topic with friends and family, or take a public stand on a particular issue.
Before the rise of social media, the only way to know if media coverage was moving the needle on national conversations was to eavesdrop on water cooler discussions, read letters to the editor, and in the deeper past, listen to soap box speeches in public squares and read leaflets, the authors said.
But times have changed.
“Today, we can take advantage of the fact that much of the conversation has moved to, and is recorded in, the 750 million social media posts that appear publicly on the web each day,” they wrote.
Still, measuring how a single set of stories can influence what gets discussed online is no easy feat.
Researchers can’t control the news. And if they can’t do that, how can they run an experiment?
To get started, the investigators enlisted Jo Ellen Green Kaiser, executive director of The Media Consortium. Her group is an association of mostly small, independent news outlets and includes publications like Grist, Ms. magazine and The Chicago Reporter.
Green Kaiser, who previously worked as an editor at the progressive Jewish magazine Tikkun, knew that journalists would not sit on a breaking news story, even for the sake of science. But she thought they might be flexible with some of their feature stories that were not tied to a specific event and thus could run at any time.
“Any good editor worth his or her salt has feature stories stowed away for dry periods when you don’t have any news,” she said. “We realized we could randomize the timing of those stories. So the intervention was not about the content. It was about the timing.”
It took three years of negotiation, but the researchers and journalists from 48 news outlets finally agreed on a study design.
They worked together to come up with a list of 11 broad topics — such as immigration, climate change, race relations and education policy — that the news organizations were either covering or were interested in covering.
Next, a set of two to five outlets volunteered to publish stories on the same topic on their websites at the same time.
The journalists were in charge of deciding what story they told, and how they told it. However, if the end result strayed too far from the original topic, the researchers could decide not to include it in the experiment.
If a story simply didn’t come together, the outlets could pull out of that particular experiment.
Next, the researchers identified a pair of consecutive weeks during which they expected the news to be generally slow. One week was randomly selected to serve as the “treated week,” when the stories would be published, and the other week would serve as a control.
That allowed the authors to measure the effect that a cluster of stories had on the Twitter conversation. All they had to do was compare the number of times the agreed-upon topic was mentioned in the week after the stories were published with the number of mentions during the control week. (For the record, that was a lot harder than it sounds.)
The investigators ran the experiment 35 times between October 2013 and March 2016.
The effect was significant: social media posts about a given topic jumped 19.4%, on average, the day after stories on that topic were published. That translated into an average of 13,166 additional posts about a topic after it was covered in the press.
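Those two figures are consistent with each other: a 19.4% jump that amounts to 13,166 extra posts implies a baseline of roughly 68,000 topic mentions in the comparison period. A quick back-of-the-envelope check (the baseline is our inference, not a number reported in the study):

```python
def percent_change(control, treated):
    """Percentage increase of the treated count over the control count."""
    return (treated - control) / control * 100

# The study reports a 19.4% average jump equal to about 13,166 extra posts.
# Working backward, the implied baseline is 13,166 / 0.194 (our inference).
baseline = 13166 / 0.194
print(round(baseline))  # roughly 67,866 posts in the comparison period

# Sanity check: adding 13,166 posts to that baseline is a 19.4% increase.
print(round(percent_change(baseline, baseline + 13166), 1))  # 19.4
```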
The size of the effect varied widely, and the authors said it would likely scale up with the size of the news outlets that published a story.
To see if this was true, they measured the Twitter impact of a New York Times story about fracking and water quality that few other outlets covered. The day the story ran, tweets about water quality and related topics spiked by more than 300%.
The effect that news stories had on the social media conversation was the same regardless of a tweeter’s gender, political party, place of residence or number of Twitter followers.
While the average effect of a news “intervention” was larger than the researchers had expected, it was still quite small compared to the influence of “huge entertainment events,” such as the airing of a new episode of the TV show “Scandal,” the study authors wrote.
Still, King, White and their co-author Benjamin Schneer of Florida State University conclude that journalists do indeed influence what the country talks about.