How much of a boost did President Obama’s 2012 campaign really get from its much-vaunted mastery of data analysis? A significant amount, but not as much as some people think, according to a new analysis by two leading political scientists.
The Obama campaign’s data efforts have changed the nature of presidential campaigns, much as the “Moneyball” revolution taught some baseball teams that careful analysis of statistics could lead to a wiser use of resources than just going with the gut instincts of longtime scouts. The campaign’s high-tech efforts also have captured the imaginations of people who normally have minimal interest in the inner workings of field operations.
But as with most successful efforts, the myth has outrun the reality.
“Moneyball did not win the election. Even Obama’s data crunchers will freely tell you that,” Lynn Vavreck of UCLA and John Sides of George Washington University wrote in an article for the online magazine Pacific Standard. The article extends the analysis in their book on the 2012 campaign, “The Gamble.”
The Obama campaign innovated in several areas involving data. For the same amount of money, for example, the Democrats could buy more ads than Mitt Romney’s campaign could. In part, that was because they studied data on which television programs were watched most by the voters they were targeting. That allowed them to focus some of their ad dollars on cheaper cable channels.
The campaign experimented with how best to target messages, particularly in fundraising, where it discovered that “asking donors for odd amounts like $59 or $137 was more effective than asking them to give in amounts rounded to the nearest $5 or $10,” Vavreck and Sides report.
Perhaps most important, an experiment in February 2012 involving 300,000 phone calls by volunteers allowed the campaign to build a model to estimate how “persuadable” different types of voters might be. That model helped guide subsequent voter contacts.
But some widely hyped aspects of data analysis actually contributed little or nothing, Sides and Vavreck report. For example, despite reports to the contrary, the campaign made very little use of consumer data, such as the favorite sports of potential voters, what beers they prefer or what cars they drive. In the end, a person’s race, income, gender and voting history tell campaigns pretty much everything they need to know, data analysts from both parties said.
And micro-targeted messages aimed at specific voting groups turn out to be not all that useful in the context of a presidential campaign. When they compared targeted messages with broader ones, “Usually the winning email was universal,” the two political scientists quote Obama’s director of digital analytics, Amelia Showalter, as saying.
In all, the campaign’s efforts probably contributed “two points … at most” to Obama’s eventual four-point victory margin, Elan Kriegel, who directed Obama’s analytics efforts in battleground states, told the two. Other senior campaign officials said they wouldn’t hazard a guess on how much difference the work made in the final margin.
In a close enough election, of course, even one point could make a difference – just ask former Vice President Al Gore. And there’s little doubt that much of what the Obama campaign pioneered in 2012 will be adopted and adapted by future presidential campaigns. Republicans have vowed for a year to learn from what the Obama operation accomplished.
But Republicans who link their loss to being out-gunned in campaign technology probably should look elsewhere, the evidence suggests. As Sides and Vavreck’s work shows, even the most sophisticated campaign efforts are no silver bullet.