As election returns rolled in Tuesday night, the creator of the USC/Los Angeles Times Daybreak tracking poll was in Washington for a speaking engagement. He watched the results on television in a hotel bar, surrounded by about 20 drunken Danes who were in the capital to study the election.
“It was an odd experience,” Arie Kapteyn said Wednesday morning.
The same might be said of the furor that surrounded the Daybreak poll during the campaign. It was the only major public survey that consistently showed Donald Trump winning. As a result, it drew frequent and loud denunciations from many Democrats, especially as election day neared and passions rose.
But on Wednesday, as many other pollsters struggled to explain why their surveys seemed blind to Trump’s support, Kapteyn and his colleagues were among the few who could say their work got the basic issue right.
“To be honest, I was surprised,” said Kapteyn, a USC economist and expert on public opinion.
In an interview several days ago with a radio station in Holland, where he grew up and earned his doctorate, Kapteyn recalled predicting that “Clinton is going to win, but I think it’s going to be a lot tighter than people think.”
That prediction, he said, highlighted the problem with most efforts at political analysis.
“When you look at pundits and their predictions, the correlation is zero” between what they forecast and how things actually turn out, he said, citing work done by Daniel Kahneman, the Nobel Prize-winning behavioral economist.
“You have to trust the numbers,” he said. “Don’t get distracted by all the things you think about plausibility.”
“What you think personally doesn’t matter,” he added. “I thought Clinton would win. But that shouldn’t change the numbers.”
The tracking poll was not perfect, of course: it projected Trump to win the popular vote by slightly more than 3 percentage points, but in reality Hillary Clinton appears set to win a slender popular-vote plurality, currently about 0.2%. Her margin could expand as late ballots are counted in heavily Democratic California.
That result, however, was well within the poll’s margin of error. The more crucial point was that the poll correctly detected Trump’s appeal to a key bloc of voters: conservative whites who had sat out the 2012 election but intended to vote this year. That group strongly favored Trump, the poll found.
The poll’s ability to pick up those voters, Kapteyn said, stemmed from its approach, which differs notably from the one used by most major surveys.
Instead of asking people to simply choose between the candidates, the Daybreak survey asked respondents to rate, on a scale from 0 to 100, their chance of voting for Trump, Clinton or some other candidate. The poll also asked people to use the same 0-100 scale to rate their likelihood of voting.
That method, which Kapteyn had used four years ago to accurately forecast President Obama’s reelection, “is the most important part” of what the poll demonstrated, he said.
By asking people to give a probability, the poll avoided forcing voters into making a decision before they were truly ready. As a result, it may have more accurately captured the ambiguity many people felt about their choice.
Moreover, by asking participants to rate their chance of voting, the poll could take advantage of information from everyone in its sample group, rather than cast aside those who do not meet a test for being a “likely voter,” as most traditional surveys do.
“One of the ways in which other polls may have gone wrong is that they have a hard time defining who is going to vote,” he said. Polling firms “should look at their likely voter model” and think about whether they are excluding too many potential voters, he said.
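The difference between the two approaches can be sketched in a few lines of code. This is an illustrative toy, not the Daybreak poll's actual model or data: the respondent numbers below are hypothetical, and real polls would also apply demographic weights. It contrasts weighting every respondent's candidate probabilities by their stated chance of voting against a traditional hard "likely voter" cutoff that drops low-propensity respondents entirely.

```python
# Hypothetical respondents: (chance of voting, chance of backing Trump,
# chance of backing Clinton), each on a 0-100 scale.
respondents = [
    (90, 80, 10),
    (40, 70, 20),   # a low-propensity voter a likely-voter screen might drop
    (95, 15, 80),
    (60, 50, 40),
]

def probability_weighted_shares(data):
    """Daybreak-style approach (sketched): weight each respondent's
    candidate probabilities by their stated turnout probability, so
    no one is discarded outright."""
    total_turnout = sum(v for v, _, _ in data)
    trump = sum(v * t for v, t, _ in data) / total_turnout
    clinton = sum(v * c for v, _, c in data) / total_turnout
    return trump, clinton

def screened_shares(data, cutoff=75):
    """Traditional approach (sketched): keep only respondents above a
    turnout cutoff, then average the rest equally."""
    kept = [(t, c) for v, t, c in data if v >= cutoff]
    n = len(kept)
    trump = sum(t for t, _ in kept) / n
    clinton = sum(c for _, c in kept) / n
    return trump, clinton

print(probability_weighted_shares(respondents))  # weighted: keeps everyone
print(screened_shares(respondents))              # screened: drops 2 of 4
```

In this toy sample, the probability-weighted method registers support from the low-propensity Trump-leaning respondent whom the cutoff screen excludes, which is the kind of voter Kapteyn says other polls missed.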
The polling profession plans to review why so many of this year’s general election polls were inaccurate. The American Assn. for Public Opinion Research, the professional group for pollsters, has set up a committee to study the polls and report back by May.
Unlike many pollsters, Kapteyn, 70, is neither a political scientist nor a political activist. His work on predicting election results is an outgrowth of his main research interest on how people make decisions on their finances.
The fact that he could approach the poll as a science project provided some mental insulation from the criticism the poll drew during the last few months.
“There was some flak,” he said. Among other researchers, however, the response all along had been more appreciative, he said.
“People would say, ‘You may be wrong. We think you’re wrong, but we understand what you’re doing,’” he said.
“In science, you don’t have to be right to do something useful,” he said, noting that even if the test of a new method fails, researchers can learn from it.
This time, as in 2012, the test succeeded. The outcome, however, was not the one he wanted.
“I’m very unhappy” about Trump’s victory, he said. “But that’s the way it is.”
This article was originally published at 1:40 p.m.