Which pollsters called California’s top races correctly, and which missed the mark?
In the final 10 days before the election, 14 polls of the races for governor and U.S. Senate in California were released by 10 nonpartisan polling organizations.
Those pre-election polls divided into two noticeably different camps.
One group, which included the L.A. Times/USC poll and the Field Poll, projected hefty wins by Democratic candidates Jerry Brown and Barbara Boxer. The other, which included polls by the Rasmussen organization and Public Policy Polling, showed both Democrats likely to win, but by much smaller margins. Some showed the Senate race in particular getting closer.
Republican candidates and strategists were, of course, eager to draw attention to the surveys in the latter group. The Republican candidate for governor, Meg Whitman, directly attacked the Times/USC poll in several speeches, saying incorrectly that Times polls always favored candidates the paper had endorsed.
In the end, Brown won by 12 points and Boxer by nine. The poll that came closest to nailing the results: The L.A. Times/USC survey, which had projected a 13-point margin for Brown and an eight-point margin for Boxer. Field, which had projected margins of 10 points for Brown and eight for Boxer, came in a close second.
The worst record? The Rasmussen surveys, which were conducted for Fox News and Rasmussen’s own survey website. Those polls projected a Boxer margin of three points and a Brown win by four.
Several differences could account for the gap among the polls. The polls that did best were those that used the traditional method of live interviewers conducting telephone interviews. The Field Poll and the Times/USC poll called both landlines and cellphones.
Some of the polls that were in the less-successful group are so-called robo-polls that use automated interviews. Those polls have been reasonably successful in the past, but some polling analysts this year said they thought the robo-polls were producing results that were too weighted toward Republican candidates.
Another point of difference involves the models that pollsters use to determine which voters are likely to actually cast ballots. The Times/USC survey based its likely voter model on questions about respondents’ enthusiasm about voting this year, their expressed certainty about voting and their voting history. Some Republican analysts said that the emphasis on past voting history was screening out Republicans who had not voted in 2006 and 2008 but who would show up this year. In the end, those hypothetical voters turned out to be something of a mirage. Exit polls this year showed an electorate that was quite similar to the group that voted in the 2006 midterm elections.
-- David Lauter