
The art and science of taking polls

Barabak is a Times staff writer.

Every day dozens of polls on the presidential race are published, reporting voter sentiment nationally and in key states. Depending on the numbers, either Barack Obama is headed for an electoral vote landslide Tuesday, or John McCain has a shot at yet another come-from-behind victory.

Obviously, both can’t happen, which suggests that at least some polls are askew. Take, for instance, Nevada, a state that Obama hopes to win as part of a Democratic incursion into the conservative-leaning Rocky Mountain West. One poll this week put the Illinois senator’s lead there at 12 percentage points. Another gave Obama a 10-point advantage and still another a 7-point lead. One said Obama’s lead was 5 points and two others said 4, meaning Sen. McCain of Arizona could actually be ahead slightly, given the margin of sampling error.

Why such a big difference in polls conducted in the same state over roughly the same period of time?


There are several reasons, some having to do with the inherent nature of polling, others with factors unique to this highly unusual presidential campaign, which has given fits to even the most experienced pollsters.

Opinion surveys are based on statistical probabilities. The idea is that by interviewing a representative sample of voters, pollsters will achieve the same result as if they had interviewed every voter in a given area.

Though some are skeptical of that fundamental premise, the polling pioneer George Gallup had a ready retort: “An accurate blood test requires only a few drops of blood.” In other words, a pollster can attain a reasonably accurate gauge of how 100 million or more Americans will behave on election day by conducting a scientific sampling of about 1,200 voters.
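For readers who want to see Gallup’s point in action, here is a minimal simulation in Python. The 52% support figure and the 1,200-interview sample are illustrative assumptions, not results from any actual poll.

    # A minimal simulation of Gallup's "blood test" point. All figures
    # here are illustrative assumptions, not from any actual poll.
    import random

    TRUE_SUPPORT = 0.52   # assumed share of the electorate backing Candidate A
    SAMPLE_SIZE = 1200    # the typical national sample size cited above

    # Each simulated interview is an independent draw from the electorate.
    sample = [random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE)]
    estimate = sum(sample) / SAMPLE_SIZE

    print(f"True support: {TRUE_SUPPORT:.1%}")
    print(f"Poll estimate from {SAMPLE_SIZE:,} interviews: {estimate:.1%}")
    # Run it a few times: the estimate almost always lands within about
    # three points of the true figure.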

But there are any number of reasons that polls come up with varying results. Sometimes questions are worded differently, or posed in a different order. There are also different ways of choosing whom to sample. Some polls, such as the Los Angeles Times Poll, contact individuals at random; others work from lists of registered voters.

The age, gender or ethnicity of the person asking the questions can affect the response. For that reason, some pollsters employ interactive technology, using a recorded voice or the Web. Others, however, frown on the practice because there is no way to know whether the respondent is a voter or the voter’s 6-year-old child. Any and all of those factors can cause results to differ.

So how do pollsters know they are interviewing a representative sample of voters?

That’s where art and science come together. A pollster will attempt to determine who among those interviewed are the most likely to vote in the election. This year it’s especially tough to define a “likely voter,” given Obama’s particular appeal to black voters and young people, two groups that typically fail to vote in numbers commensurate with their share of the population. Moreover, minorities and young people are especially hard to reach, given their mobility, their tendency to work odd hours and their preference for cellphones.


Different pollsters have different ways of determining whom they consider a “likely voter.” That accounts for the biggest variation among samples. For instance, one recent national survey that showed the race neck-and-neck included a large number of evangelical Christians -- too many, in the judgment of some pollsters -- which improved McCain’s performance and narrowed the gap with Obama. On the other hand, McCain’s camp says many polls are overstating the projected turnout of black and younger voters, to the detriment of the GOP nominee.
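One way to see how likely-voter modeling moves the numbers is to sketch the weighting step in code. In the Python sketch below, the age groups, sample counts and turnout targets are invented for illustration; real likely-voter models are far more elaborate.

    # A rough sketch of demographic weighting: each respondent group is
    # weighted so its share of the sample matches an assumed turnout model.
    # The group names, counts and targets are invented for illustration.
    raw_sample = {           # respondents per group in the raw sample
        "age 18-29": 150,
        "age 30-64": 750,
        "age 65+": 300,
    }
    turnout_targets = {      # assumed share of the actual electorate
        "age 18-29": 0.18,
        "age 30-64": 0.57,
        "age 65+": 0.25,
    }

    total = sum(raw_sample.values())
    weights = {
        group: turnout_targets[group] / (count / total)
        for group, count in raw_sample.items()
    }
    for group, weight in weights.items():
        print(f"{group}: weight {weight:.2f} per respondent")
    # A pollster projecting a bigger youth turnout would raise the 18-29
    # target, boosting that group's weight -- one source of the
    # poll-to-poll variation described above.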

What else is important in assessing polls?

Timing is crucial. Polls taken before or after a significant event can vary considerably. A survey on voters’ concerns about terrorism would have undoubtedly yielded very different results depending on whether it was taken in the days leading up to or just after Sept. 11, 2001.

Although no event of that magnitude has occurred this year, there have been several developments -- such as the selection of the candidates’ running mates, the two major-party conventions, the presidential debates and the crisis on Wall Street -- that affected public opinion, especially in the short term. When looking at polls, it’s important to compare surveys conducted over roughly the same time frame.

What do pollsters mean when they talk about “a margin of error”?

Because they are not talking to every single voter, pollsters recognize there is a certain squishiness in their numbers. This “sampling error” is measurable, based on a standard statistical calculation. (Rule of thumb: The bigger the sample size, the smaller the margin of error.)
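That rule of thumb follows from the standard formula for the margin of error of a sample proportion. The short Python sketch below assumes a simple random sample at the usual 95% confidence level; real polls adjust these figures for survey design.

    # 95% margin of error for a sample proportion, assuming a simple
    # random sample. p = 0.5 gives the widest (most conservative) margin.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of the 95% confidence interval, as a proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (400, 1200, 3000):
        print(f"n = {n}: +/- {margin_of_error(n):.1%}")
    # n = 400:  +/- 4.9%
    # n = 1200: +/- 2.8%
    # n = 3000: +/- 1.8%  (bigger sample, smaller margin of error)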

Jill Darling, associate director of the Times Poll, explains: “If Smith has 52% of the vote and we have a margin of error of plus or minus 3 percentage points, that means that if everyone voted right now, Smith would get between 49% and 55% of the vote. And if my survey finds that Jones has 48% of the vote, then his actual vote would be somewhere between 45% and 51%.” So for a poll to show Smith clearly ahead, his lead would have to be twice the margin of error, or more than 6 percentage points. That is why a poll showing Smith at 52% and Jones at 48% means the race is about even.
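Darling’s arithmetic reduces to a simple test, shown below in Python with the hypothetical Smith-Jones numbers: in a two-candidate race, a lead is meaningful only if it exceeds twice the poll’s margin of error.

    # The article's rule of thumb: a lead matters only if it is more
    # than twice the margin of error. Shares and margin are in points.
    def lead_is_significant(smith, jones, margin):
        return abs(smith - jones) > 2 * margin

    print(lead_is_significant(52, 48, 3))  # False: 4-point lead, 3-point margin
    print(lead_is_significant(55, 45, 3))  # True: a 10-point lead clears the bar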

It is also worth remembering that the margin of error is usually stated at a 95% confidence level, which means there is at least a 1-in-20 chance that any given poll number will be an outlier -- the vote for Smith or Jones could be much greater or smaller than reported.


Any other suggestions for sorting through this barrage of polls?

Neil Newhouse is one of the country’s leading Republican pollsters. He also conducts, in tandem with Democrat Peter Hart, the NBC-Wall Street Journal poll, one of the most respected around. When it comes to assessing polls, Newhouse said, “it’s like watching gymnastics. Throw out the high score, throw out the low score and average the rest.”
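Newhouse’s advice amounts to what statisticians call a trimmed mean. The Python sketch below applies it to the six Nevada leads cited at the top of this article.

    # Newhouse's "gymnastics judging" rule: drop the highest and lowest
    # results, then average what remains.
    def trimmed_average(results):
        if len(results) < 3:
            raise ValueError("need at least three polls to trim both ends")
        trimmed = sorted(results)[1:-1]  # discard one high and one low score
        return sum(trimmed) / len(trimmed)

    # Obama's Nevada leads, in points, from the polls cited above.
    nevada_leads = [12, 10, 7, 5, 4, 4]
    print(f"Trimmed average lead: {trimmed_average(nevada_leads):.1f} points")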

(Several websites offer a running compilation of surveys, including Pollster.com, RealClearPolitics.com and 538.com.)

Perhaps the most important thing to remember is that even the best poll can do no more than capture sentiments at a given moment in a campaign. No survey can predict the future.

--

mark.barabak@latimes.com
