Pollsters Can’t Just Phone It In

Philip J. Trounstine is director of the Survey and Policy Research Institute at San Jose State University.

If you’ve been following the presidential campaign, you’re aware that the polls have been fluctuating throughout the year — with President Bush and Sen. John F. Kerry flipping and flopping from leader to loser, depending on which poll you read and when.

Often, the surveys contradict one another, or they swing erratically overnight, for no apparent reason. Sometimes ABC has Bush ahead while Gallup says Kerry is in the lead. How are we supposed to make sense of this numerical Tower of Babel?

To begin with, there are certain caveats to keep in mind when reading national public opinion surveys.

Here’s one: In the end, this election won’t be decided by the popular vote, but by electoral college votes. That means that the real key to the election is not the nationwide vote but how the candidates do in each of about eight key swing states. As a result, what you read in the national head-to-head polls — which measure only the popular vote — may, in the end, have little or nothing to do with who wins and who loses.

Here’s another caveat: Even in a perfectly constructed poll of 1,000 respondents, probability theory tells us that the results can be off by about 3 percentage points in either direction for each of the two candidates, 19 times out of 20. That’s what’s known as the margin of error.
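
For readers who want to see where that plus-or-minus 3 comes from, here is a minimal sketch in Python. It assumes the standard 95% confidence level and the worst-case 50-50 split; neither number comes from any particular poll.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Half-width of a confidence interval for a polled percentage.

    proportion=0.5 is the worst case (widest interval); z=1.96 is the
    standard multiplier for a 95% confidence level.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A perfectly constructed poll of 1,000 respondents:
print(f"{margin_of_error(1000):.1%}")  # 3.1%: the familiar "plus or minus 3 points"
```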

So when Gallup reports that the race is at 49% for Kerry and 48% for Bush, the results could actually be 46% Kerry to 51% Bush, or they could be 52% Kerry to 45% Bush. Or somewhere between. And right now, virtually all the polls are within the margin of error.

Moreover, polling relies on the concept of random selection. If the sample is poorly drawn or if respondents aren’t chosen randomly, the survey can end up with too many Democrats or too few Latinos or too many women, skewing results.
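
A toy simulation shows how quickly a badly drawn sample goes wrong. The population split and the nature of the skew below are invented purely for illustration, not taken from any real survey.

```python
import random

random.seed(7)

# Invented population: 52,000 supporters of candidate A, 48,000 of candidate B.
population = ["A"] * 52_000 + ["B"] * 48_000

# A properly random sample of 1,000 respondents...
random_sample = random.sample(population, 1_000)

# ...versus a sample drawn from a pool where one side's supporters are twice
# as easy to reach (standing in for too many Democrats or too few Latinos).
skewed_pool = ["A"] * 104_000 + ["B"] * 48_000
skewed_sample = random.sample(skewed_pool, 1_000)

for label, sample in (("random", random_sample), ("skewed", skewed_sample)):
    print(f"{label} sample: {sample.count('A') / len(sample):.0%} support for A")
# The random sample lands near the true 52%; the skewed one lands near 68%.
```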

But changes in communications technology make it increasingly difficult to construct a truly representative sample. Until recently, for instance, it was safe to say that virtually every home in America had a telephone, so polls conducted by phone were seen as a reliable way to give every household an equal chance of being surveyed. But now there’s a category of people, mostly younger people and renters, who have cellphones but no land lines.

Federal law makes it illegal to survey cellphones without the owner’s consent when owners pay for the calls they receive. Pollsters also don’t want to endanger people who may answer while driving. And with number portability, a cellphone’s area code no longer tells you where its owner actually lives.

As a result, an entire category of people (we can call them CPOs, for “cellphone only”) is missing from surveys. Does it matter? Probably not this year, because CPOs constitute only an estimated 4% to 7% of the population right now. But if there were a huge, unexpected surge of voting by CPOs in, say, swing states like Ohio with big college student populations, they could have an effect not reflected in the polls. Voilà: Truman beats Dewey.

Another issue when you’re comparing surveys is how the pollster determines which respondents are “likely voters.” The process starts with the sample itself: most nationwide pollsters believe they get the most representative one by using what is known as “random digit dialing,” in which a computer constructs possible phone numbers from a proper distribution of area codes and exchanges, allowing pollsters to call both listed and unlisted numbers.
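
Here is roughly what random digit dialing looks like as a sketch. The area codes, exchanges and weights are placeholders; a real sampling frame would use the actual distribution of working telephone blocks.

```python
import random

random.seed(42)

# Placeholder frame: (area code, exchange) pairs with relative weights,
# standing in for the real distribution of working phone blocks.
FRAME = [("408", "555", 5), ("415", "555", 3), ("213", "555", 2)]

def random_digit_dial():
    """Pick an area code and exchange in proportion to weight, then append
    four random digits, reaching listed and unlisted numbers alike."""
    pairs = [(area, exch) for area, exch, _ in FRAME]
    weights = [w for _, _, w in FRAME]
    area, exch = random.choices(pairs, weights=weights, k=1)[0]
    return f"({area}) {exch}-{random.randrange(10_000):04d}"

for _ in range(3):
    print(random_digit_dial())
```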

But when it gets close to an election, pollsters want to measure the views of only those who are likely to vote. So they have to ask respondents if they are registered, along with a series of other questions to help determine whether they will really turn out on election day. Every polling organization has a different set of questions — another reason why results are often different from poll to poll.
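
No two organizations screen the same way, but the mechanics are roughly those of the sketch below. The questions, point values and cutoff are hypothetical; polling firms keep their actual screens to themselves.

```python
# Hypothetical likely-voter screen: each answer earns points, and only
# respondents at or above a cutoff count as "likely voters."
SCREEN = {
    "registered": 3,           # says they are registered to vote
    "voted_last_election": 2,  # reports voting in the last election
    "knows_polling_place": 1,  # can name where they vote
    "very_interested": 2,      # calls themselves very interested in the race
}
CUTOFF = 5

def is_likely_voter(answers):
    """answers maps each screen question to True or False."""
    score = sum(points for question, points in SCREEN.items() if answers.get(question))
    return score >= CUTOFF

print(is_likely_voter({"registered": True, "voted_last_election": True}))  # True
print(is_likely_voter({"registered": True, "knows_polling_place": True}))  # False (3 + 1 = 4)
```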

Then there’s the issue of weighting. Once results are gathered, pollsters have to compare them with known demographics to make sure they have the right proportion of respondents by gender, race, age, education and geography. It gets trickier if you try to adjust for political party identification.

It’s relatively simple to adjust the data to match demographic information from the U.S. Census Bureau and other reliable sources. If your survey has too few Latinos, say, computer programs let you mathematically increase the relative “weight” of answers from Latino respondents.
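
In code, the adjustment is just a ratio of population share to sample share. Every number below is invented for illustration.

```python
# Weight each group by (population share) / (sample share). All figures invented.
census_share = {"latino": 0.14, "other": 0.86}   # the "known demographics"
sample_share = {"latino": 0.08, "other": 0.92}   # what the phone calls yielded

weights = {g: census_share[g] / sample_share[g] for g in census_share}
# Each Latino respondent now counts 1.75 times; everyone else, about 0.93 times.

# Suppose a candidate polls 60% among Latino respondents and 48% among the rest:
support = {"latino": 0.60, "other": 0.48}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(census_share[g] * support[g] for g in support)  # weighted shares match the census
print(f"unweighted: {unweighted:.1%}")  # 49.0%
print(f"weighted:   {weighted:.1%}")    # 49.7%
```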

But what if your phone calls yield too many Democrats and not enough Republicans? Should you “weight up” Republicans? And if so, to what balance? Should it be the same as previous elections? Which ones? How do you know, except from exit polls, how many people of each party voted?

There’s also the matter of nonresponse, the growing difficulty of getting people to pick up the phone and take part at all. Though studies so far show that nonresponders aren’t much different from responders, the issue still worries pollsters.

Does all this mean you can’t trust polling? Not at all. Most national pollsters from the media, nonpartisan organizations, think tanks and the like do a superb job of playing by the rules and minimizing bias and error. But in a race as close as this one, it would be foolish to bet the house. And let’s hope the polling industry figures out a way to deal with those cellphone-only folks before the next election.

