
As Vote Nears, Numbers Are Polls Apart: Politics: Contradictory results raise questions about reliability. Experts advise readers and viewers to study surveys’ track records.

TIMES STAFF WRITER

In the final days of the 1994 campaign, an array of contradictory polls--with results as much as 15 points apart and different candidates ahead--reveals some alarming truths about the reliability of such surveys and offers a sharp warning about media coverage of politics, pollsters conceded Friday.

The number of polls, many sponsored by news organizations, has mushroomed in recent years. And even in the most competent hands, polls are not as reliable as media portrayals would suggest--especially political “horse race” polls that measure only which candidate is ahead.

What’s more, many Americans fail to grasp such basics of polling as the “margin of error,” a statistical measure under which a published 10-point lead can be statistically indistinguishable from a dead heat.
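To see why, consider a rough back-of-the-envelope sketch, assuming a hypothetical sample of 400 respondents and the standard 95% confidence level (real polls vary):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a single proportion p from n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

n = 400                            # hypothetical sample size
p_leader, p_trailer = 0.55, 0.45   # a 10-point lead, 55% to 45%

# Margin of error on each candidate's individual share: about +/- 5 points.
print(round(100 * margin_of_error(0.5, n), 1))      # 4.9

# But the margin of error on the *lead* is roughly double, because the
# two shares come from the same sample and move in opposite directions.
# Variance of the difference (p - q) for a multinomial sample:
var_lead = (p_leader + p_trailer - (p_leader - p_trailer) ** 2) / n
print(round(100 * 1.96 * math.sqrt(var_lead), 1))   # 9.8
```

In other words, a 10-point lead in a 400-person survey sits almost exactly at the edge of its own uncertainty: statistically, the race could be a tie.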


All this makes readers and viewers vulnerable to a confusing buffet of numbers. Consider that:

* In Pennsylvania’s closely watched Senate race, Democratic incumbent Harris Wofford leads Republican challenger Rick Santorum by five points in one poll. But he trails by 10 in another.

* In Michigan’s U.S. Senate race, Republican Spencer Abraham is ahead of Democrat Bob Carr by six to 10 points in two polls but behind by one point in a third.

* In California, Democratic Sen. Dianne Feinstein leads Rep. Mike Huffington by 10 points in a KCAL-TV poll and trails by two in a KCBS-TV poll.

* Nationwide, an ABC/Washington Post “generic party” poll shows that the country is tilting toward the Democrats on Tuesday, with 48% of Americans saying they intend to vote for the Democratic Party in their local congressional campaigns while 45% favor Republicans.

But a CBS/New York Times poll finds that the country is tilting Republican, 49% to 45%.

What is a reader or viewer to think?

One guide, pollsters said, is to check whether a poll was conducted by the campaign itself. In recent years, the number of polls done by campaigns has exploded, and many are not reliable, according to veteran California pollster Mervin Field.


These polls tend to be conducted quickly and cheaply and generally are not designed to find precise horse race numbers as much as general trends, said Washington Post pollster Richard Morin. And frankly, Morin added, campaigns often lie about the results.

“A lot of these numbers are released for our benefit,” he said, to try to impress journalists and alter press coverage. “And these campaigns have two sets of books.”

Another factor is the rising number of polls in general, particularly those taken by media outlets.

“Technology has made bad polling easier,” said CBS pollster Kathy Frankovic. “This business used to require statistical expertise and expensive computers. Today you can buy a voter sample, a computer program and run it on a PC and get yourself a package that allows 10 people with phones to do the poll.”

One suggestion for separating good polls from bad is to rely on those by familiar sources who have been polling for years.

In California, for instance, the Field Poll and the Los Angeles Times Poll have long track records for accuracy. The same is true nationally of the network news polls and the Gallup Poll. Another indicator is whether a poll is done in connection with a university, which would subject its methods to academic peer review.


A common-sense piece of advice is to compare poll results and throw out those that do not conform. For instance, three polls now show Feinstein ahead by six to 10 points in California--the Field, Times and KCAL surveys. Only the one taken for KCBS-TV shows Huffington leading, and the same pollster has recorded similar anomalous results in a Michigan race.
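That comparison can even be made mechanical. Here is a minimal sketch: the KCAL (+10) and KCBS (-2) leads are from the article, while the Field and Times values are illustrative stand-ins within the reported six-to-10-point range, and the five-point cutoff is arbitrary:

```python
import statistics

# Feinstein's lead, in points, across four California polls.
polls = {"Field": 6, "Times": 8, "KCAL": 10, "KCBS": -2}

center = statistics.median(polls.values())   # 7.0
for name, lead in polls.items():
    if abs(lead - center) > 5:               # arbitrary "does not conform" cutoff
        print(f"{name} ({lead:+} points) does not conform; median is {center:+.0f}")
# prints: KCBS (-2 points) does not conform; median is +7
```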

But even if all of that advice is followed, the public should still proceed with caution--especially when looking at so-called “horse race” polls. One major reason is the difficulty of determining which respondents should be counted because they will actually vote on Election Day, and which should be ignored because they are unlikely to vote.

“The nasty little secret of the polling business is that this so-called likely voter definition is just an educated guess,” said Los Angeles Times pollster John Brennan.

Every pollster has a different way of defining likely voters, Brennan said. Some base it on past voting behavior, which can undercount young voters. Some just ask a few questions. Others have elaborate screens, including demographics and knowledge of the political race.
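To make Brennan’s point concrete, here is a sketch of what one such screen might look like. Every question, weight and cutoff below is invented for illustration; it describes no pollster’s actual method:

```python
def likely_voter_score(respondent):
    """Score a respondent on a hypothetical likely-voter screen."""
    score = 0
    # Past behavior: voted in the last midterm. Note how this alone
    # would undercount young voters, who have no voting history yet.
    if respondent.get("voted_last_midterm"):
        score += 2
    # Self-report: says they are "certain" to vote on Election Day.
    if respondent.get("certain_to_vote"):
        score += 2
    # Knowledge screen: can name a candidate in the race.
    if respondent.get("knows_candidates"):
        score += 1
    return score

def is_likely_voter(respondent, cutoff=3):
    # The cutoff itself is the "educated guess" Brennan describes:
    # move it one point and the published horse race can change.
    return likely_voter_score(respondent) >= cutoff
```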

Other factors can also alter a poll’s results:

* Is the pollster choosing respondents from a master list of voters or from the phone book? Each has advantages, but they will yield different pools of respondents.

* How often do the pollsters call back if there is no answer? If they do not call back at least two or three times, pollsters said, they are likely to undercount young people and the affluent and overcount those more likely to be home--parents of young children and the elderly.


* What is the race or accent of the person asking the questions, particularly if the political contest has a racial dynamic such as the battle over Proposition 187 in California?

* How are questions worded? For example, one reason the “generic party” poll results are at odds, according to Frankovic, is that different organizations ask different questions.

* Are the respondents aware of the political world? Only 30% of the public can even name a congressman from their state, Frankovic said.

All of those problems are subtle and much more difficult to measure than the frequently cited “margin of error,” a calculation that captures only the statistical uncertainty that comes from interviewing a limited number of people.
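For reference, that frequently cited figure is simply the textbook sampling-error formula at 95% confidence--a sketch that assumes simple random sampling, which telephone polls only approximate:

$$\text{MOE} = 1.96\sqrt{\frac{p(1-p)}{n}} \;\le\; \frac{0.98}{\sqrt{n}}$$

where $n$ is the number of interviews and $p$ is the reported share; for $n = 1{,}000$ interviews, that works out to about plus or minus 3 points. Nothing in the formula reflects callbacks, question wording, interviewer effects or the likely-voter screen.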
