Daily Voter Polls: Tracking: Does It Lead to Truth?

Times Staff Writer

In the bar at the Sheraton Wayfarer Inn near Manchester, N.H., where the political wisdom was poured, the analysis went like this:

George Bush saved his political career by coming back in the last three days of the New Hampshire primary to overtake Bob Dole and win.

And Bob Dole, who had the Republican presidential nomination within reach, now finds his campaign critically wounded. Within a week, Dole had started firing aides.

In the week since the New Hampshire primary, this analysis has determined what the press has reported, how candidates have behaved, how much money they have raised and, in essence, what Americans know about the race for President.

But is it really true? Had Dole really caught up to Bush and possibly passed him earlier in the week? Did Bush really come from behind?

Maybe Not

The answer, say political professionals, is maybe not.

Much of this conventional wisdom is distilled from public-opinion studies called tracking polls, which purport to monitor daily changes in how the public thinks. This year, not only candidates are using these polls to map strategy. The news media also are conducting their own polls to determine who is ahead in the race day by day. And they detected Dole’s rise in the week between the Iowa caucuses and the Feb. 16 New Hampshire primary.

The problem, say many pollsters and political professionals, is that tracking polls can be wildly misleading. They are hard to conduct and easy to misread. And they were never designed to reliably predict who was ahead or behind in elections, as the press is doing this year.

“Tracking polls have value,” said longtime Republican political consultant Eddie Mahe. “But misunderstood, poorly executed, used (by reporters) as the lead for stories to define the hard reality of the world, that is straining science . . . into science fiction.”

Fiction Becomes Real

And then fiction becomes real. In the looking-glass logic of politics, perception is reality, victory a matter of expectation. Whether tracking polls are accurate doesn’t matter if enough people believe them.

And here, say political professionals, the politician and the press are in cahoots.

“Using tracking polls to reposition candidates does influence the expectations game, and colors how elections get reported,” veteran Republican political consultant John Deardourff said. “But I think campaigns themselves bear part of the blame” by putting too much faith in tracking.

In a sense, tracking polls are a game of statistical leapfrog played over several days. Traditional polls ask dozens of questions of 1,200 to 1,500 people. But due to the constraints of time and money, tracking polls ask a handful of questions of only a few hundred people a night: Do you view the following people favorably or unfavorably? Whom do you plan to vote for?

Since each one-day sample is too small to be meaningful, the pollster adds each night’s results to the last four or five, subtracting the oldest sample each day.

In so doing, the thinking goes, the pollster tests daily changes in public opinion without having to interview thousands of people every night.
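
As a rough illustration of that rolling-window arithmetic, here is a minimal Python sketch; the nightly sample sizes and candidate shares are invented for the example, not taken from any of the polls discussed here.

```python
# A minimal sketch of the rolling-window arithmetic described above.
# The nightly figures are invented for illustration only.
from collections import deque

WINDOW_NIGHTS = 4  # pool the last four nights; the oldest night drops out

# Hypothetical nightly results: (respondents interviewed, share backing Candidate A)
nightly_results = [(200, 0.36), (200, 0.33), (200, 0.38), (200, 0.31), (200, 0.29)]

window = deque(maxlen=WINDOW_NIGHTS)  # the deque discards the oldest night automatically
for night, (n, share) in enumerate(nightly_results, start=1):
    window.append((n, share))
    pooled_n = sum(size for size, _ in window)
    # Weight each night's share by its sample size, then report the pooled figure.
    pooled_share = sum(size * s for size, s in window) / pooled_n
    print(f"Night {night}: rolling sample = {pooled_n}, Candidate A = {pooled_share:.1%}")
```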

Computers made tracking possible in the late 1970s, by allowing pollsters to correlate interview results instantly.

But before 1980, presidential campaigns tracked only sparingly in primaries, and news organizations not at all.

Then, in 1984, ABC News and the Washington Post tracked in New Hampshire and picked up that Gary Hart could beat Walter F. Mondale there.

In the world of political techno-journalism, that counts as an old-fashioned exclusive.

So this year, media organizations of all sorts are tracking. For pollsters, the New Hampshire primary was like the opening weekend of hunting season.

Not only did ABC and the Post track again during the week leading up to the primary, so did CBS News, the Boston Globe, the Boston Herald and a local Boston TV station.

Closed the Gap

Their effect, in sum, was to create the impression that Dole, boosted by his Iowa victory Feb. 8, closed the gap and perhaps even passed Bush.

And in the political wilderness of mirrors, that made Bush’s nine-percentage-point victory, which might have seemed small a week earlier, seem stunning.

Consider this progression:

On the Friday before the Tuesday vote, CBS led its broadcast with tracking polls showing the race dead even. But, said Dan Rather, “the trend suggested by this poll taken over the past two days is that Bush’s support is down and dropping; Dole’s is up and climbing.”

By Monday afternoon, local TV station WBZ in Boston showed Dole 1 point ahead of Bush. The station did not reveal what margin of error the poll contained.

Monday night, ABC News and the Washington Post reported that, over the previous three days, Dole was ahead of Bush by 7 points. They, too, did not reveal margins of error.

By Tuesday, the ABC-Post poll showed Dole leading by 3 points. CBS had Bush up by 4.

“For Bush (the media tracking polls) were the best thing that could have happened,” said Mary Klette, director of elections and polling for NBC, which is not tracking this year.

“Whether they intended it or not, the impression the press left was that they were sharing numbers that are predicting something,” said Deardourff. “The technology does not do that. It doesn’t claim to.”

“The problem is the press believes tracking polls too much,” said Times political consultant William Schneider, and as a result so does the public.

In New Hampshire, indeed, most television broadcasts, including those of the networks, led their political stories with the nightly tracking polls. The polls also helped frame the top political story each day in the Boston Globe, which circulates widely in New Hampshire.

One reason the press finds tracking polls so irresistible, Washington Post polling director Richard Morin has written, is that “often the surveys provide the closest thing to news in a reporter’s otherwise stale recitation of even more stale stump speeches.”

But, say defenders, would you rather leave how we view the race to mere humans, to Dan Rather or perhaps Tawny Little, to interpret?

“What is the margin of error on a political reporter?” said Jeff Alderman, director of polling for ABC News. “Who is more accurate, a pollster or a pundit? I say the pollsters. Sure we’re not perfect. Polling is not physics. Maybe expectations about the objectivity of polling are too high.”

However, even the political professionals who first employed tracking polls say they never were designed to show who is ahead.

Rough Yardsticks

They were used, instead, as rough yardsticks against which to measure campaign strategy. Have the rival camp’s attack ads hurt your candidate? Should you counterattack with ads of your own? Is your new message getting across?

And even then, said political consultants, what you make of the information is largely intuitive. “You know the moves you can plan. You see the turns,” said longtime political consultant David Garth. But what you actually do with it, “that comes from years of experience,” he said.

“Tracking polls are like sending a canary into a mine shaft,” said a pollster for one of the leading candidates, who spoke on the condition that his name not be used. They “can detect if there are any noxious fumes in the environment, but they can’t really detect the source.”

As predictors of how voters will behave, or even how they really feel day to day, “tracking polls are the least reliable of all polls,” said Schneider.

The reason is that tracking results are too crude, with time allowing for only a few questions. And the samples are too small, as few as 150 respondents a night--roughly a tenth of what pollsters normally consider reliable.

Asking too few questions means that tracking polls usually grossly overestimate how many people will vote.

In New Hampshire, for example, 83% of the voters said they would “certainly” or “probably” vote, but only 49% ended up actually doing so. “So obviously there are a lot of liars out there,” said I.A. Lewis, director of the Los Angeles Times Poll.

To screen out those “liars,” Lewis said, traditional polls often ask 15 or more questions, probing past voting behavior.

But most tracking polls only ask two or three questions to screen out people unlikely to vote, said pollsters. And critics believe that having so many nonvoters in the sample is one reason tracking polls can be so unreliable.
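
To see why a weak screen matters, consider this hypothetical arithmetic sketch. Only the 83% and 49% figures come from the article; the 55-45 and 45-55 candidate splits are invented for illustration.

```python
# Illustrative only: how nonvoters who slip past a weak screen can dilute a poll.
# The candidate splits are invented; the 83% and 49% figures are from the article.

claimed_voters = 0.83   # share of respondents who say they will vote
actual_voters = 0.49    # share who actually turn out

# Suppose Candidate A leads 55-45 among real voters but trails 45-55 among
# the nonvoters who pass the screen anyway (hypothetical numbers).
share_a_real, share_a_nonvoters = 0.55, 0.45

nonvoters_counted = claimed_voters - actual_voters
reported_share_a = (actual_voters * share_a_real +
                    nonvoters_counted * share_a_nonvoters) / claimed_voters

print(f"Candidate A among people who actually vote: {share_a_real:.0%}")
print(f"Candidate A as the tracking poll would report it: {reported_share_a:.1%}")
```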

“I think the only responsible way for the media to report tracking polls is tell people what the estimated turnout is based on,” said Schneider. “But of course they won’t do that, because if they say this is based on an estimated turnout of 60% of potential voters, everyone would know how absurd it was. There ain’t no primary anywhere with 60% turnout.”

More Basic Problem

Then there is the more basic problem: Even if the screening is carefully done, the nightly sample sizes in most tracking polls are too small to be statistically reliable.

With a sample of 200 people interviewed in a night, for instance, the margin of error is 7 percentage points up or down, according to Lewis.

“A guy would have to be 14 points ahead to be truly ahead,” Lewis said, “and nobody had that finding in New Hampshire (for Republicans). So it was totally meaningless.”
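
Those figures track the standard 95% margin-of-error formula for a simple random sample, sketched below; real polls also carry design effects that the textbook formula ignores.

```python
# Rough check of the figures Lewis cites, using the textbook 95% margin of
# error for a simple random sample at the worst case of a 50-50 split.
import math

n = 200          # respondents interviewed in one night
p = 0.5          # worst case for the margin of error
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"Margin of error on one candidate's share: +/-{moe:.1%}")   # about 7 points
print(f"Lead needed to be 'truly ahead': about {2 * moe:.1%}")      # about 14 points
```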

“A single day’s (polling) results can be disastrously misleading,” consultant Doug Bailey warned the day before the New Hampshire primary, writing in the Presidential Campaign Hotline, a daily newsletter for the press and campaigns.

To compensate, pollsters add several nights’ results together into a “rolling average,” making up a sample of more than 1,000.

“But there isn’t a person in this business who doesn’t peek at those one-night numbers,” said Democratic media consultant Robert Squier, particularly when events are changing rapidly.

In New Hampshire, the media started peeking, too--on the air.

The night before the primary, for instance, ABC anchor Peter Jennings could not resist characterizing the previous night’s findings.

On election day, the Washington Post peeked, too, emphasizing its one-night sample (400 people) over the three-day average.

As it turned out, the Post-ABC poll showed Bush gaining, though still behind. ABC, by the way, did not in those final days indicate what the margin of error was in its polls.

Even in their rolling averages, many news organizations in New Hampshire included only two or three nights. CBS, for instance, used two nights in its final tracking poll, a sample of 596 people, with a margin of error of 4 points up or down. It reported Bush with 34%, Dole 30%.

“We said it was a close contest with Bush leading Dole,” said Warren Mitofsky, director of elections and surveys at CBS News. “We did not make it sound like it would be a big gap.”

Margin of Error

But in fact, within the margin of error of the CBS poll, Dole could easily have been ahead of Bush.
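A back-of-the-envelope calculation, not CBS’s own, shows why: the margin of error on the gap between two candidates measured in the same sample is wider than the margin on either candidate’s share alone.

```python
# Sketch, not CBS's method: the uncertainty on the Bush-Dole gap itself.
import math

n = 596                  # CBS's two-night sample
bush, dole = 0.34, 0.30  # reported shares

# 95% margin of error on the difference of two shares from the same sample.
se_gap = math.sqrt((bush + dole - (bush - dole) ** 2) / n)
moe_gap = 1.96 * se_gap

print(f"Reported gap: {bush - dole:.0%}")
print(f"95% margin on the gap: +/-{moe_gap:.1%}")  # roughly 6 points, so Dole could lead
```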

Peeking at one-night numbers can be dangerous enough for campaigns, causing officials to overreact. But the impact on campaign expectations can be even more significant when the media do it.

One solution, some argue, is to average all news organizations’ tracking polls each night together into a kind of rough estimate.

“Four bad polls equals one good poll,” said Schneider. As a result, Schneider advocates that those networks that do track report how their results compare with others, not just report their own poll as if it were objective.
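
The arithmetic behind Schneider’s quip is simple variance reduction, sketched here under the generous assumption that the polls are independent samples of the same electorate; the nightly sample of 300 is hypothetical.

```python
# Sketch of "four bad polls equals one good poll," assuming independent
# samples of the same electorate. The sample size of 300 is hypothetical.
import math

def margin_of_error(n, p=0.5):
    """95% margin of error for a simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

one_poll = 300
four_pooled = 4 * one_poll

print(f"One poll of {one_poll}: +/-{margin_of_error(one_poll):.1%}")           # about 5.7 points
print(f"Four polls pooled ({four_pooled}): +/-{margin_of_error(four_pooled):.1%}")  # about 2.8 points
```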

In New Hampshire, indeed, all the polls showed the race tightening, though the various polls as of election day differed by several percentage points.

But, argued Mahe, in primary elections, tracking polls are likely to be wrong even when they agree, and thus they will distort expectations.

In primaries, where voters are expressing a preference for a nominee from a field of candidates rather than electing a President, a large proportion of voters don’t decide until the last day. And they may waffle widely before that day.

In New Hampshire, for instance, 25% made up their minds in the final three days, 15% on the last day, an exit poll by The Times found.

That is a far higher number than any of the tracking polls indicated.

And that means that the election effectively was decided after most or all of the tracking polls were completed.

One defense of tracking polls, offered by ABC’s Alderman, is that even if the press doesn’t track, the campaigns will, and then leak their results to reporters anyway.

“How the hell are we supposed to judge what some politician hands us?” Alderman said. “I just don’t want to go back to that era of guessing.”

Klette of NBC suggests that the media should do tracking on their own to assess campaign strategies, but they shouldn’t put the numbers on the air. But how many news organizations, one might ask, would spend the money to conduct polls that don’t make news?

Tracking polls have still other problems. Pollsters don’t have time to call people back several times, which means, said pollster Harrison Hickman, that the samples are skewed toward voters who are home a lot: the less affluent, without money to eat out, and the elderly. This is especially true over weekends, the crucial time before elections.

Getting Out Vote

And tracking polls cannot measure the ability of campaign organizations to get out the vote. Assume the tracking polls in New Hampshire were accurate, with Bush and Dole dead even. Bush could have won by nine points because of superior ability to get his supporters to the polls.

In the next primary, the 20-contest Super Tuesday election March 8, CBS isn’t planning on tracking, and ABC won’t say what its plans are.

“It is just too big to do anything with tracking,” Mitofsky said. “It would cost millions.”

But a week after comes Illinois, and with all those results from Super Tuesday to sort out, it could be tracking season again.

Researcher Eileen V. Quigley contributed to this article.
