
Times Poll: Frequently Asked Questions


These are some questions about public opinion polling we've been asked frequently over the years. If your question isn't answered here, you can e-mail it to timespoll@latimes.com and we'll answer it if we can.

  1. What is a "margin of sampling error"? A "confidence interval"?
  2. Why do different polls sometimes get such different results?
  3. Why don't I see the opinions of Asians cited in Times Poll stories as often as the opinions of other groups?
  4. ...but aren't there as many Asians as blacks in California?
  5. Why doesn't the Times Poll conduct online or call-in polls?
  6. Why haven't you ever called me or anyone I know?
  7. Everyone I know disagrees with your poll.
  8. Who are these "likely voters"?
  9. How can I participate in your polls?

Question 1

What is a "margin of sampling error"?
What is a "confidence interval"?

Answer

These two terms -- margin of sampling error and confidence interval -- are closely related, and together they help us gauge the "strength" or "truth" of a statistic obtained from a sample, such as a polling result. This is why any reputable survey will include a statement of the margin of sampling error alongside its results.

Ideally, to answer a question like "Are the voters going to elect Mr. Alpha or Ms. Beta to be mayor of our city?" one would contact every voter in the city and ask how he or she intends to vote. Even if all voters had made up their minds already and would truthfully tell a pollster their preferences, it is simply not possible to interview that many people. Pollsters instead interview a smaller number of randomly selected city voters and use standard statistical methods to project their answers to the rest of the population.

The margin of sampling error and the confidence interval are the expression of the confidence with which that projection may be made. Typically, a sample is analyzed with a standard confidence level of 95%, meaning that 95% of the time the actual number will lie within the margin of sampling error. (This is such a standard measure that we usually don't even mention it.) The margin of sampling error, then, is the range of numbers surrounding the projected figure, such that we can be 95% confident that the actual number lies within that range.

An example may help:

In a sample of 500 city voters, 45% said they will vote for Mr. Alpha, 51% said they will vote for Ms. Beta and 4% said they will vote for someone else. The figures seem to tell us that Ms. Beta is leading by a fairly solid six points; but can we say with confidence that Ms. Beta is ahead?

The answer lies with the margin of sampling error. Based on a survey of 500 people, we can be 95% sure that the correct value lies within 4 percentage points of our result. (Don't worry about where this number 4 comes from. It is a standard calculation that can be found in any basic statistics book.) We say that this survey has a "margin of sampling error of plus or minus 4 percentage points."
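For readers who want to see the calculation, here is a minimal sketch of that standard formula (the code and function names are ours, for illustration only):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion.

    z = 1.96 is the standard normal score for 95% confidence;
    p = 0.5 gives the widest, most conservative margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 500 gives roughly the 4-point margin quoted above.
print(f"{margin_of_error(500):.1%}")  # -> 4.4%
```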


What this means is: Our small sample enables us to project that Ms. Beta currently has the backing of between 47% and 55% of city voters (add and subtract 4% from 51%). This range of values is called the confidence interval. By the same reasoning, the survey tells us that Mr. Alpha's level of support is between 41% and 49%. Since the top range of Mr. Alpha's vote (49%) and the bottom range of Ms. Beta's vote (47%) overlap, this race would be too close to call based on a sample size of only 500 -- that is, we could not say with a 95% level of confidence that Ms. Beta is actually ahead. On the other hand, a larger sample (which would produce a smaller margin of sampling error) might let us make that call.
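The same formula makes the overlap test concrete. The sketch below computes each candidate's interval exactly (the example above rounds both margins to a flat 4 points) and shows that a larger sample could separate them:

```python
import math

def confidence_interval(p, n, z=1.96):
    """95% confidence interval for a proportion p from a sample of n."""
    moe = z * math.sqrt(p * (1 - p) / n)
    return p - moe, p + moe

for n in (500, 2000):
    alpha_low, alpha_high = confidence_interval(0.45, n)
    beta_low, beta_high = confidence_interval(0.51, n)
    too_close = alpha_high >= beta_low  # do the intervals overlap?
    print(n, "too close to call" if too_close else "Beta leads")
# n=500 overlaps (too close to call); n=2000 would let us make the call.
```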

The Times Poll's sample sizes typically produce a margin of sampling error of +/- 3 percentage points.
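Run in reverse, the formula shows roughly what sample size a margin of +/- 3 percentage points implies (again, an illustrative sketch):

```python
import math

def sample_size_for_margin(moe, p=0.5, z=1.96):
    """Respondents needed to hit a given margin at 95% confidence."""
    return math.ceil((z / moe) ** 2 * p * (1 - p))

print(sample_size_for_margin(0.03))  # -> 1068 respondents for +/- 3 points
```

This is why published polls so often use samples of roughly 1,000 to 1,500 respondents.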


Question 2

Why do different polls sometimes get such different results?

Answer

"Polls are a snapshot in time." This is a cliche, but -- like most cliches -- true. Surveys are done over a period of days and responses can be affected by such things as television coverage of events, campaign advertisements and opinions expressed by people in the news. Two survey organizations never ask exactly the same questions in the same order of the same people over the same period of time. Even if they did, their results could vary by several points (see "margin of sampling error" above) and still be considered statistically valid.

Sampling error

Polls measure responses to specific questions and are subject to random and introduced error of many kinds. Survey results are often discussed by the media as if they were exact figures when they are, in truth, measured approximations. Confidence intervals help analysts account for random sampling error, but not for error introduced by leading question wording, question-order bias or interviewers who play to a respondent's natural desire to please.

Differences in question wording and/or context

An example taken from the late Times Poll Director John Brennan's column of May 20, 1993, illustrates the point: On May 6 a Nightline broadcast noted that a majority of Americans now supported military action in Bosnia-Herzegovina. However, the next day's USA Today headline, "55% Oppose Air Strikes," sent a completely different message.

How is it that two news organizations had such different perspectives on public opinion? Examine the different question wordings below:

Gallup/CNN/USA Today question:

"As you may know, the Bosnian Serbs rejected the United Nations Peace plan and Serbian Forces are continuing to attack Muslim towns. Some people are suggesting the United States conduct air strikes against Serbian military forces, while others say we should not get militarily involved. Do you favor or oppose U.S. air strikes?"

ABC News question:

"Specifically, would you support or oppose the United States, along with its allies in Europe, carrying out air strikes against Bosnian Serb artillery positions and supply lines?"

Gallup/CNN/USA Today results:

  Favor                    35%
  Oppose                   55%
  No Opinion                6%
  Depends (volunteered)     3%

Poll of 603 adults nationwide, taken 5/6/93. Margin of error +/- 5 percentage points.

ABC News results:

  Favor                    65%
  Oppose                   32%
  No Opinion                3%

Poll of 516 adults nationwide, taken 5/6/93. Margin of error +/- 5 percentage points.


The Gallup poll question (cited by USA Today) did not mention European allies, making it sound like the U.S. would be acting alone in carrying out air strikes. This question wording found only 35% in favor. But when ABC News asked about support for air strikes in conjunction with our allies, 65% were in favor. The differences illustrate the importance of question wording in survey research.

Question 3

Why don't I see the opinions of Asians cited in Times Poll stories as often as the opinions of other groups?

Answer

The Times Poll asks itself this question in a different way: "How do we produce reliable samples of the different peoples in our multi-racial, multi-lingual population while still maintaining our rigorous standards and meeting our deadlines?"

When the specific opinions of Asian Americans or other minority groups are not cited in a Times Poll story, graphic or stat sheet, it is because the Poll never cites results for subgroups containing fewer than 100 respondents. Because the Asian population is multi-lingual and widely dispersed, it is very difficult to obtain good samples of the Asian communities living in Southern California. Simply interviewing more English-speaking Asians, in order to have enough respondents to cite, would result in OVER-reporting the responses of the smaller group who speak English. This problem is what keeps us from regularly "oversampling" the Asian population in order to include their answers in our results.

We have accepted the challenge of surveying this important community while maintaining good sampling techniques by undertaking a series of polls conducted in the language of the respondent's choice, each one focused on a particular Asian subpopulation. We have completed in-depth surveys* of Korean (poll #267), Vietnamese (#331), Filipino (#370) and Chinese (#396) groups as of summer 1997. Not only are our results published in The Times, but the surveys have attracted national attention and we have made the data available to academic and media analysts all over the country.

*The Stat Sheets for these surveys -- and for most of the other polls we've done since 1992 -- can be found in the Poll's free archives.


Question 4

...but aren't there as many Asians as blacks in California?

Answer

It is true that the 1990 Census and subsequent projections show that the two populations are of similar size in California. The main difference between the two groups from a polling perspective is the proportion of English speakers in those populations.


In California, for example, according to the 1990 Census, 94% of black adults speak English at home while only 19% of Asian adults speak English at home. Virtually all (99%) of the black adult population speaks English at home, or speaks English well or very well; 78% of Asians fit that description. This means that the black population is fully represented in our sample, while we are unable to speak with more than one out of every five Asian adults that we reach.


Question 5

Why doesn't the Times Poll conduct online or call-in polls?

Answer

Online polls are surveys that computer users can participate in by answering questions over an Internet connection or some other online service. Call-in polls are surveys in which people are invited to call a phone number (which may or may not be toll-free) to register their views.

The Times Poll does not conduct polls of these kinds because their results are unreliable as a measure of public opinion. They have several methodological problems.

Our polls (like all scientifically sound public opinion surveys) are conducted by first selecting a random sample of people to interview -- usually based on their telephone numbers -- and then calling each of them and persuading them to talk to us about the subject we are interested in. We carefully monitor the results to be sure that our sample is representative of the population we are sampling in race, educational attainment, regional distribution and so on. The results of such a survey can be relied upon to approximate the views of the entire population.

Online and call-in polls, on the other hand, represent only the views of those who actually participate in them. Rather than seeking out and interviewing a representative sample of the larger population, such surveys tend to attract those who are especially motivated to respond to a particular issue. This is known as a "self-selected sample." Such a group would probably be weighted toward people who write letters to the editors of newspapers or call up radio talk shows -- mainly those with strongly held opinions on a particular subject.

Any online survey is also self-selecting in that it is accessible only to those who have a connection to the Internet or to the online service provider, which at present still excludes a very large portion of the population.

Unfortunately, the media sometimes fail to distinguish between such surveys and scientifically valid public opinion polls. There are both good and bad surveys out there, so it is important that you as a viewer or reader exercise your own informed judgment about the statistics being presented to you in the media. We hope this FAQ helps you to do that.


Question 6

Why haven't you ever called me or anyone I know? I've lived here for many years...

Answer

Here is a thought-experiment to illustrate the answer. First, note that there are approximately three million people over 18 years of age living in the city of Los Angeles. The Times Poll speaks with approximately 1,500 people per survey and has completed over 400 polls as of the end of 1997, so we have conducted over 600,000 interviews in 20 years of polling. Even if all those polls had been conducted in the city of Los Angeles (they weren't, of course -- they were conducted all over the world), those interviews would not reach 600,000 distinct people, because the same household can be drawn in more than one poll; we still would have spoken with only about 17% of the adult population of the city. More than four out of every five people in L.A. would never have been called by the Times Poll over those 20 years.
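As a rough check on that arithmetic (a sketch only, using the approximate figures above): 600,000 interviews is a fifth of three million, but treating each poll as an independent random draw, the share of distinct adults ever called comes out a little lower:

```python
POPULATION = 3_000_000  # adults in the city of Los Angeles (approx.)
PER_POLL = 1_500        # interviews per survey (approx.)
POLLS = 400             # surveys completed as of the end of 1997

# Naive ceiling: total interviews as a share of the adult population.
print(PER_POLL * POLLS / POPULATION)      # 0.20

# If each poll is an independent random draw, the same adult can be
# reached more than once, so the share of *distinct* adults ever
# called is a little lower:
never_called = (1 - PER_POLL / POPULATION) ** POLLS
print(1 - never_called)                   # ~0.18, close to the 17% above
```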

So the odds are not very high that we will happen to call your phone number or the phone number of any of your friends. On the other hand, you have just as good a chance of being called as anyone else who has a telephone.


Question 7

Everyone I know disagrees with your poll. Did you make up the results?

Answer

It is easy to believe that the people you work with, live near and associate with socially are representative of the country or city in which you live, but in fact most people are surrounded by others whose political beliefs, demographics and personal opinions are more like their own than not. That is part of what allows pollsters and the Census Bureau to sample populations with as much accuracy as we do.

The Times Poll complies with the standards set by the American Association of Public Opinion Research and the National Council on Public Polls. We carefully monitor interviews in progress to be sure that the questions are asked in a uniform and neutral manner by our interviewing staff, and we use random-digit dialing techniques to ensure that everyone with a telephone has an equal chance of being included in our surveys. (Coverage of the 5% of Americans with no phone in their homes is a different subject.) In other words, we take great pains to see to it that the people we interview are a truly representative sample of the entire adult population. This ensures, among other things, that a proportionate number of our respondents will represent your views.


Question 8

Who are these "likely voters" and how are they chosen?

Answer

Likely voters are an elusive group. Each polling organization has its own way of identifying likely voters, generally based upon the answers respondents give to questions about their past voting behavior and their present intention to vote. No matter how the group is defined, the purpose is the same: Pollsters want to know the likely outcome of the election and therefore are especially interested in those voters who will actually go to the polls on election day and not just in the much larger group of registered voters.
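As a purely hypothetical illustration -- no polling organization's actual screen is shown here -- such a screen might combine a few answers into a score:

```python
def is_likely_voter(answers):
    """A purely hypothetical likely-voter screen.

    `answers` maps question names to a respondent's replies:
      registered:          bool
      voted_last_election: bool
      intention:           0-10 self-rated certainty of voting
    Every criterion, weight and cutoff here is invented for the example.
    """
    score = 0
    if answers.get("registered"):
        score += 2
    if answers.get("voted_last_election"):
        score += 2
    if answers.get("intention", 0) >= 8:
        score += 1
    return score >= 4  # arbitrary cutoff, for illustration only

print(is_likely_voter(
    {"registered": True, "voted_last_election": True, "intention": 9}))
# -> True
```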

There is no foolproof way to identify likely voters. Someone who fully intends to vote when asked three weeks before an election, for example, might change his mind next week or be kept from voting by some last-minute emergency.


Question 9

How can I participate in your polls?

Answer

Unfortunately, we cannot interview volunteers, although we always appreciate the offer.

Here's why:

In order to speak with a representative sample of all Americans, we only call telephone numbers that have been randomly selected by a computer from a list of every possible phone number in the U.S. (or in California, if it's a state poll, etc.). This ensures that everyone with a telephone has an equal chance of being called, and therefore that all kinds of people with all kinds of opinions will be proportionately represented in our sample. (See also the discussion in Question 5 about "self-selected samples.")
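To illustrate the idea (this is not the Times Poll's actual procedure, just a toy sketch), random-digit dialing amounts to drawing the digits of a telephone number at random within the covered area codes, so unlisted numbers are as likely to be drawn as listed ones:

```python
import random

def random_phone_number(area_codes):
    """Toy random-digit-dialing draw; the area codes are placeholders.

    The first digit of the exchange is 2-9, per the North American
    Numbering Plan; every remaining digit is uniformly random, so
    unlisted numbers are as likely to be drawn as listed ones.
    """
    area = random.choice(area_codes)
    exchange = f"{random.randint(2, 9)}{random.randint(0, 99):02d}"
    line = f"{random.randint(0, 9999):04d}"
    return f"({area}) {exchange}-{line}"

print(random_phone_number(["213", "310", "818"]))
```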
