How the Berkeley-Los Angeles Times poll of congressional races works
The UC Berkeley Institute for Governmental Studies poll that the Los Angeles Times is releasing Thursday uses a method that differs from our past surveys. Here’s an explanation of how it works, and why we’re doing it this way.
Like many surveys, the Berkeley IGS Poll, directed by veteran pollster Mark DiCamillo, starts with what’s known as the voter file. That’s the information the state makes public about each registered voter.
The voter file doesn’t show how a person voted, but it does show who voted in each election and some basic information, including a voter’s party registration and where the voter lives. In most states, the voter file includes a telephone number, which is where pollsters most often get the numbers they use for surveys.
In California and some other states, the voter file can include an email address. About one-third of voters in the state have supplied an email.
The Berkeley IGS poll uses those publicly available email addresses to send a sample of voters a message asking if they would be willing to take a survey about the election. Those who respond to that query or to follow-up emails get a survey they can complete online.
This approach has several advantages
Doing the poll this way provides a random sample of voters, unlike many online surveys that use a non-random sample and then must adjust it to match a voter model.
Using email and an online survey lowers costs considerably compared with telephone surveys, which need live interviewers.
And because the sample is drawn from official state and county records, we know that everyone surveyed is registered to vote and which district they’re registered in. We also know the voter’s party registration, age, gender and in many cases, race and ethnicity. All that information can be used when the pollsters adjust the sample to match the demographics of the voting population.
The file also provides accurate information about how often each voter has participated in past elections. That’s helpful in identifying which registered voters are most likely to vote this time.
Finally, this method allows the poll to easily query people in languages other than English and to show respondents visual displays, such as the text of the ballot language for an initiative.
For this poll, respondents were allowed to take the survey in English or Spanish and, in the 48th Congressional District, in Vietnamese. The languages offered were based on how many voters in each district have requested a non-English ballot in past elections.
What are the drawbacks?
The biggest potential problem is that only about one-third of voters include an email address in their voter file, and that one-third isn’t randomly distributed.
Certain groups (Latino voters, for example) are less likely to have an email in the voter file. (Telephone surveys that use the voter file face a similar challenge, since not everyone has a working number on file, but the problem is bigger with emails.)
To deal with potential underrepresentation of some groups, the survey uses a technique that pollsters call stratification. That means Berkeley sends emails to a larger share of people who are part of underrepresented groups. Stratification adds some complication to the polling, but it results in a more representative sample. Telephone surveys also often stratify their samples to ensure they get adequate representation of hard-to-reach groups.
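The oversampling idea behind stratification can be illustrated with a minimal sketch. The group labels and sampling rates below are hypothetical, not the poll's actual design; the point is simply that inviting underrepresented groups at a higher rate produces a more balanced pool of invitations.

```python
import random

random.seed(0)

# Hypothetical voter file: each record belongs to a demographic group.
# Groups underrepresented among voters with emails on file get a higher
# invitation rate, so the invited sample is more balanced.
voter_file = (
    [{"id": i, "group": "underrepresented"} for i in range(2_000)]
    + [{"id": i, "group": "well_represented"} for i in range(2_000, 10_000)]
)

# Illustrative rates: invite 30% of the underrepresented group, 10% of the rest.
rates = {"underrepresented": 0.30, "well_represented": 0.10}

invited = [v for v in voter_file if random.random() < rates[v["group"]]]

share = sum(v["group"] == "underrepresented" for v in invited) / len(invited)
print(f"Underrepresented share of invitations: {share:.0%}")
```

Without stratification, the underrepresented group would make up about 20% of invitations; with the higher rate it lands closer to 40%, offsetting its lower email coverage.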
Once the survey results come in, the poll takes an additional step to ensure that the sample accurately represents the population at large, weighting the data to reflect known demographics, including age, race and ethnicity, gender and party registration.
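The weighting step can be sketched in a few lines. This is a toy post-stratification example with made-up numbers, not the pollsters' actual procedure: each respondent gets a weight equal to their demographic cell's population share divided by its share of the raw sample.

```python
# Toy post-stratification sketch (illustrative, not the poll's actual code).
sample = ["young"] * 20 + ["old"] * 80          # raw sample: 20% young
population = {"young": 0.40, "old": 0.60}       # known target shares

n = len(sample)
sample_share = {g: sample.count(g) / n for g in population}

# weight = population share / sample share for each respondent's cell
weights = [population[r] / sample_share[r] for r in sample]

# The weighted share of "young" respondents now matches the 40% target.
weighted_young = sum(w for r, w in zip(sample, weights) if r == "young") / sum(weights)
print(f"{weighted_young:.0%}")  # → 40%
```

In practice pollsters weight on several variables at once (age, race and ethnicity, gender, party registration), often with iterative techniques such as raking, but the principle is the same.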
Finally, the poll selects those voters deemed likely to actually cast ballots. That’s based on their voting history and their answers to questions about their certainty of voting and the degree of interest they have in the current election.
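A likely-voter screen of this kind can be thought of as a scoring rule. The thresholds and scales below are hypothetical (the poll's actual cutoffs are not described here); the sketch only shows the shape of the idea: combine past turnout with self-reported certainty and interest.

```python
# Illustrative likely-voter screen; the scoring rule and cutoff are assumptions.
def is_likely_voter(past_elections_voted: int, certainty: int, interest: int) -> bool:
    """certainty and interest on a hypothetical 1-5 scale."""
    score = past_elections_voted + certainty + interest
    return score >= 8  # hypothetical cutoff

print(is_likely_voter(past_elections_voted=3, certainty=5, interest=4))  # True
print(is_likely_voter(past_elections_voted=0, certainty=2, interest=2))  # False
```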
Why do this rather than a traditional phone survey?
As most people who follow politics know, polling has gotten harder in recent years.
People have more telephone numbers and answer fewer of them. The share of voters willing to talk with a pollster — what’s known as the response rate — has dropped dramatically. This year, many telephone surveys of congressional districts have found that only 1% to 2% of calls yield a completed interview.
Response rates as low as that create serious challenges for pollsters, who need to have a random sample of the population for their surveys to accurately represent people’s views. The huge number of calls needed to get a reasonably sized sample also drives up the cost.
Despite those challenges, polls have continued to accurately forecast elections in the vast majority of cases. Even in the 2016 presidential election, most national and state polls were accurate, although commentators made a lot of bad predictions based on the polls.
But declining response rates and rising costs have led pollsters and media organizations to hunt for new ways to accurately measure public opinion. This is one.
This Berkeley IGS poll, which was done for the Los Angeles Times, surveyed 5,090 likely voters in eight congressional districts — the 10th, 22nd, 25th, 39th, 45th, 48th, 49th and 50th — from Sept. 16 to 23. The number of likely voters surveyed per district ranges from 519 in the 45th District to 912 in the 22nd. The margin of error ranges from roughly 4 to 6 percentage points in either direction.
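For readers curious where figures like those come from, the textbook 95% margin of error for a simple random sample is 1.96 times the standard error of a proportion. The sketch below computes that baseline for the poll's largest and smallest district samples; it deliberately ignores the design effects from stratification and weighting, which push the real margins somewhat higher, toward the 4-to-6-point range reported above.

```python
import math

def margin_of_error(n: int, p: float = 0.5) -> float:
    """95% margin of error for a simple random sample of size n.

    Uses p = 0.5, the worst case, and ignores design effects from
    stratification and weighting.
    """
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (912, 519):  # largest and smallest district samples in the poll
    print(f"n={n}: ±{margin_of_error(n):.1%}")
```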