In a typical year, talking about how the weighting of a poll sample works would be a good way to put people to sleep. As we know, this isn't a typical year.
On Wednesday, the New York Times published an interesting piece that looked at how the USC/Los Angeles Times “Daybreak” tracking poll weights its sample. The article suggested that the weighting was the main factor in causing the Daybreak poll to be the only major survey that had consistently shown Donald Trump ahead.
That's not entirely the case, and here are some questions and answers about why. Note that as of Thursday, the poll has the two candidates tied, but the trend over the past week has been a steady decline of Trump's standing and an increase for Clinton. Over the next few days, we'll see how long that trend continues and whether Clinton takes a firm lead in the survey.
What is weighting and why do pollsters do it?
For a poll to be legitimate, groups in the population must be proportionally represented. Usually, though, the numbers for at least one group are out of whack. In order to make sure that the sample is fully representative, polls will give extra weight to some respondents. The idea is to match the sample to known demographic statistics, including race, gender and age.
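To make the mechanics concrete, here's a minimal sketch of that kind of demographic weighting. The numbers are invented for illustration, and real pollsters (including the Daybreak poll) use more elaborate methods, but the basic arithmetic is the same: each respondent's weight is the group's known population share divided by its share of the sample.

```python
# Illustrative demographic weighting with made-up numbers.
# weight = population share / sample share for the respondent's group.
from collections import Counter

population_share = {"men": 0.49, "women": 0.51}   # hypothetical census figures
sample = ["men"] * 60 + ["women"] * 40            # a sample that over-represents men

counts = Counter(sample)
n = len(sample)

weights = {g: population_share[g] / (counts[g] / n) for g in counts}

# A weighted tally now matches the population mix
weighted_total = {g: counts[g] * weights[g] for g in counts}
print(weights)         # men get weight below 1, women above 1
print(weighted_total)  # men: 49.0, women: 51.0
```

Respondents from the overrepresented group count for a little less than one person each; those from the underrepresented group count for a little more.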
How much difference does weighting make in the poll?
After the New York Times story was published, we reweighted the data to remove one of the weights that has attracted the most discussion — balancing the sample to match how people said they voted in 2012. The result? Some change, but not an overwhelming amount. Of the poll's 14 weeks so far, there were three times when removing that weight switched the result from a Trump lead to a Clinton lead. Generally, the shift was between one and two percentage points.
Does the Daybreak poll go further than typical weighting? Why?
Yes. The pollsters wanted to make sure that the sample was not just balanced for big population groups, such as men and women, but also for smaller groups like young minority voters, who are often underrepresented in polls. Leading political scientists have suggested such an approach to make sure that polls accurately reflect the U.S. population. Four years ago, some polls ended up being wrong because they didn't fully represent the population's diversity.
Is there a downside?
There sure is, and that's where the 19-year-old poll respondent the New York Times article focused on comes in. If a poll has a small number of people who belong to a particular group — young black men, for example — there's always a risk that those few respondents won't truly represent the group's views. When pollsters then give those few people extra weight, the results, especially for that subgroup, can be skewed.
In the case of the Daybreak poll, there's one young black respondent who is a strong supporter of Trump. He hasn't responded to the poll every week, but when he does, he pushes up Trump's number among African Americans.
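A small sketch shows how much leverage a single heavily weighted respondent can have on a small subgroup. The weights and group size here are invented for illustration, not taken from the poll:

```python
# How one heavily weighted respondent can move a subgroup estimate.
# (weight, supports_trump) for a hypothetical subgroup of 10 respondents;
# all numbers are invented.
subgroup = [(1.0, False)] * 9 + [(30.0, True)]   # one respondent with a large weight

total_weight = sum(w for w, _ in subgroup)
trump_share = sum(w for w, s in subgroup if s) / total_weight
print(f"{trump_share:.0%}")  # prints "77%": one person dominates the weighted estimate
```

Even though nine of the ten respondents back the other candidate, the one outlier carries most of the subgroup's total weight.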
How does the poll deal with that?
When the poll's results shift suddenly, the margin of error increases. That should be a red flag for everyone that such a blip in the poll's results may simply be statistical noise, not a real shift in public opinion. As we've written before, the fluctuations in Trump's black support fall into that category of statistical oddities, not a real indication of movement. Some people have ignored the margin of error in order to make a political point, but there's not much we can do about that.
Did that young black Trump supporter have a big impact on the poll’s overall results?
He did have some impact on Trump's support some weeks — often less than a point, but definitely a measurable difference. It's not a huge shift, but if several small shifts all move in the same direction, they can add up. Of course, the whole point of weighting is to make sure that all groups are represented; Trump does have some young supporters, even among minority groups. There's another young, black man in the poll who is a strong Clinton supporter. When he participates, he pushes up Clinton's vote. Both men have strong weights in the poll because they are part of a group that would otherwise be underrepresented.
What about that weighting according to how people voted in 2012? Why do that?
One of the groups often underrepresented in polls is people who don't vote in every election. In order to make sure the poll has an adequate representation of past nonvoters, and to try to get the partisan balance right, the pollsters weight the sample so that 48% are people who did not vote in 2012 — the share of people eligible to vote this year who sat out that election either because they were too young or merely didn't cast a ballot — and the rest are divided the way the actual vote happened.
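The target described above can be sketched in a few lines. The 48% nonvoter figure comes from the pollsters; the 2012 vote shares below are the rounded actual popular-vote results (Obama 51.1%, Romney 47.2%), used here only to illustrate the arithmetic:

```python
# Sketch of the past-vote target: 48% 2012 nonvoters, with the remaining
# 52% split in proportion to the actual 2012 popular vote.
nonvoter_share = 0.48
actual_2012 = {"Obama": 0.511, "Romney": 0.472, "other": 0.017}

targets = {"nonvoter": nonvoter_share}
targets.update({k: v * (1 - nonvoter_share) for k, v in actual_2012.items()})

print(targets)
# e.g. Obama voters are targeted at 0.511 * 0.52, roughly 26.6% of the sample
```

The weighting then nudges the sample toward those target shares, just as with the demographic categories.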
What’s the downside there?
People may not tell the truth about whether they voted or for whom. Typically, some people will claim they voted for the winner even though they didn’t. Some of the poll respondents who claim to have voted for President Obama in 2012 probably did not, which would throw off the partisan balance the weighting is meant to achieve.
For more on Politics and Policy, follow me @DavidLauter
10:00 p.m.: This article was updated with information about a second young, black poll respondent who is a Clinton supporter.