USC/LA Times poll criticized for survey methods
While most political polls have found that the majority of voters support Democratic nominee Hillary Clinton in the 2016 presidential race, the Daybreak Poll, a partnership between the Los Angeles Times and the USC Dornsife Center for Economic and Social Research, paints a different picture. The poll, first published in July, shows Republican nominee Donald Trump winning with 44.4 percent of the vote to Clinton’s 44.1 percent. However, the poll has recently come under fire from public figures, other pollsters and national news outlets — most notably The New York Times — for its methods, which critics say have skewed the data in favor of Trump.
The Daybreak Poll data comes from The Understanding America Study, a survey panel of about 3,200 American households recruited for their demographic diversity. The study was designed to encompass the full diversity of the American people and even includes underrepresented socioeconomic groups who might normally be left out of polling surveys. According to Tania Gutsche, managing director of CESR, this gives it an advantage over other polls that may cut out certain segments of society that don’t have the means to respond to polls.
“People who say they don’t have internet or don’t have a computer, we offer them a tablet that has 4G access,” Gutsche said. “It helps bridge the divide between people in the U.S. who have and have not access to computers.”
Furthermore, the selected participants are part of a long-term study rather than a one-time opportunity. Once they agree to participate, they answer weekly surveys online that cover a wide variety of topics, including voting preferences.
However, this sampling method also has disadvantages, according to news outlets that have conducted their own polls and obtained different results. While the study coordinators go to great lengths to choose a sample representative of the entire country, that sample never changes: it consists of the same roughly 3,200 households from beginning to end. According to the statistical analysis website FiveThirtyEight, this could explain the skew toward Trump. If the initial sample is predisposed to support Trump, every figure down the line would be affected until a new sample is chosen, which will not occur until after the election.
The Daybreak Poll differs from its counterparts not only in how the participants are selected but also in how the questions are asked. Instead of presenting the question as having a “yes or no” answer, the Daybreak Poll presents the question as a probability, asking participants to rate the likelihood of their voting choice on a scale from zero to 100.
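The arithmetic behind this probability format can be illustrated with a small sketch (the responses below are hypothetical, not actual Daybreak data): each respondent's 0-to-100 likelihood is treated as a fractional vote, and the poll's topline figures are the averages of those fractions.

```python
# Sketch of a probability-style poll question, with made-up responses.
# Each respondent rates, on a scale of 0 to 100, the chance they will
# vote for each candidate; the estimate averages those chances.

responses = [
    # (chance of voting Trump, chance of voting Clinton)
    (90, 5),
    (10, 80),
    (50, 40),
    (0, 100),
]

n = len(responses)
trump_share = sum(t for t, _ in responses) / n      # average likelihood, Trump
clinton_share = sum(c for _, c in responses) / n    # average likelihood, Clinton

print(f"Trump: {trump_share:.2f}, Clinton: {clinton_share:.2f}")
```

Unlike a yes-or-no question, a respondent who is only leaning toward a candidate contributes only part of a vote to that candidate's total, so the estimate moves gradually as opinions firm up.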
In addition to panel design and question framing, the Daybreak Poll’s post-processing procedure has drawn criticism. Each day, the survey responses are weighted to match demographic characteristics from the U.S. Census Current Population Survey and reported voting patterns from the 2012 election. In other words, the Daybreak Poll’s sample is reweighted based on how participants say they voted in the 2012 presidential race.
While this may seem like a sensible way to balance the sample, people often misreport whom they voted for in past elections, according to The New York Times. For example, if more respondents recall having voted for Obama in 2012 than actually did, the adjustment would shrink each of their contributions, tilting the results toward Trump.
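A minimal sketch shows how this reweighting can tilt the numbers. The shares below are hypothetical (only the 2012 vote shares are approximate real figures, not the poll's actual targets or data): each group's weight is the known 2012 result divided by the share of the sample recalling that vote.

```python
# Hypothetical sketch of weighting by recalled 2012 vote.
# weight = actual 2012 vote share / share of sample recalling that vote

target_2012 = {"Obama": 0.51, "Romney": 0.47}  # approximate 2012 results
recalled = {"Obama": 0.60, "Romney": 0.38}     # over-reported Obama recall (made up)

weights = {group: target_2012[group] / recalled[group] for group in target_2012}

# Respondents who recall voting for Obama are weighted down (0.85),
# Romney recallers are weighted up (about 1.24), shifting the
# adjusted sample toward Trump.
print(weights)
```

The mechanism is symmetric: if recalled votes were under-reported for one side instead, the adjustment would push the estimate the other way.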
The Daybreak Poll makes information about recruitment procedures, survey weighting and raw data readily available to the general public in an effort to be transparent, Gutsche said. And while its methods have been criticized, she said that she believes the poll to be accurate and fair.
“When we first ran the study in 2012, we were showing much higher scores for Obama than any of the other polls,” Gutsche said. “And at the end of the count, we ended up being the closest of all the polls in terms of what the final vote was, so that gives us confidence that this is the same methodology we’re doing again. It may be right, or that could have just been a fluke.”