Virginia Miori, Ph.D., assistant professor of decision and system sciences at Saint Joseph’s University in Philadelphia and an expert on predictive analytics, is not surprised by the findings. “Surveys and polls are known for problems with bias even in the best cases,” she says.
A poll’s accuracy is largely tied to the sample of respondents surveyed, which is a crucial step in producing a poll, says Miori. “It’s like trying to make a scale model of the overall election results by looking at a smaller, yet similarly configured collection of voters.”
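The "scale model" Miori describes is, in statistical terms, a stratified sample: respondents are drawn from each demographic group in proportion to that group's share of the voting population. The sketch below illustrates the allocation step only, using made-up population shares (the `urban`/`suburban`/`rural` split and the `stratified_sample_sizes` helper are illustrative assumptions, not data from the article).

```python
# Hypothetical voter population: each stratum's share of the electorate.
# These shares are illustrative, not real election data.
population_shares = {"urban": 0.40, "suburban": 0.35, "rural": 0.25}

def stratified_sample_sizes(shares, n):
    """Allocate n respondents across strata in proportion to the population."""
    sizes = {stratum: round(n * share) for stratum, share in shares.items()}
    # Fix any rounding drift so the allocations sum exactly to n.
    drift = n - sum(sizes.values())
    if drift:
        largest = max(sizes, key=sizes.get)
        sizes[largest] += drift
    return sizes

print(stratified_sample_sizes(population_shares, 1000))
# → {'urban': 400, 'suburban': 350, 'rural': 250}
```

A sample whose strata mirror the population in this way is what lets a poll of 1,000 people stand in for millions of voters; when one stratum is over- or under-represented, the "model" no longer resembles the electorate.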
“Simply stated, the polling agencies (like those reviewed by Silver) did not question the right combination of individuals,” says Miori. “In addition, a particular polling agency will be very likely to ask questions biased toward the results they are seeking.”
But even if a polling organization selects a sample reflective of the voting public, other factors can still undermine a poll's precision. Miori describes polling as a delicate science in which something as simple as the time of day a poll is conducted can affect its outcome.
“Consider blue collar workers versus white collar workers. They are available at different times of the day and are likely to have very different access to technology such as cell phones during both work and non-work hours. If a poll is taken over a specific four-hour period, as this was, it is likely that these groups would not be as equally represented,” she says. “Further, consider that one person may answer the same question differently based on their mood, or the type of day they’ve had.”
So what can be done to ensure the best possible statistical accuracy?
“The best we can do when polling is to maintain complete consistency in the approach, ask unbiased questions and strive to survey a sample that closely matches the characteristics of the population of voters. The polling agencies with the least interest in the outcome of the election always perform the best,” says Miori.