
Why Were Polls Wrong Again in 2020?

Credit: Joseph Prezioso/Getty Images

In the weeks leading up to the November 2016 election, polls across the country predicted an easy sweep for Democratic nominee Hillary Clinton. Everyone knows what happened. Media outlets and pollsters took the heat for failing to project a victory for Donald Trump. The polls were ultimately right about the popular vote. But they missed the mark in key swing states that tilted the Electoral College toward Trump.

This time, prognosticators made assurances that such mistakes were so 2016. But as votes were tabulated on November 3, nervous viewers and pollsters began to experience a sense of déjà vu. Once again, more ballots were ticking toward President Trump than the polls had projected. Though the voter surveys ultimately pointed in the wrong direction for only two states—North Carolina and Florida, both of which had signaled a win for Joe Biden—they incorrectly gauged just how much of the overall vote would go to Trump in both red and blue states. In states where polls had favored Biden, the vote margin went to Trump by a median of 2.6 additional percentage points. And in Republican states, Trump did even better than the polls had indicated—by a whopping 6.4 points.
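For clarity, here is a minimal Python sketch of the statistic described above, using invented state-level numbers rather than the real 2020 data: each state's polling miss is the final polling margin minus the actual result, and those misses are summarized with a median.

```python
# Minimal sketch of a "median polling miss" calculation, with invented numbers.
from statistics import median

# (final polling average, actual result), as Biden-minus-Trump margins in points.
states = {
    "State A": (8.0, 5.1),
    "State B": (6.5, 4.2),
    "State C": (2.0, -0.8),
}

# Positive miss = polls overstated Biden's margin, i.e., Trump beat his polls.
misses = {name: poll - actual for name, (poll, actual) in states.items()}
print("median miss toward Trump:", median(misses.values()), "points")
```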

Four years ago, Sam Wang, a neuroscience professor at Princeton University and co-founder of the blog Princeton Election Consortium, which analyzes election polling, called the race for Clinton. He was so confident that he made a bet to eat an insect if Trump won more than 240 electoral votes—and ended up downing a cricket live on CNN. Wang is coy about any plans for arthropod consumption in 2020, but his predictions were once again optimistic: he pegged Biden at 342 electoral votes and projected that the Democrats would have 53 Senate seats and a 4.6 percent gain in the House of Representatives.

Scientific American recently spoke with Wang about what may have gone wrong with the polls this time around—and what bugs remain to be sorted out.

[An edited transcript of the interview follows.]

How did the polling errors for the 2020 election compare with those we saw in the 2016 contest?

Broadly, there was a polling error of about 2.5 percentage points across the board in close states and blue states for the presidential race. This was similar in size to the polling error in 2016, but it mattered less this time because the race wasn't as close.

The main thing that has changed since 2016 is not the polling but the political situation. I would say that worrying about polling is, in some sense, worrying about the 2016 problem. And the 2020 problem is ensuring there is a full and fair count and ensuring a smooth transition.

Nonetheless, there were significant errors. What may have driven some of those discrepancies?

The big polling errors in red states are the easiest to explain because there's a precedent: in states that are historically not very close for the presidency, the winning candidate usually overperforms. It's long been known that turnout is lower in states that aren't competitive for the presidency because of our weird Electoral College mechanism. That effect—the winner's bonus—might be enhanced in very red states by the pandemic. If you're in a very red state, and you're a Democratic voter who knows your vote doesn't affect the outcome of the presidential race, you might be slightly less motivated to turn out during a pandemic.

That's one kind of polling error that I don't think we need to be concerned about. But the error we probably should be concerned about is this 2.5-percentage-point error in close states. That error happened in swing states but also in Democratic-trending states. For people who watch politics closely, the expectation was that we had a couple of roads we could have gone down [on election night]. Some states count and report votes on election night, and other states take days to report. The polls beforehand pointed toward the possibility of North Carolina and Florida coming out for Biden. That would have effectively ended the presidential race right there. But the races were close enough that there was also the possibility that things would continue. In the end, that's what happened: we were watching more counting happen in Pennsylvania, Michigan, Wisconsin, Arizona and Nevada.

How did polling on the presidential race compare with the errors we saw with Senate races this year?

The Senate errors were a bigger deal. There were seven Senate races where the polling showed the races within three points in either direction. Roughly speaking, that meant a range of outcomes of between 49 and 56 Democratic seats. A small polling miss had a pretty consequential outcome because every percentage point missed would lead to, on average, another Senate seat going one way or the other. Missing by a few points in the presidential race was not a big deal this year, but missing by a few points in Senate races mattered.
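To see why one point of error translates into roughly one seat, here is a minimal Python sketch with made-up polled margins for seven hypothetical close races: shifting every poll by the same uniform error sweeps the seat count across nearly the whole 49-to-56 range Wang describes.

```python
# Hypothetical Democratic polling margins (in points) for seven close races.
polled_margins = [2.5, 1.8, 0.9, 0.2, -0.7, -1.9, -2.8]

def dem_seats(margins, uniform_error):
    """Seats Democrats win if every poll is off by `uniform_error` points
    in the same direction (positive = polls overstated Democrats)."""
    return sum(1 for m in margins if m - uniform_error > 0)

for err in (-3, -2, -1, 0, 1, 2, 3):
    print(f"uniform polling error {err:+d} pts -> "
          f"Democrats win {dem_seats(polled_margins, err)} of 7")
```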

What would more accurate polling have meant for the Senate races?

The real reason polling matters is to help people decide where to put their energy. If we had a more accurate view of where the races were going to end up, it would have suggested political activists put more energy into the Georgia and North Carolina Senate races.

And it's a weird error that the Senate polls were off by more than the presidential polls. One possible explanation would be that voters were paying less attention to Senate races than presidential races and therefore were unaware of their own preference. Very few Americans lack awareness of whether they prefer Trump or Biden. But maybe more people would be unaware of their own mental processes for, say, [Republican incumbent] Thom Tillis versus [Democratic challenger] Cal Cunningham [in North Carolina's Senate race]. Because American politics have been extremely polarized for the past 25 years, people tend to [end up] voting [a] straight ticket for their own political party.

Considering that most of the polls overestimated Biden's lead, is it possible pollsters were simply not adequately reaching Trump supporters by phone?

David Shor, a data analyst [who was formerly head of political data science at the company Civis Analytics], recently pointed out the possibility that people who respond to polls are not a representative sample. They're pretty weird in the sense that they're willing to pick up the phone and stay on the phone with a pollster. He gave evidence that people are more likely to pick up the phone if they're Democrats, more likely to pick up under the conditions of a pandemic and more likely to pick up the phone if they score high in the domain of social trust. It's fascinating. The idea is that poll respondents score higher on social trust than the general population, and because of that, they're not a representative sample of the population. That could be skewing the results.
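As a toy illustration of that mechanism (with invented response rates, not Shor's actual data): if one group answers the phone half as often, the raw sample over-represents the other group, and weighting can only correct imbalances the pollster can actually observe.

```python
# Invented example: two equal-sized groups with different response rates.
true_share = {"A": 0.5, "B": 0.5}        # true population shares
response_rate = {"A": 0.10, "B": 0.05}   # B (lower social trust) answers half as often

# Each group's share of the raw sample is proportional to
# population share times response rate.
raw = {g: true_share[g] * response_rate[g] for g in true_share}
total = sum(raw.values())
sample_share = {g: round(raw[g] / total, 3) for g in raw}
print(sample_share)  # {'A': 0.667, 'B': 0.333}: B is badly underrepresented

# Standard fix: weight respondents so known traits match the population.
# But this only works for traits a pollster can measure; an unmeasured
# trait like social trust slips straight through the weights.
weights = {g: round(true_share[g] / sample_share[g], 3) for g in sample_share}
print(weights)  # down-weight A respondents, up-weight B respondents
```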

This is also related to the idea that states with more QAnon followers experienced more inaccurate polling. The QAnon belief system is certainly correlated with lower social trust. And those might be people who are just not going to pick up the phone. If you believe in a monstrous conspiracy of sex abuse involving one of the major political parties of the U.S., then you might be paranoid. One could not rule out the possibility that paranoid people would also be disinclined to answer opinion polls.

In Florida's Miami-Dade County, we saw a surprising surge of Hispanic voters turning out for Trump. How might the polls have failed to take into account members of that demographic backing Trump?

Pollsters know Hispanic voters to be a hard-to-reach demographic. In addition, Hispanics are also not a monolithic population. If you look at some of the exit polling, it looks like Hispanics were more favorable to Trump than they were to Clinton four years ago. It's certainly possible Hispanic support was missed by pollsters this time around.

Given that the presidential polls have been off for the past two elections, how much attention should people pay to polls?

I think polling is critically important because it is a way by which we can measure public sentiment more rigorously than any other method. Polling plays a critical role in our society. One thing we shouldn't do is convert polling data into probabilities. That obscures the fact that polls can be a few points off. And it's better to leave the reported data in units of opinion [as a percentage favoring a candidate] rather than try to convert it to a probability.
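A short sketch of why the conversion misleads, using an illustrative 2.5-point lead: the win probability you get out of a poll depends almost entirely on how large a polling error you assume, and that assumption is invisible in the headline number.

```python
# Same polled lead, very different "win probabilities" depending on the
# polling error you assume. Numbers are illustrative.
from statistics import NormalDist

lead = 2.5  # polled lead, in percentage points

for assumed_error in (1.0, 2.5, 5.0):  # std. dev. of polling error, in points
    # P(true margin > 0) if the true margin is normal around the polled lead.
    p_win = 1 - NormalDist(mu=lead, sigma=assumed_error).cdf(0.0)
    print(f"assumed error {assumed_error} pts -> P(win) = {p_win:.0%}")
```

The same 2.5-point lead reads as anywhere from roughly a 70 percent chance to better than 99 percent, which is exactly the ambiguity Wang argues against hiding.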

It's best not to force too much meaning out of a poll. If a race looks like it's within three or four points in either direction, we should simply say it's a close race and not force the data to say something they can't. I think pollsters will accept this inaccuracy and try to do better. But at some level, we should stop expecting too much out of the polling data.


Source: https://www.scientificamerican.com/article/why-polls-were-mostly-wrong/