FEATURE | 19 January 2021

Voting in Covid-19: The 2020 US election



The 2020 US presidential election took place against the backdrop of a global pandemic. Chris Jackson shares some of the polling challenges.

US election 2020

During the best of times, election polling is fundamentally challenging. The act of asking people who they may support relies on them predicting their future behaviour, something research indicates people are notoriously bad at doing. Even for the quickest, highest-quality poll, time moves, circumstances change, and support for candidates can shift. So, pollsters poll a lot to try to capture this movement throughout an election cycle.

In the US, presidential elections are increasingly decided by a few thousand votes as the public has become locked into their party of choice. Low turnout or variations in how ballots get counted can be the difference between winning and losing. The 2020 election amplified these issues.

A life-altering, deadly global health crisis during a highly contentious presidential election magnified the known challenges of electoral polling, while pandemic-specific problems threw us more curveballs. Not only did we have to consider the usual questions, such as who is going to vote – as we do every election cycle – but we also had to adapt our thinking around how, and when, people would vote.

There are three things that made US presidential polling particularly challenging in 2020: there was no playbook for voting during a pandemic; we were living through a volatile time; and there were many disturbances to voting and vote counting that made it difficult to quantify how voting would play out.

To begin with, we were largely going off script, as there was no past data to gauge how a pandemic impacts turnout in an election. This meant there was no widely accepted way to cross-check any of the assumptions we made about voters under these unique conditions.

We worked around this by employing a multi-modal approach to research. We ran national surveys twice a week on our opt-in panel and tracked six swing states weekly, from mid-September until election day. To calibrate our weighting and ensure our opt-in sample was representative of both the population and the likely electorate, we ran surveys on our probability-based panel, and compared findings at the beginning and end of the research period.
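
As a rough illustration of that calibration step, the sketch below shows a minimal raking (iterative proportional fitting) routine of the kind commonly used to weight an opt-in sample towards benchmark margins. The variable names, targets and data are hypothetical; this is not Ipsos's actual weighting scheme.

```python
# Illustrative sketch only: rake survey weights so the weighted sample
# margins match benchmark distributions (e.g. from a probability-based
# panel or census figures). All data and targets below are hypothetical.
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Adjust a 'weight' column so weighted margins match each target."""
    df = df.copy()
    df["weight"] = 1.0
    for _ in range(max_iter):
        max_change = 0.0
        for var, target in targets.items():
            current = df.groupby(var)["weight"].sum() / df["weight"].sum()
            factors = pd.Series(target) / current
            old = df["weight"].copy()
            df["weight"] *= df[var].map(factors)
            max_change = max(max_change, (df["weight"] - old).abs().max())
        if max_change < tol:
            break
    return df

# Hypothetical respondents and benchmark margins.
sample = pd.DataFrame({
    "age_group": ["18-34", "35-64", "65+", "18-34", "35-64", "65+"] * 20,
    "party_id":  ["Dem", "Rep", "Ind", "Rep", "Dem", "Ind"] * 20,
})
targets = {
    "age_group": {"18-34": 0.30, "35-64": 0.50, "65+": 0.20},
    "party_id":  {"Dem": 0.33, "Rep": 0.33, "Ind": 0.34},
}
weighted = rake(sample, targets)
print(weighted.groupby("age_group")["weight"].sum() / weighted["weight"].sum())
```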

Frequent polling served two purposes. First, it acted as a thorough check on the quality of the information we were receiving. Second, so much of our world had changed over the past year that there was a higher potential for volatility among voters. What we saw, though, was that partisanship largely held behaviour and attitudes constant throughout this cycle.

Perfect storm?

Because we were polling at such a high volume, we were initially worried that, with all that was going on, people might not want to – or be able to – focus on a poll. The quantity of polling and the turmoil so many people were experiencing in their day-to-day lives could have been the perfect storm for respondent fatigue.

At the beginning of the pandemic, surprisingly, we saw an uptick in online survey response rates, as people were tied to their screens at home. We also benefited from having a robust online polling infrastructure to ensure polling was not going to the same people week after week. The cadence of our polling this season allowed us to keep polls short. No survey took longer than 10 minutes to complete.

We tweaked our likely voter questions to reflect the messy reality of how voting played out across the country. We always ask a series of questions about vote history, interest in the election, and people’s intention to vote. We then use that information to score each respondent on their likelihood of voting, and to chart how each candidate’s probability of winning shifts as turnout and vote share change.
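
To make that scoring step concrete, here is a minimal, hypothetical sketch of a likely-voter model of the general kind described above: survey items are combined into a turnout score, and a simple simulation tallies vote shares as turnout varies. The formula, weights and data are purely illustrative, not the actual Ipsos model.

```python
# Hypothetical likely-voter scoring and turnout simulation (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical respondents: past vote history (0/1), interest and stated
# intention to vote (1-5 scales), and candidate preference.
n = 1000
df = pd.DataFrame({
    "voted_2016": rng.integers(0, 2, n),
    "interest": rng.integers(1, 6, n),
    "intention": rng.integers(1, 6, n),
    "candidate": rng.choice(["Biden", "Trump"], n, p=[0.52, 0.48]),
})

# Combine the three items into a 0-1 turnout likelihood score
# (equal weights here purely for illustration).
df["turnout_score"] = (
    df["voted_2016"] + (df["interest"] - 1) / 4 + (df["intention"] - 1) / 4
) / 3

# Monte Carlo: in each simulated election, each respondent turns out with
# probability equal to their score; tally the two-way vote share.
sims = 2000
biden_leads = 0
for _ in range(sims):
    voted = rng.random(n) < df["turnout_score"]
    shares = df.loc[voted, "candidate"].value_counts(normalize=True)
    if shares.get("Biden", 0) > 0.5:
        biden_leads += 1

print(f"Simulated share of runs with a Biden lead: {biden_leads / sims:.2f}")
```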

We also asked how people planned to vote, in addition to our usual series of likely voter questions. We then checked that against information from places such as Ballotpedia, or the various secretaries of state, on how early ballots were being counted, and compared it with the results from our polls.
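
A minimal sketch of that kind of cross-check is shown below, assuming we have the survey’s weighted share of early voters by state and a corresponding share derived from official ballot-return figures; all numbers here are made up for illustration.

```python
# Hypothetical comparison of survey-reported early voting with official
# early-vote tallies, by state. Figures are invented for illustration.
import pandas as pd

# Survey: weighted share of respondents reporting they have already voted.
survey_early_share = pd.Series(
    {"PA": 0.31, "WI": 0.38, "MI": 0.41}, name="survey_share"
)

# Public sources (e.g. secretary of state reports): ballots returned so far
# divided by an estimate of total expected turnout.
official_early_share = pd.Series(
    {"PA": 0.28, "WI": 0.40, "MI": 0.39}, name="official_share"
)

comparison = pd.concat([survey_early_share, official_early_share], axis=1)
comparison["gap"] = comparison["survey_share"] - comparison["official_share"]
print(comparison)
```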

By no means was this a silver-bullet solution. Every state counts votes at different times and does so differently. We expected lots of lawsuits over what votes would or would not count. This is largely beyond the scope of what pollsters can price into their likely voter models. To the best of our ability, we tried to check our results against the most readily available public sources, but these are imperfect tools that are subject to change.

Those adjustments gave us a fuller picture of how people were feeling about voting, whether they were following the election, and how they planned to cast their vote.

It was a daunting challenge to put all the pieces together and make it work. After the 2016 election, we re-evaluated how we conducted our election research, choosing this year to poll up until election day, nationally and in six key battleground states, and to use many different modes of research outside of polling to paint a fuller picture of where the country stood.

With 2020 resulting in a victory for Joe Biden in the electoral college, what the polls and the vote count can agree on is that the US is a divided nation.

  • Around 66.5% of the US electorate voted in the 2020 presidential election
    (US Elections Project)
  • More than 101m people voted early in the 2020 election
    (US Elections Project)
  • There were 107,872 positive tests for Covid-19 in the US on 4 November – the day after the election
    (New York Times)

Chris Jackson is senior vice-president and lead for public polling at Ipsos US.

This article was first published in the January 2021 issue of Impact.
