The Polls Nailed The 2022 Election
Let’s give a big round of applause to the pollsters. Measuring public opinion is, in many ways, harder than ever — and yet, the polling industry just had one of its most successful election cycles in U.S. history. Despite a loud chorus of naysayers claiming that the polls were either underestimating Democratic support or biased yet again against Republicans, the polls were more accurate in 2022 than in any cycle since at least 1998, with almost no bias toward either party.

Of course, some pollsters were more accurate than others. And today, we’ve updated the FiveThirtyEight pollster ratings to account for each pollster’s performance in the 2022 cycle. Our ratings are letter grades that we assign to each pollster based on historical accuracy and transparency. (You can read exactly how we calculate pollster ratings here.) They’re one of many tools you should use when deciding how much stock to place in a poll.


Before we reveal the best- and worst-rated pollsters, let’s start with our regular review of polling accuracy overall. We analyzed virtually all polls conducted in the final 21 days before every presidential, U.S. Senate, U.S. House and gubernatorial general election, and every presidential primary, since 1998, using three lenses — error, “calls” and statistical bias — to conclude that 2022 was a banner year for polling.

In our opinion, the best way to measure a poll’s accuracy is to look at its absolute error — i.e., the difference between a poll’s margin and the actual margin of the election (between the top two finishers in the election, not the poll). For example, if a poll gave the Democratic candidate a lead of 2 percentage points, but the Republican won the election by 1 point, that poll had a 3-point error.
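
In code, that calculation is just the gap between two margins. Here is a minimal Python sketch; the function name and the two-party vote shares are illustrative, not code from our ratings pipeline:

```python
def poll_error(poll_dem, poll_rep, result_dem, result_rep):
    """Absolute error of a poll: the difference between the poll's margin and
    the actual margin, both measured between the race's top two finishers."""
    poll_margin = poll_dem - poll_rep        # positive = Democrat ahead in the poll
    actual_margin = result_dem - result_rep  # positive = Democrat won by that much
    return abs(poll_margin - actual_margin)

# The example above: a poll showing the Democrat up 2, in a race the
# Republican won by 1, is off by 3 points.
print(poll_error(poll_dem=48, poll_rep=46, result_dem=49, result_rep=50))  # 3.0
```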

As we’ve written many times, some degree of polling error is normal. Taken altogether, the polls in our pollster-ratings database have a weighted-average error of 6.0 points since 1998. However, polling in the 2021-22 election cycle had a weighted-average error of just 4.8 points, edging out the 2003-04 cycle for the lowest polling error on record.

Polls were historically accurate in 2021-22

Weighted-average error of polls in the final 21 days* before presidential primary and presidential, Senate, House and gubernatorial general elections since 1998

| Cycle | Primary | General | Senate | House | Gov. | Combined |
| --- | --- | --- | --- | --- | --- | --- |
| 1998 | | | 7.5 | 7.1 | 8.1 | 7.7 |
| 1999-2000 | 7.9 | 4.4 | 6.0 | 4.3 | 4.9 | 5.5 |
| 2001-02 | | | 5.5 | 5.6 | 5.2 | 5.4 |
| 2003-04 | 7.0 | 3.3 | 5.3 | 5.8 | 5.5 | 4.8 |
| 2005-06 | | | 5.2 | 6.5 | 5.1 | 5.7 |
| 2007-08 | 7.7 | 3.5 | 4.7 | 5.9 | 4.4 | 5.5 |
| 2009-10 | | | 4.9 | 7.0 | 4.7 | 5.8 |
| 2011-12 | 8.9 | 3.7 | 4.7 | 5.5 | 4.9 | 5.3 |
| 2013-14 | | | 5.3 | 6.8 | 4.5 | 5.3 |
| 2015-16 | 10.2 | 4.9 | 5.0 | 5.8 | 5.4 | 6.8 |
| 2017-18 | | | 4.2 | 4.9 | 5.2 | 4.9 |
| 2019-20 | 10.2 | 5.0 | 5.8 | 6.5 | 6.4 | 6.3 |
| 2021-22 | | | 4.8 | 4.0 | 5.1 | 4.8 |
| All years | 9.2 | 4.3 | 5.4 | 6.1 | 5.4 | 6.0 |

Includes polls of special elections and runoffs. Excludes polls from pollsters that are banned by FiveThirtyEight, New Hampshire primary polls taken before the Iowa caucuses and other states’ primary polls taken before the New Hampshire primary. Also excludes presidential primary polls if their leader or runner-up dropped out before that primary was held, if any candidate receiving at least 15 percent in the poll dropped out or if any combination of candidates receiving at least 25 percent in the poll dropped out.

Polls are weighted by one over the square root of the number of polls that their pollster conducted for that particular type of election in that particular cycle.

*Based on the poll’s median field date.

Sources: Polls, state election officials
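
For readers who want to reproduce the topline numbers, here is a rough sketch of the weighting described in the table notes, where each poll counts as one over the square root of the number of polls its pollster conducted for that race type in that cycle. The list-of-dicts structure and field names are illustrative, not our actual data format:

```python
import math
from collections import Counter

def weighted_average_error(polls):
    """Weighted-average error across polls, weighting each poll by
    1 / sqrt(# of polls its pollster conducted for that race type in that cycle)."""
    counts = Counter((p["pollster"], p["race_type"], p["cycle"]) for p in polls)
    weighted_sum = total_weight = 0.0
    for p in polls:
        weight = 1.0 / math.sqrt(counts[(p["pollster"], p["race_type"], p["cycle"])])
        weighted_sum += weight * p["error"]
        total_weight += weight
    return weighted_sum / total_weight

# The effect: a pollster that floods one category with polls is downweighted,
# so it can't single-handedly drag the cycle-wide average up or down.
```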

Interestingly, the weighted-average errors of Senate and gubernatorial polls were only slightly lower than usual last year. Polling in the 2021-22 cycle mostly owes its success to a low error in House races. This past cycle was the first time since 1999-2000 that House polls were more accurate than Senate and gubernatorial polls.

But this isn’t as impressive as it sounds. The “House polls” group includes district-level polls of individual House races and national generic-congressional-ballot polls. And something we noticed early on in 2022 was that pollsters were conducting more generic-ballot polls and fewer district-level polls. Overall, since 1998, 21 percent of the House polls in our pollster-ratings database have been generic-ballot polls — but in 2021-22, 46 percent were. That’s higher than in any other election cycle. 

And generic-ballot polls are historically much more accurate than district-level polls. Since 1998, generic-ballot polls have had a weighted-average error of 3.9 points, while district-level polls have had a weighted-average error of 6.7. So, by eschewing district polls in favor of generic-ballot polls last year, pollsters made their jobs much easier.

But that’s not the only reason House polls were more accurate in this past cycle. The few district-level polls conducted had a weighted-average error of only 5.0 points — the lowest of any cycle since at least 1998.

The second lens we use to gauge polling accuracy is how often polls “called” the election correctly. In other words, did the candidate who led a poll win their race? Historically, across all elections analyzed since 1998, polling leaders come out on top 78 percent of the time (again using a weighted average). By this metric, the 2021-22 cycle was the least accurate in recent history.

Polls have “called” elections correctly 78 percent of the time

Weighted-average share of polls that correctly identified the winner in the final 21 days* before presidential primary and presidential, Senate, House and gubernatorial general elections since 1998

| Cycle | Primary | General | Senate | House | Gov. | Combined |
| --- | --- | --- | --- | --- | --- | --- |
| 1998 | | | 87% | 49% | 85% | 75% |
| 1999-2000 | 95% | 67% | 83 | 55 | 82 | 75 |
| 2001-02 | | | 80 | 73 | 90 | 82 |
| 2003-04 | 94 | 78 | 82 | 68 | 70 | 77 |
| 2005-06 | | | 92 | 73 | 90 | 84 |
| 2007-08 | 80 | 93 | 95 | 82 | 95 | 88 |
| 2009-10 | | | 86 | 74 | 85 | 81 |
| 2011-12 | 63 | 82 | 87 | 71 | 91 | 77 |
| 2013-14 | | | 77 | 74 | 80 | 77 |
| 2015-16 | 85 | 71 | 78 | 58 | 68 | 77 |
| 2017-18 | | | 72 | 80 | 74 | 75 |
| 2019-20 | 80 | 80 | 73 | 82 | 92 | 79 |
| 2021-22 | | | 77 | 64 | 77 | 72 |
| All years | 82 | 79 | 81 | 72 | 82 | 78 |

Includes polls of special elections and runoffs. Excludes polls from pollsters that are banned by FiveThirtyEight, New Hampshire primary polls taken before the Iowa caucuses and other states’ primary polls taken before the New Hampshire primary. Also excludes presidential primary polls if their leader or runner-up dropped out before that primary was held, if any candidate receiving at least 15 percent in the poll dropped out or if any combination of candidates receiving at least 25 percent in the poll dropped out.

Polls are weighted by one over the square root of the number of polls that their pollster conducted for that particular type of election in that particular cycle. Polls get half-credit if they show a tie for the lead and one of the leading candidates wins.

*Based on the poll’s median field date.

Sources: Polls, state election officials
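
Here is a rough sketch of how a single poll gets scored on this metric, including the half-credit rule from the table notes. It is simplified to a two-candidate race, and the function and argument names are illustrative:

```python
def call_credit(poll_dem, poll_rep, dem_won):
    """Credit a poll earns for 'calling' a two-way race: 1 if the poll's
    leader won, 0.5 if the poll showed a tie for the lead, 0 otherwise."""
    if poll_dem > poll_rep:
        return 1.0 if dem_won else 0.0
    if poll_rep > poll_dem:
        return 0.0 if dem_won else 1.0
    return 0.5  # tied at the top: half credit, since one of the leaders won

# A poll showing a 44-44 tie gets half credit no matter which of the two wins.
print(call_credit(44, 44, dem_won=True))  # 0.5
```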

But that low hit rate doesn’t really bother us. Correct calls are a lousy way to measure polling accuracy.

Suppose two pollsters released surveys of a race that Democrats eventually won by 1 point. One of the pollsters showed the Republican winning by 1 point; the other showed the Democrat winning by 15 points. The latter pollster may have picked the correct winner, but its poll was wildly off the mark. So we’d be very wary of trusting it in a future election. The other pollster may have picked the wrong winner, but it was well within an acceptable margin of error; essentially, it just got unlucky. 

And you will not be surprised to learn that polls have a worse chance of “calling” the election correctly if they show a close race. In fact, the percentage of correct calls made is simply a function of how close the polls are.

Close polls often miss “calls”

Share of polls that correctly identified the winner in the final 21 days* before presidential primary and presidential, Senate, House and gubernatorial general elections since 1998, by how close the poll showed the race

| Poll Margin | % Picking Winner |
| --- | --- |
| <3 pts | 55% |
| 3-6 | 69 |
| 6-10 | 86 |
| 10-15 | 93 |
| 15-20 | 97 |
| ≥20 | 99 |

Includes polls of special elections and runoffs. Excludes polls from pollsters that are banned by FiveThirtyEight, New Hampshire primary polls taken before the Iowa caucuses and other states’ primary polls taken before the New Hampshire primary. Also excludes presidential primary polls if their leader or runner-up dropped out before that primary was held, if any candidate receiving at least 15 percent in the poll dropped out or if any combination of candidates receiving at least 25 percent in the poll dropped out.

Polls get half-credit if they show a tie for the lead and one of the leading candidates wins.

*Based on the poll’s median field date.

Sources: Polls, state election officials
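
The table above can be approximated with a few lines of grouping code. This sketch assumes each poll record carries its absolute margin and the call credit defined earlier; the bucket edges mirror the table, and everything else is illustrative:

```python
def hit_rate_by_margin(polls):
    """Share of polls that picked the winner, grouped by the size of the
    poll's own margin. Each record needs 'margin' (absolute lead, in points)
    and 'credit' (0, 0.5 or 1 for a correct call)."""
    buckets = [(0, 3), (3, 6), (6, 10), (10, 15), (15, 20), (20, float("inf"))]
    rates = {}
    for low, high in buckets:
        group = [p for p in polls if low <= p["margin"] < high]
        if group:
            rates[f"{low}-{high}"] = sum(p["credit"] for p in group) / len(group)
    return rates
```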

To quote, uh, myself from almost three years ago, “Polls’ true utility isn’t in telling us who will win, but rather in roughly how close a race is — and, therefore, how confident we should be in the outcome.” Historically, candidates leading polls by at least 20 points have won 99 percent of the time. But candidates leading polls by less than 3 points have won just 55 percent of the time. In other words, races within 3 points in the polls are little better than toss-ups — something we’ve been shouting from the rooftops for years.

So for 2022, the only substantive lesson we can glean from this metric is that polls were historically close — something we already knew! In fact, 55 percent of the polls we analyzed from last cycle showed a margin of less than 6 points. That’s a higher share than in any other cycle since at least 1998.

Finally, statistical bias is the third lens through which we view polling accuracy. While error tells us how much a poll missed, bias tells us in what direction it missed. In other words, did it overestimate Democrats or Republicans? 
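
Bias is just the signed version of the error defined earlier. A minimal sketch, using the same illustrative two-party inputs and the D+/R+ convention from the tables below:

```python
def poll_bias(poll_dem, poll_rep, result_dem, result_rep):
    """Signed bias of a poll in a Democrat-vs-Republican race: positive means
    the poll overestimated the Democrat (D+), negative means it overestimated
    the Republican (R+). Only defined when the top two finishers are a
    Democrat and a Republican."""
    return (poll_dem - poll_rep) - (result_dem - result_rep)

# A poll showing the Democrat up 5 in a race the Democrat wins by 1 has a D+4 bias.
print(poll_bias(50, 45, 49, 48))  # 4.0
```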

There’s been a lot of interest in statistical bias in recent years — specifically, whether polls are systematically biased against Republicans nowadays. These concerns stem primarily from polls overestimating Democratic support in the 2016 and 2020 cycles; as the table below shows, the polls in 2015-16 had a weighted-average bias of D+3.0, and the polls in 2019-20 had an even worse weighted-average bias of D+4.7. A lot of people assumed the polls would have a similar bias again in 2022. But that assumption was wrong: For 2021-22, polls had a weighted-average bias of just D+0.8.

Polling bias is pretty unpredictable from election to election

Weighted-average statistical bias of polls in the final 21 days* before presidential, Senate, House and gubernatorial general elections since 1998

| Cycle | President | Senate | House | Gov. | Combined |
| --- | --- | --- | --- | --- | --- |
| 1998 | | R+4.5 | R+0.9 | R+5.7 | R+3.8 |
| 1999-2000 | R+2.4 | R+2.8 | D+1.1 | R+0.2 | R+1.9 |
| 2001-02 | | D+2.0 | D+1.6 | D+3.4 | D+2.7 |
| 2003-04 | D+1.2 | D+0.8 | D+2.2 | D+2.0 | D+1.5 |
| 2005-06 | | R+2.1 | D+1.1 | D+0.4 | D+0.1 |
| 2007-08 | D+0.9 | D+0.4 | D+1.3 | R+0.1 | D+0.8 |
| 2009-10 | | R+0.8 | D+1.3 | R+0.2 | D+0.4 |
| 2011-12 | R+2.5 | R+3.2 | R+3.3 | R+1.6 | R+2.8 |
| 2013-14 | | D+2.7 | D+4.0 | D+2.3 | D+2.8 |
| 2015-16 | D+3.3 | D+2.8 | D+4.1 | D+3.1 | D+3.0 |
| 2017-18 | | EVEN | R+0.5 | R+1.0 | R+0.5 |
| 2019-20 | D+4.1 | D+4.9 | D+6.1 | D+5.6 | D+4.7 |
| 2021-22 | | R+0.3 | D+0.2 | D+1.3 | D+0.8 |
| All years | D+1.2 | D+0.6 | D+1.2 | D+0.9 | D+1.0 |

Includes polls of special elections and runoffs. Excludes polls from pollsters that are banned by FiveThirtyEight.

Bias is calculated only for elections in which the top two finishers were a Republican and a Democrat. Therefore, it is not calculated for presidential primaries. Polls are weighted by one over the square root of the number of polls that their pollster conducted for that particular type of election in that particular cycle.

*Based on the poll’s median field date.

Sources: Polls, state election officials

Ironically, after the election, a narrative emerged that 2022 polling was actually too good for Republicans — a claim that our data doesn’t bear out, either. While the polls in a few closely watched races — like Arizona’s governorship and Pennsylvania’s Senate seat — were biased toward Republicans, the polls overall still had a bit of a bias toward Democrats. That’s because generic-ballot polls, the most common type of poll last cycle, had a weighted-average bias of D+1.9, and polls of several less closely watched races, like the governorships in Ohio and Florida, also skewed toward Democrats. It was a weird year in that some states zigged and other states zagged; usually, polling bias in a given year is correlated from race to race.

But if you’re trying to find patterns in polling bias from year to year … good luck. Statistical bias tends to bounce around from cycle to cycle. Sure, the polls had overestimated Democrats in three of the past four elections before 2022. But before that, in 2012, they overestimated Republicans. And there was barely any polling bias in the three cycles before that. 

This is why we’re constantly warning people against trying to predict the direction of polling error in advance. In fact, we’ve noticed that when pundits try to predict polling bias, they have a knack for guessing wrong. See, pollsters aren’t passive actors in all this. Pollsters are well aware when their polls have a bad year. Many adjust their methodology to avoid making the same mistakes. Simply put, if polling is broken, pollsters don’t just sit on their hands; they try to fix it! 

Up to this point, we’ve been talking about “the polls” and “pollsters” as if they were a monolith. But there are important differences in quality between pollsters. So, without further ado, here is how each pollster performed in the 2021-22 election cycle according to our three metrics:

The most and least accurate pollsters of 2021-22

Average error, share of elections “called” correctly and average statistical bias of each pollster’s polls in the final 21 days* before Senate, House and gubernatorial general elections in the 2021-22 cycle, for pollsters that conducted at least five such polls

| Pollster | # of Polls | Average Error | % of Correct Calls | Average Bias |
| --- | --- | --- | --- | --- |
| Suffolk University | 6 | 1.9 | 83% | R+0.7 |
| Siena College/The New York Times Upshot | 12 | 1.9 | 88 | EVEN |
| Alaska Survey Research | 5 | 2.2 | 100 | R+1.9 |
| SurveyUSA | 12 | 2.4 | 100 | EVEN |
| Echelon Insights | 8 | 2.4 | 63 | R+1.2 |
| Beacon Research/Shaw & Co. Research | 10 | 2.9 | 70 | R+0.9 |
| Marist College | 10 | 3.0 | 90 | D+2.8 |
| Cygnal | 19 | 3.1 | 92 | D+0.6 |
| Fabrizio, Lee & Associates | 7 | 3.2 | 79 | R+2.5 |
| Research Co. | 23 | 3.3 | 87 | D+0.2 |
| OH Predictive Insights | 5 | 3.4 | 60 | D+0.3 |
| Ipsos | 6 | 3.8 | 17 | D+3.7 |
| KAConsulting | 11 | 3.9 | 91 | R+3.9 |
| Remington Research Group | 10 | 4.1 | 70 | R+2.6 |
| Emerson College | 55 | 4.1 | 82 | R+1.3 |
| YouGov | 18 | 4.1 | 81 | D+2.8 |
| Data for Progress | 33 | 4.4 | 79 | R+1.8 |
| Rasmussen Reports | 7 | 4.4 | 86 | R+4.4 |
| Civiqs | 17 | 4.5 | 88 | D+3.3 |
| Siena College | 7 | 4.5 | 71 | D+3.8 |
| Targoz Market Research | 9 | 4.8 | 56 | R+3.7 |
| RRH Elections | 5 | 5.1 | 80 | R+5.1 |
| Morning Consult | 6 | 5.2 | 8 | D+5.2 |
| University of New Hampshire Survey Center | 7 | 5.2 | 100 | R+4.2 |
| Trafalgar Group | 37 | 5.3 | 62 | R+4.9 |
| InsiderAdvantage | 29 | 5.3 | 67 | R+3.8 |
| Patriot Polling | 7 | 5.5 | 71 | R+5.5 |
| Phillips Academy | 7 | 5.5 | 57 | R+3.5 |
| Wick | 9 | 5.7 | 56 | R+4.6 |
| co/efficient | 15 | 5.9 | 63 | R+5.8 |
| Moore Information Group | 5 | 8.3 | 30 | R+8.3 |
| Amber Integrated | 7 | 9.4 | 86 | D+7.8 |
| Ascend Action | 6 | 15.0 | 83 | D+15.0 |

Includes polls of special elections and runoffs. Excludes pollsters that are banned by FiveThirtyEight.

Polls get half-credit for “calling” an election correctly if they show a tie for the lead and one of the leading candidates wins. Bias is calculated only for elections in which the top two finishers were a Republican and a Democrat.

*Based on the poll’s median field date.

Sources: Polls, state election officials

Unsurprisingly, the top of the list is generally populated by academic and media-affiliated pollsters that have been trusted names in polling for a long time. But special congratulations are due to Suffolk University and Siena College/The New York Times Upshot, which had the lowest average errors of any pollster that conducted at least five qualifying polls last cycle. As a result, Suffolk’s pollster rating has increased from B+ to A-. Siena College/The New York Times Upshot already had an A+ grade, so it didn’t get a ratings boost. Still, its stellar performance did push it past Selzer & Co. for the distinction of most accurate pollster in America (at least by FiveThirtyEight’s reckoning). 

Meanwhile, the bottom of the list features quite a few Republican-affiliated pollsters that systematically overestimated the GOP in 2022: RRH Elections, InsiderAdvantage, co/efficient, Moore Information Group. But the most famous of these is probably Trafalgar Group, a pollster whose methods are notoriously opaque but that played a significant role in shaping the ultimately untrue narrative that a “red wave” was building with its 37 (!) qualifying pre-election polls. Trafalgar’s polls were quite accurate in 2020, when its Republican-leaning house effects helped it avoid the big polling miss that other firms experienced. As a result, it went into 2022 with an A- pollster rating. But its poor performance last cycle has knocked it down to a B — making it the only pollster to fall two notches in our ratings this year.

Of course, we generate more data on each pollster than just a letter grade. On our pollster-ratings page, you’ll also find the pollster’s historical bias, the number of its polls we’ve analyzed and its Predictive Plus-Minus, which is our projection of how much error we think the pollster will have in future elections relative to the average pollster. (Negative scores mean we believe the pollster will have less error than average.) 

On each pollster’s individual pollster-ratings page, you’ll find the percentage of races it called correctly, the most recent cycle in which it polled, a list of all its qualifying polls and how accurate they were, and whether it participates in the American Association for Public Opinion Research’s Transparency Initiative or shares its data with the Roper Center for Public Opinion Research. That last piece of information is important because that kind of transparency correlates with methodological quality and accuracy. (A note for close observers of the pollster ratings: We have stopped giving pollsters credit for once belonging to the now-defunct National Council on Public Polls.)

And if all that isn’t enough, you can also download the data underlying our pollster ratings for even more goodies, like how often a pollster missed outside the margin of error, its house effects and how much we penalize it for potential herding. Have fun exploring, and if you have any questions, email us at polls@fivethirtyeight.com.


