Guest Post: Polling peculiarities

A guest post by Stephen Russell:

As election day nears, observers of the US political scene have probably been hoping for increasing clarity about what is happening there. What we have got is perhaps the opposite.

Fivethirtyeight.com’s log now shows national election polls from 44 (!) different pollsters conducted in the last sixteen days of October, and some of those have produced multiple polls. During this period, Joe Biden’s lead has fallen. Since reaching a peak of +10.3 in the Realclearpolitics.com (RCP) polling average on October 11, Biden has slipped to +6.5, though it is bouncing around quite a bit. Biden likewise peaked in the Fivethirtyeight.com (538) average on Oct 16-19 at +10.7 and has since slipped to +8.3. But detailed study shows several interesting points.

Firstly, the decline in Biden’s position may partly reflect a change in the measurement rather than in what is being measured. Each new pollster throwing their result into the pot changes the average. But each pollster has their own technique. They may be measuring exactly the same reality, but getting different results because of their different measuring sticks. That doesn’t mean Biden hasn’t slipped, just that the reality is less certain than it appears.

What do the repeat pollsters say? Cursory investigation shows mixed results. On the one hand, the results from USC Dornsife (which reports rolling averages) show no substantive change in opinion since the start of the month. YouGov has Biden increasing his lead from +9 to +11. On the other hand, Rasmussen has gone from reporting Biden +12 in the first week of October to Trump +1 in their October 25-27 survey (and back up to Biden +3 for Oct 27-29, then down to Biden +1 for Oct 28-Nov 1). IBD has gone up and down like a yo-yo. Ipsos shows no change. SurveyMonkey shows a drop of about 2 points.

Secondly, we are now probably seeing a herd effect. Mostly, pollsters can say whatever they like and we’ll never know for sure if they were right. But on election day they face the acid test. This creates a great temptation for pollsters to tweak (or even suppress) their final poll results to make them closer to the apparent average. That (they hope) will minimise the chances of them looking stupid when the result is known.

Thirdly, a significant gap has opened up between the 538 and RCP averages. Why? 538’s average has usually been a little more friendly to Joe Biden than RCP’s: typically by about half a point. But lately the difference has been larger, and at one stage hit a full three points. And why did RCP’s figure start going down while 538’s was still going up?

The reason lies largely in the deluge of new polls. Many (though not all) of the new polls feeding into 538 are more Biden-friendly, and that has kept the 538 average up. RCP’s average is based on a tight group of around ten of the most recent polls. A single new poll, especially if it is an outlier, can have a big effect on the average. And some major Biden-friendly polls (e.g. CNN, NBC, YouGov and USC Dornsife) are excluded because they are a week old. This procedure means RCP will usually reveal a change in a candidate’s popularity well before the 538 average shows it. However, the price for that sensitivity is volatility. It jumps around a lot, and the changes it shows have often proved to be statistical blips from rogue polls and the randomness of which polls come out in what sequence.
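The arithmetic behind that volatility can be sketched with made-up figures (these are illustrative numbers, not actual poll results): a single outlier entering a tight ten-poll window moves the average far more than the same outlier entering a broader pool.

```python
# Illustrative sketch with invented numbers: how a tight 10-poll window
# reacts to one outlier poll, versus a broader 30-poll pool.
recent_10 = [9, 10, 8, 9, 11, 10, 9, 8, 10, 9]       # hypothetical Biden leads
broad_30 = recent_10 + [9, 10, 8, 9, 10] * 4         # 30-poll pool, same tendency

def average(polls):
    return sum(polls) / len(polls)

before_tight = average(recent_10)
before_broad = average(broad_30)

# One outlier poll (Trump +1, i.e. Biden -1) arrives.
outlier = -1
after_tight = average(recent_10[1:] + [outlier])  # window drops oldest, adds outlier
after_broad = average(broad_30 + [outlier])       # pool simply grows

print(f"tight window drops by {before_tight - after_tight:.2f} points")
print(f"broad pool drops by {before_broad - after_broad:.2f} points")
```

With these invented figures the tight window moves a full point while the broad pool moves only a third of a point: the same outlier, three times the jolt.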

Fourthly – and this is the most curious thing of all – there is the distribution of results. Nationally, we have seen results from Biden +19 to Trump +1. There are double-digit spreads in many state polls too. In just one week, polls came out showing a Biden lead of +10 in Pennsylvania and a Trump lead of +3. In Florida results ranged from Biden +7 to Trump +5. In Wisconsin the range was Biden +1 to Biden +17.

If polling were as simple as pulling a handful of jellybeans out of a jar to sample the composition within, you would expect multiple polls to produce something close to what statisticians call a “normal distribution”: a cluster of results around a point which is probably close to reality, and a tail of more wayward results either side: a bell curve.

Unfortunately, humans are not jellybeans. Many refuse to come out of the jar. Some don’t know what colour they are. Some change colour every time they eat pork chops, and some tell fibs. Pollsters have many techniques to try to compensate for the uncertainty this creates. There is much debate about how that is best done. In theory, these uncertainties should have a simple effect: increasing the standard deviation of the bell curve to make it fatter.
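If respondents really were jellybeans, the bell would emerge on its own. A toy simulation (assuming a "true" support level of 52% and 1,000 respondents per poll – both numbers invented for illustration) shows repeated honest random samples clustering in a single hump around the truth:

```python
import random
import statistics

random.seed(0)

TRUE_SHARE = 0.52   # assumed "true" support for one candidate (illustrative)
SAMPLE_SIZE = 1000  # assumed respondents per poll (illustrative)

def one_poll():
    # Draw SAMPLE_SIZE independent respondents; report the observed share.
    hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

# Run 500 independent "polls" of the same jar.
results = [one_poll() for _ in range(500)]

# Honest sampling gives one cluster centred near the true share, with a
# spread of roughly sqrt(p*(1-p)/n), about 1.6 points here.
print(f"mean of polls  = {statistics.mean(results):.3f}")
print(f"stdev of polls = {statistics.stdev(results):.3f}")
```

The point of the sketch is the shape, not the numbers: sampling error alone produces one bell, never two humps.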

And here is the curious thing: when you examine the actual results pollsters are producing, what you see is not a fat bell. It is a drunken bell, leaning to one side. Or possibly a two-humped camel. While it has become closer to normal in the last few days, that may be the herd effect at work.

The October 22 (NZT) RCP polls provide an illustration. There were ten polls making up the average, and Biden had a lead of 7.6 points. They fell into two clusters: a small group (of three) giving Biden a lead of around 3% (given the tilt in the Electoral College, that would make for a knife-edge election); and a larger cluster of seven polls giving Biden a lead averaging 9.6%. There was a distinct gap between the clusters. The mean, median and mode of the results had become misaligned, with the latter two measures much more favourable to Biden.
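The misalignment is easy to see with illustrative numbers chosen to match those clusters (three polls near Biden +3 and seven averaging +9.6 – the individual figures below are invented to reproduce the stated summary values, not the actual polls):

```python
import statistics

# Hypothetical reconstruction of the two clusters: three polls near
# Biden +3, seven averaging +9.6, overall average about +7.6.
polls = [3, 3, 3, 9, 9, 9, 10, 10, 10, 10.2]

mean = statistics.mean(polls)      # pulled into the gap between the clusters
median = statistics.median(polls)  # sits inside the larger, Biden-friendly cluster

print(f"mean   = {mean:.1f}")    # the headline average, near +7.6
print(f"median = {median:.1f}")  # noticeably more favourable to Biden
```

In a normal distribution the mean and median coincide; here the mean lands in the empty gap between the humps, a reading that almost no individual poll supports.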

This is not normal statistical variation, and comes from a disagreement over how to measure the reality of voter opinion that mirrors the polarisation of the two camps’ political beliefs. Each group denounces members of the other as fools or frauds.

Robert Cahaly of Trafalgar Group and Jim Lee of Susquehanna Polling and Research are both predicting a Trump win and have denounced other pollsters for failing to see huge numbers of shy Trump voters. Lee has called some other polls “garbage”.

On the other hand, Kyle Dropp, Morning Consult’s chief research officer, has said his company has looked hard for these voters, and couldn’t find them. Nate Silver complains “there’s no f***ing evidence for it!” He cites cross-data with party identification that he says ought to show it up if it exists, but doesn’t; and also evidence from polling and results for Trumpy politicians in other countries.

If either cluster were just a couple of outliers we could dismiss them as nuts and ignore them. But there are too many. The Trump-friendly cluster seems to include Trafalgar, Susquehanna, Rasmussen, Harvard-Harris, Spry Strategies, Zogby, Insider Advantage, CardinalGPS; and (arguably) HarrisX, IBD, and Emerson. Some on that list are new. The Biden-friendly group includes Opinium, USC Dornsife, Morning Consult, JL Partners, YouGov, Ipsos, CNN, Data for Progress, Quinnipiac, Siena/NYT, Fox News and a whole lot more with less familiar names. There are pollsters giving results in the middle, but oddly few of them, given that this is where the average lies.

Some Trump enthusiasts have claimed that good polls for Biden are part of a conspiracy to suppress Republican turnout, or provide justification for a post-election claim of fraud when Trump has won. Democrats might equally claim that pro-Trump polls are part of a conspiracy to provide justification for a post-election claim of fraud when Biden has won.  Trump himself has often claimed that the only way he can lose is by massive vote fraud.

Now it might be that the pollsters of Cluster A are right and those of Cluster B are wrong. Or the reverse. But the average of them all is a result that almost no-one is picking! Of course, it is possible that both groups are wrong in opposite directions, and each by just the right amount to make the overall average accurate. But how likely is that? 

More likely is that a big group of pollsters are allowing politics to dictate the results they are producing. Which means that there is an enhanced probability that one group or the other is going to be proven spectacularly wrong, and even the polling average wrong by a historically unusual amount.  One way or another, there is going to be a lot of egg on face. And if it proves that one group is essentially right and the other group on another planet, the losers will be positively drowning in it.
