The pollsters, the betting markets and the tenor of (mainstream) media reports all favored a Hillary Clinton win on Tuesday. The widely followed FiveThirtyEight forecast gave her a 71.4% chance, and it was among the more skeptical. The New York Times' Upshot, which had endorsed the Democrat, gave her an 84% chance. Punters were sanguine as well. The Irish betting site Paddy Power gave Clinton 2/9 odds, or 81.8%.
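Fractional odds convert to an implied probability by simple arithmetic: odds of a/b mean the bookmaker pays out a for every b staked, so the implied chance of the favored outcome is b / (a + b). A quick sketch in Python (the function name is just for illustration):

```python
def implied_probability(numerator: int, denominator: int) -> float:
    """Implied win probability from fractional odds of numerator/denominator.

    Odds of a/b pay out a for every b staked, so the implied chance
    of the favored outcome is b / (a + b).
    """
    return denominator / (numerator + denominator)

# Paddy Power's 2/9 odds on Clinton: 9 / (2 + 9) = 81.8%
print(round(implied_probability(2, 9) * 100, 1))  # 81.8
```

The same arithmetic turns the Brexit bookmakers' pre-referendum odds into the roughly three-in-four chance of Remain that the markets implied.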
They were wrong, just as they had been five months prior.
"It will be an amazing day," Donald Trump thundered in Raleigh, North Carolina on Monday, Election Eve. "It will be called Brexit plus plus plus. You know what I mean?"
What he meant is that conventional wisdom ascribed similarly thin chances to a "Leave" victory in Britain's June referendum to exit the European Union. Stephen Fisher and Rosalind Shorrocks of Elections Etc summed up the probabilities on June 22, the eve of that vote: poll-based models gave "Remain" 68.5%; polls themselves gave it 55.6%; and betting markets gave it 76.7%. No matter whom you asked, Britain's future in the trading bloc seemed secure. Yet the Leavers won, 51.9% to 48.1%, sending the pound plunging against other currencies and wiping out a record $3 trillion in wealth as markets across the globe swooned. (See also, Brexit's Effect on the Market.)
What happened? How could pollsters and betting markets, having just been wrong-footed by Brexit, get their forecasts so wrong again?
What Happened in Britain?
Referendums are especially difficult to poll. In Britain's 2011 plebiscite on replacing the first-past-the-post electoral system, the proposed change won just 32% support, compared to forecasts of 34% from ComRes and 38% from YouGov. According to the Economist, writing in the run-up to Scotland's 2014 independence referendum (which most pollsters would get right), the culprit was insufficient grasp of the issue: voters, "confronted by a bamboozling question about electoral systems, voted No just to be safe."
There is anecdotal evidence to suggest that a similar lack of awareness was at work in the Brexit vote, though it worked in the other direction. For all the speechifying on common historical bonds versus national sovereignty, the EU is at heart a complicated bundle of arcane trade and immigration rules. Given that even the most basic outline of a new agreement has yet to emerge, Brits were not really voting on trade rules, but registering their feelings on more abstract issues. Some thought they were lodging a protest vote, and that Remain was sure to win anyway. The day after the referendum, British Google searches for "What is the EU" and "What is Brexit" spiked. (See also, Why the Brexit Is Such a Big Deal for the World.)
And yet the fact that Brexit was put to a referendum cannot be the only explanation: British pollsters also missed badly on the May 2015 general election.
Anthony Wells, director of political and social research at British pollster YouGov, wrote an exhaustive 3,500-word debrief of the Brexit polling flop in July. YouGov had called the race "too close to call," though it gave an edge to Remain. He argues that widespread expectations for a Remain victory were based on flawed readings of the polls, rather than flawed data. As he put it in an email to Investopedia Tuesday afternoon, before results in the U.S. election began to trickle in, "Brexit was as much an error by journos and the establishment in interpreting the polls as it was an error in the polls."
According to Wells, online polling of Britons' likely vote on Brexit, which favored Leave, ended up being more accurate than Remain-skewed telephone interviews. In the end, Leave support was even stronger than it had been in the online polls. The media, convinced that Remain would win, discounted online polls as less accurate, creating the impression that polling supported Remain more than it did. "The polls," Wells argued in a blog post from October, "were showing a race that was neck-and-neck." (See also, How Brexit Can Affect the European Economy.)
Why were telephone polls less accurate? One reason is an apparent partisan divide in answering the phone. Leavers were more difficult to contact, and the more attempts pollsters made to interview voters, the more the resulting polls favored Brexit. Education was also a sticking point. In May, YouGov published a post fretting that phone polls appeared to favor graduates too heavily, although in hindsight, Wells concludes that the proportion of graduates alone "cannot explain why some pollsters performed badly." (See also, A Trump Win Would Dump 5% Off S&P 500: Citi.)
Methods for adjusting the data also proved inadequate. Predicting turnout was difficult, since the trends sent conflicting signals. Those who hadn't voted in previous elections tended to favor Leave, so models based on past voting behavior skewed Remain. Older voters are generally more likely to turn out, and they favored Leave, but educated and middle-class voters – Remainers, on balance – also tend to make it to the polls in relatively high numbers. The final consideration, that fired-up Leave voters said they were more likely to vote, turned out to be the most important: "it looks as if a traditional" – for Britain – "approach of relying on self-reported likelihood to vote would have been more accurate."
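The trade-off Wells describes, between modeling turnout from past behavior and simply asking respondents, can be sketched with toy numbers. Everything below – the respondents, their preferences, and the 0–10 likelihood scores – is hypothetical, purely to show how weighting by self-reported likelihood reweights a raw head count:

```python
# Hypothetical poll respondents: (stated preference, self-reported
# likelihood of voting on a 0-10 scale). The numbers are invented
# to illustrate the mechanism, not real polling data.
respondents = [
    ("Leave", 10),
    ("Leave", 9),
    ("Remain", 7),
    ("Remain", 6),
    ("Remain", 8),
]

def weighted_share(data, side):
    """Share of the turnout-weighted vote going to `side`."""
    total = sum(score for _, score in data)
    return sum(score for pref, score in data if pref == side) / total

# Unweighted, Remain leads 3 respondents to 2 (60% to 40%); weighted
# by self-reported likelihood to vote, the race tightens markedly.
print(round(weighted_share(respondents, "Leave"), 3))  # 0.475
```

Because the more enthusiastic Leave respondents get heavier weights, the weighted race is far closer than the raw count – the effect Wells says a "traditional" British approach would have captured.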
Finally, there was the question of what to do with "don't knows." Before the vote, the argument began to gain traction that undecided voters would naturally favor the status quo, but there were other strategies used to allocate these voters. "In every case these adjustments helped remain," Wells writes, "and in every case this made things less accurate."
What Happened in the U.S.?
YouGov expected Clinton to take 317 Electoral College votes to Trump's 221. While the final tallies are not in at the time of writing (11:00 am EST Wednesday), the AP has given Trump 276 votes, with 270 needed to win. (See also, The S&P 500 President Predictor Is Not Gospel.)
What went wrong? According to FiveThirtyEight's Nate Silver, not as much as critics are saying. In a podcast Wednesday, he argued that the error was not so much with the polling as with the interpretation:
"Clinton’s going to wind up winning, when the California votes are counted, the popular vote by a point-ish, maybe a point and a half. The national polling average had her up three and a half or four points. That’s a pretty ordinary polling error. And so first of all – I said this after Brexit right – the people whose first instinct is to blame the polls when there was so much smugness from the commentariat […] the polls were less wrong than the conventional wisdom in New York and Washington about Donald Trump's chances."
Wells, who made a similar argument about Brexit, disagrees. In the run-up to the election Tuesday, he appeared confident that we wouldn't see a repeat of the Brexit miss. British pundits had misinterpreted ambiguous polling data when they projected an establishment win, in his view, but in the U.S., the polling data wasn't ambiguous: national and state polls clearly showed Clinton ahead. "It's true to say that the media and the elites all think Clinton will win," Wells wrote in an email Tuesday, but only "because the evidence really is consistent, not because they are selectively reading the evidence."
In other words, there is a problem with the polls.
American pundits expected the polls to be skewed, of course. Take the question of cell phones versus land lines. According to the National Center for Health Statistics, 60.5% of Hispanic households – a group Trump has repeatedly offended with his rhetoric on immigration – lacked a land line, compared to 48.4% of non-Hispanic black households and 44.0% of non-Hispanic white ones. The disparity tracks age as well as ethnicity: while just 20.5% of adults 65 or older lack a land line, 72.6% of those aged 25 to 29 do. Add in pollsters' tendency to poll only in English, and you are left with a potentially wide margin of error. (See also, Mexican Peso Surges as Polls Point to a Clinton Victory.)
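The mechanics of that coverage gap are easy to illustrate. The sketch below applies the NCHS land-line figures quoted above to a simplified, hypothetical three-group population; the population shares are invented for illustration, not census data:

```python
# Each group: (hypothetical population share, land-line coverage).
# Coverage is 1 minus the NCHS share of households lacking a land line;
# the population shares are made up for this simplified three-group example.
groups = {
    "hispanic": (0.17, 1 - 0.605),
    "non_hispanic_black": (0.13, 1 - 0.484),
    "non_hispanic_white": (0.62, 1 - 0.440),
}

# A land-line-only sample reaches each group in proportion to
# population share times land-line coverage.
reachable = {g: pop * cov for g, (pop, cov) in groups.items()}
total = sum(reachable.values())
sample_share = {g: r / total for g, r in reachable.items()}

# Hispanic households fall from 17% of this toy population to
# roughly 14% of a land-line-only sample.
print(round(sample_share["hispanic"], 3))  # 0.139
```

Pollsters correct for exactly this kind of under-coverage by weighting the raw sample back toward known population shares, which is why the distortion, on its own, was expected to hurt Trump's numbers rather than Clinton's.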
But that distortion should have benefited Clinton, muting her support in the polls and leading to a surprisingly large margin of victory for the Democrat. No dice.
Patrick Murray, director of the Monmouth University Polling Institute, told Investopedia Wednesday morning that the result was "very similar to what happened in Brexit," and pollsters "didn't learn that lesson well enough." A large contingent of Trump voters, whom he describes as "quietly biding their time," simply did not show up in the polls; Monmouth's last national poll, conducted from November 3 to 6, showed Clinton leading Trump by 6 points among likely voters.
Asked what pollsters could do about these quiet voters, Murray confessed, "We don't know yet." He laid out a plan for a "first line of inquiry": see who didn't talk to the pollsters, look for patterns, try to develop a profile of the kind of voter that threw the forecasts off. "It'll take a while to sort this out," he said, adding, "there's no question that this was a big miss."
Is he worried polling will be discredited? "Yes." The industry has relied on "establishment models," and these are apparently inadequate when it comes to capturing a transformational political movement.
Writing on Wednesday, Wells – who, as a member of YouGov's British team, describes himself as "quite separate" from the pollster's American operations – offered his colleagues across the pond some advice, since "we in Britain have, shall I say, more recent experience of the art of being wrong." As with the Brexit vote, he attempts to separate the quality of the data from the quality of the narrative. Given that Clinton probably won the popular vote, polls showing her doing so are not fundamentally wrong. They favored her too much, however, and state polls were without a doubt way off the mark.
Can polling be fixed? It's too early to say, but one suggestion Wells makes is that modeling turnout on historical data, as U.S. pollsters have tended to do, can be treacherous. When Brits tried this method on the Brexit vote, rather than simply asking people how likely they were to vote, they got "burnt." (See also, So You Want to Move to Canada: Good Luck.)