
How the polls went wrong – and why they are still worth listening to

  • By Lord Ashcroft
  • 20 January 2016
  • Polling

The preliminary results of the British Polling Council inquiry into the general election polls have been published. The findings give the industry plenty to think about.

According to the inquiry team, led by Professor Patrick Sturgis of Southampton University, the main reason the final polls did not reveal a decisive Tory lead was that polling samples – the people who took part in the surveys – were not sufficiently representative of the voting population. For example, the team highlighted evidence from the polls they examined that in the oldest age group, those aged 65 and over, too many participants were at the younger end of the scale, and not enough were aged over 75. The fact that people willing to take part in polls were more likely than others to be politically engaged – while looking the same as everyone else in demographic terms – also helped to skew things.

In practical terms, this means that to produce more accurate voting intention figures pollsters will need to find ways of getting more representative samples, or improve the way they weight the data from the samples they can get. The inquiry team’s recommendations on how to go about this will follow next month. Like everyone in the polling world, I look forward to these with great interest.
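By way of illustration, the sketch below shows one simple form of that adjustment: cell weighting, in which each respondent is weighted by the ratio of their group's share of the population to its share of the sample. It is a minimal example written in Python; the age bands and proportions are invented for illustration and are not the inquiry's figures.

    from collections import Counter

    # Hypothetical sample: one age band recorded per respondent.
    sample = ["18-34"] * 200 + ["35-64"] * 500 + ["65-74"] * 250 + ["75+"] * 50

    # Hypothetical population shares for the same bands (they sum to 1).
    population_share = {"18-34": 0.28, "35-64": 0.47, "65-74": 0.13, "75+": 0.12}

    n = len(sample)
    sample_share = {band: count / n for band, count in Counter(sample).items()}

    # Weight each band by population share divided by sample share.
    weights = {band: population_share[band] / sample_share[band]
               for band in population_share}

    for band, weight in sorted(weights.items()):
        print(f"{band}: weight {weight:.2f}")

    # The under-represented over-75s are scaled up (weight 2.40), while the
    # over-represented 65-74 band is scaled down (weight 0.52).

Real polling adjustments are considerably more involved – weighting on several variables at once, often iteratively, and increasingly on measures of political engagement as well as demographics – but the underlying arithmetic is of this kind.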

Meanwhile, this renewed focus on polling has prompted me to reflect. My first observation is that the polling world has been remarkably open and honest in facing up to what happened and trying to put it right. Anyone who heard the first episode of the Radio 4 programme Can We Trust The Opinion Polls?, presented by former BBC head of political research David Cowling, will have been struck by the readiness of the pollsters interviewed to acknowledge the discrepancy between their findings and the election result, and by the absence of hubris.

This is as it should be. The average of the final pre-election polls was a tie between Labour and the Conservatives on 33 per cent – what my own final survey found – not the seven-point Tory victory we saw the following day. This last batch of polls was remarkably uniform: ten of the eleven eve-of-poll surveys showed a dead heat or a one-point lead for one party or the other; the eleventh found a two-point Labour lead. This has led some commentators to accuse the pollsters of deliberately “herding” – essentially, fiddling their figures to make them look more like the polls already published, so as to avoid being accused of producing an outlier or a rogue.

But whatever the pollsters’ failings, this is not among them. And as Professor Sturgis pointed out, those who did change the way they did things as the election approached would have been further from the result, not closer to it, had they not done so.

I can also refute the charge of deliberate herding from my own experience. During the year we ran the Ashcroft National Poll – from May 2014 until the election – we made just one change to our methodology: from January 2015 we included UKIP in our initial voting-intention question. (It made remarkably little difference.)

In fact, the idea that we might have tampered with our numbers to make our polls fit into the pack brings a wry smile to my face, for this reason: when the ANP raised eyebrows, it wasn’t because our numbers looked suspiciously like everyone else’s. The six-point Tory lead I published in the week before the election looks rather different now from the way it was received at the time.

Though it is right that pollsters should acknowledge where they went wrong, we should not give the impression that all polling is bogus. Indeed, the pre-election polls should not be dismissed outright. At the beginning of 2015, my own research uncovered the scale of the SNP landslide at constituency level – a phenomenon that well and truly came to pass. And in England and Wales, the constituency polls I conducted during the campaign had an average “error” of just three per cent and identified the correct winner seven times out of ten.

Though it is clearly hard to pinpoint parties’ exact vote shares, polling can tell us a great deal about bigger-picture questions like the strengths and weaknesses of parties and leaders or attitudes to policies and issues. Qualitative research, such as my pre-election focus groups, also helps us understand what is really going on. These things have been the main focus of Lord Ashcroft Polls since 2010, and I expect them to play a bigger role in forming expectations of future elections, relative to the daily numbers in the “horse race”.

More broadly, polling is valuable as a reality check, helping politicians understand what people really think and avoid making false assumptions about why they lost (or indeed won). It can remind them that their own priorities may not match those of the voters, and that the language they use and the way they conduct themselves can easily put people off.

Yet as I observed in The Unexpected Mandate, my analysis of the election (in which I reflect on my pre-election research at greater length), for polls to be able to fulfil this function they need a degree of public trust. This means restoring some of the credibility they lost last May. This is far from done, and will take time. But as Professor Sturgis said in his presentation, channelling Churchill, polling is the worst way there is of measuring public opinion, apart from all the other ways. That is why it matters that pollsters are trying to put things right, and are being seen to do so.
