Wednesday, April 2, 2014

How much credibility do the polls have?

Over the last couple of days, I've been contacted by three different people asking for my thoughts on this blogpost from Auld Acquaintance, which offers a number of possible reasons for believing that the polls may be understating the true strength of the Yes vote. So I thought it would probably be simpler just to give my reaction in a new post.

First of all, though, a few general observations about where we currently stand. As regular readers know, I get a bit exasperated with the 'Flat Earther' tendency among some Yes supporters in their attitudes to the polls. How often do we hear "well I never look at the polls, they're all rubbish anyway" or "the only poll that matters is on the 18th of September" or "OK, they asked 1000 people but what about the other 4 million?" It really doesn't do any of us any good to stick our heads in the sand like that and pretend that the basic principles of opinion polling haven't been comprehensively tested and verified over a period of several decades. If you interview a genuinely representative sample of 1000 people and weight the results absolutely perfectly, then on 95 occasions out of 100 you'll get a result that is accurate to within a margin of error of 3%. That is a fact.
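
For anyone who wants to check that arithmetic, here is a minimal sketch of the standard margin-of-error calculation (assuming a simple random sample and the worst-case 50/50 split; 1.96 is the 95% point of the normal distribution):

```python
import math

# 95% margin of error for a simple random sample of size n,
# at the worst-case split p = 0.5
n = 1000
p = 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"+/-{margin:.1%}")  # roughly +/-3.1%
```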

But that knowledge certainly doesn't mean that we should go to the other extreme and treat opinion polling as some kind of infallible God. The reason we should always approach polls with a critical eye is that they usually fall at least slightly short of the absolute methodological perfection that I've just described, meaning that on a substantial minority of occasions they'll be outside the standard margin of error. OK, it's very rare in western democracies for election results to vary dramatically from the final polls, but moderate divergences are much more common. The classic example in the UK was the 1992 general election, when an average of the final polls had Labour 1% in the lead, but the Tories ended up winning by 7%. About half of that divergence was caused by genuine methodological errors in the polls (the remainder was a late swing in opinion).

So probably the best attitude to take to any individual poll is: "This is likely to be vaguely in the right ball park, but it also has an entirely indeterminate 'real world' margin of error, because there's no way of knowing for certain how many methodological mistakes are being made, or how great the impact of those blunders is."

Furthermore, we're in a highly unusual position in this particular referendum campaign, because we already know for a fact that some of the polls are getting it wrong by a significant margin. The reason we know that is the huge and sustained divergence between the numbers being produced by different polling organisations. ICM, Survation and Panelbase are currently showing a Yes vote of 45-47% after Don't Knows are excluded, YouGov and TNS-BMRB are showing a Yes vote of 40-42%, and the extreme outliers Ipsos-Mori are showing a Yes vote of just 36%. There is simply no way of reconciling those figures, which means that one of two things must be true - either a) some of the polls are wrong to a substantial degree, or b) all of the polls are wrong to a substantial degree. The first possibility is obviously the more probable of the two, but the second can't be discounted - leaving open a small outside chance that Yes may already be in the lead (or indeed that No have a bigger lead than even Ipsos-Mori are reporting). To understand why, we have to look at the underlying reason for the highly unusual failure of the different pollsters to broadly agree with each other.
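
To get a feel for why sampling error alone can't account for a spread like that, here is a rough back-of-envelope check. The 46% and 36% shares are representative of the figures above, but the sample sizes of 1,000 are my assumption (referendum polls are typically around that size):

```python
import math

def gap_in_standard_errors(p1, n1, p2, n2):
    """How many standard errors separate two independent poll shares."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# A Panelbase-style 46% Yes versus an Ipsos-Mori-style 36% Yes
print(f"{gap_in_standard_errors(0.46, 1000, 0.36, 1000):.1f}")  # about 4.6
```

A gap of four or five standard errors essentially never arises by chance, which is why at least some of the pollsters must be making methodological errors.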

The methodology for standard voting intention polls is generally perfected by using the most recent election result as a baseline. If a pollster was out by 2% in that election, they'll ponder the reasons for that small error, and identify the tweaks they could have made that would have produced a totally accurate result. Implementing those tweaks in future polls significantly reduces the likelihood of reporting misleading numbers. But the problem is that the pollsters aren't actually able to go through that process for the referendum, because there is no baseline to work from. At best, we have the devolution referendum of 1997, but that took place in a different world - most of the present-day polling organisations weren't even around back then. So instead they've had to rely on hunches about which methodology will prove most accurate, and to some extent they've all been guessing differently. The chances are that at least one of them has guessed right, but it's just conceivable that none of them have. That's why there's a slightly greater risk of all the polls being substantially wrong in this case than there would be in a regular election. The lazy London-centric assumptions that have informed some of the guesswork don't help either, although that's become slightly less of a problem as time has gone on.

There's also another scenario in which Yes could be in a slight lead without that showing up in any of the polls, and it's one that has nothing really to do with methodological 'errors' as such. If supporters of independence are significantly less likely to reveal their true intentions to pollsters than opponents of independence are, then the Yes vote would be under-reported across the board. Intuitively that seems like a genuine possibility, but nobody really knows if it's going on. Pollsters can't realistically be expected to adjust their methodology to take account of a 'Shy Yes Syndrome' that hasn't yet been conclusively proved to even exist. And in any case, respondents would presumably be less shy in front of a computer screen than they would be when talking to a live interviewer, so the fact that four of the six active pollsters conduct their fieldwork online ought to be counteracting most of the reticence.
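
Purely to illustrate the mechanism - the 'shyness rate' below is invented, and as I say, nobody knows whether the effect exists at all - differential reticence would distort the headline figure something like this:

```python
def reported_yes_share(true_yes, shy_rate):
    """Yes share among expressed opinions, if a fraction `shy_rate` of Yes
    supporters conceal their view while all No supporters answer honestly."""
    expressed_yes = true_yes * (1 - shy_rate)
    expressed_no = 1 - true_yes
    return expressed_yes / (expressed_yes + expressed_no)

# If 10% of Yes supporters kept quiet, a true 50/50 race would be
# reported as roughly 47.4% Yes after excluding Don't Knows.
print(f"{reported_yes_share(0.50, 0.10):.1%}")
```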

In a nutshell, then, although it's theoretically possible that Yes are already in the lead, there are no solid grounds that I know of for assuming that to be the case. The more plausible best-case scenario is that ICM, Panelbase and Survation are right, and that Yes are just a few percentage points behind with a bit of momentum in their favour. With five-and-a-half months still to go, that would be a pretty decent position to be in.

Unfortunately, we also have to consider the worst-case scenario, which is that Ipsos-Mori are right and everyone else is wrong. There's no getting away from it - the fact that the one and only telephone pollster in this campaign are producing such low figures for Yes is a cause for concern. It may mean that if telephone polling was the norm rather than the exception, the average No lead would be much higher than it currently is. But we simply don't know whether the telephone element of Ipsos-Mori's methodology is what's causing the difference, or whether it's something else entirely (such as the failure to weight by recalled 2011 vote). It's also perfectly possible that Ipsos-Mori are getting it wrong precisely because they are polling by telephone. It may no longer be feasible to reach a fully representative sample of the Scottish population in the way they are attempting. (For example: are they phoning landlines only? No-one seems to know.)

I was reading an interesting article by ICM's Martin Boon recently, about the tendency for UKIP to do much better in online polling than in telephone polling. He didn't offer any definitive answers about which was more likely to be accurate, but based on past experience he did raise the intriguing possibility that the results of different types of polls would "magically converge" when polling day came into view. Could that happen in the independence referendum as well? If so, it would suggest that all of the pollsters are telling us a "form of the truth" in a mysterious way that we don't fully understand, and that all of them are equally important. I don't really believe that, though - my guess is that Panelbase and Ipsos-Mori will still be miles apart in September, meaning that at least one of the two will be proved hopelessly wrong. If one of them is showing a Yes lead and the other is showing a No lead, we really will be in for a terrifying few hours.

But perhaps campaigners' experiences on the ground can provide clues about which pollster is more likely to be correct? To return to the 1992 example, Neil Kinnock famously suspected in advance that he was going to lose, and that foresight didn't come from private polling - it was from looking people in the eye on the campaign trail. So from that point of view, the fact that Yes canvassers seem to be so buoyant at present is certainly a cause for encouragement, but it's scarcely definitive.

OK, let's turn now to the specific points that Auld Acquaintance made in his blogpost -

"Why do I think the polls are wrong too?

The evidence is out there, it is increasingly clear that Better Together cannot get the numbers of people supporting them on the ground, the few poor souls they do get cast very lonely figures on their stalls and they cannot give away their wares, the public do not want to know. Yes stalls are full of support, and people queue up to chat, ignoring Better Together.

People in their droves across the country turn up for Yes events, a mere handful turn up for Better Together."


None of these factors strike me as being proof of anything at all. It may simply be that Yes voters are far more enthused, and that No voters just don't care about the referendum as much (or at least not yet). And perhaps people have more questions for the Yes side than for the No side.

"There are debates regularly taking place in schools, universities, village halls and consistently YES wins, by larger and larger margins, when was the last time you heard of NO winning one?"

Again, the attendees at debates are self-selecting, and if Yes voters are more excited by the referendum you'd expect a skew towards them. What is genuinely encouraging is the swing to Yes that often occurs between the 'before' and 'after' straw polls at the debates. That could be a portent of great things to come when the currently uninterested section of the population is exposed to saturation TV coverage of the referendum in August and September. But it isn't in itself a sound reason for thinking that the polls are wrong right now.

"Yes campaigners and Radical Independence have been doing their own polling, and the results are nearly all positive in the favour of YES."

I presume that simply refers to canvassing, and if so it's a positive sign, but one that has to be treated with extreme caution, because canvassers are sometimes told what they want to hear. At the Cowdenbeath by-election, the SNP claimed that they had found 41% of voters were planning to vote Yes, and 36% were planning to vote No. But Labour claimed that their canvassers had come up with numbers that bore no resemblance to that, with No at around 60% and Yes in the mid-teens. Are Labour lying to us? Or were people lying to Labour because they were embarrassed to tell their traditional party that they took a different view on independence? My gut feeling is that the SNP figures are at least somewhat closer to the truth, but it must be obvious that there's huge scope for uncertainty here.

"The polling companies by taking equal numbers from each of the class bands have this wrong.

If you take the AB’s as being NO, and the DE’s as being YES, on first sight that would seem a fair balance, would you not?

You would be wrong, because in Scottish society the actual numbers of people in class AB, are dwarfed by the numbers of people in classes DE."


The problem with this claim is that I just can't see any evidence that the polling companies are taking equal numbers from each class band. For example, the last YouGov poll had 354 respondents from the DE category after weighting, compared to just 209 from the AB category. We're not dealing with complete imbeciles here - you can be sure that the pollsters will be at least attempting to interview a representative sample of the Scottish population, based on specifically Scottish data from the census and other sources. That doesn't mean that they won't sometimes make mistakes - as I've mentioned several times, Panelbase had to change their methodology a few months ago because they were under-representing older voters. But that particular mistake was actually flattering the Yes campaign.
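
For anyone unsure of the mechanics, here is a minimal sketch of weighting to population targets. Every number below is invented for illustration - real pollsters take their targets from the census and similar sources:

```python
# Raw interviews achieved in each social grade (invented numbers)
sample_counts = {"AB": 250, "C1": 280, "C2": 220, "DE": 250}
# Assumed population shares - in reality these come from the census
population_share = {"AB": 0.19, "C1": 0.28, "C2": 0.21, "DE": 0.32}

n = sum(sample_counts.values())
for grade, count in sample_counts.items():
    weight = population_share[grade] * n / count
    print(f"{grade}: weight {weight:.2f} -> weighted count {count * weight:.0f}")
# DE respondents are weighted up and ABs down, so the weighted sample
# mirrors the population profile rather than equal-sized class bands.
```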

So all in all I think there's probably a fair bit of wishful thinking at play in Auld Acquaintance's post. It's true that I've been just as critical of the polls over the last couple of years, but that's been for completely different reasons, mainly relating to lack of transparency, biased preambles, and weighting by recalled vote from 2010 rather than 2011. Most but not all of those issues have now been resolved. I'm still troubled by Ipsos-Mori's extreme secrecy, by YouGov's bizarre practice of splitting 2011 SNP voters into two categories and weighting them separately, by TNS-BMRB's apparent presumption of a 100% turnout, and by the possibility that Panelbase may yet revert to their subtly biased preamble in future polls.

Whether we think the polls are right or wrong, though, one thing there isn't any doubt over is the trend. Just lately that's proved to be a relentlessly good news story for Yes.

14 comments:

  1. Thanks James, that was fascinating.

    One thing that you didn't mention was the length of the campaign. I think a lot of people have put politics on the back burner until 'nearer the time', so I think it will perhaps be the last six weeks or so that show the really interesting polling data.

    With the trends going in Yes's favour, as well as most of the Yes/No debates showing a healthy increase in the Yes vote afterwards, I think Yes is in a very good position right now.

    I'm sure we are winning, if not in the polls right now, at least in the hearts and minds of the voters, if that makes sense!

  2. I was polled on the referendum by Ipsos Mori a couple of weeks ago (by landline). A pretty comprehensive set of issues was covered and some of the questions really made me pause and think (a bit of a personal revelation!) However, whilst I was asked about how often I voted in various types of elections (from local through to general elections), I was not asked which party I voted for in any of these, which I found a bit odd.

  3. If pollsters are dialling landlines, what happens with ex-directory numbers and others who have registered for no cold calls? If they are not included, does that skew the figures in any way?

  4. Thanks for a very professional article.

    I was reading - elsewhere I think - that an opinion poll conducted, what, 20 years ago, found that the over-55s were much more likely to believe people of 'import' than the younger cohort, who relied more on peer views and/or opinions.

    Which would suggest that a grass roots approach to this is the right way to achieve independence.

    Although I have to agree that there is a bit of self-selection going on in the audience for either a Yes event or a No event, at least some of the audience must be undecided. And the relative turnouts must tell us something too.



  5. I too found this article useful and interesting, James. Thank you.

    With reference to telephone polls as opposed to on-line: is there not more likelihood of "embarrassment" or "shame" influencing what people say when speaking to a "real" person? You mentioned that Martin Boon's article noted UKIP did better in on-line than in telephone polling. I suspect that this is due to people being embarrassed to say to a real live person that they support a party which is generally seen as xenophobic, even racist, insular and right-wing, with a tendency to attract a larger-than-proportional share of oddballs. I'd venture to suggest that the BNP shows the same sort of difference.

    One of the suggestions at the time of Kinnock's defeat that you mention, when the pollsters were so wrong, is that people (after 13 years of Thatcher and Major) were embarrassed to own up, face to face (or ear to ear), that they would vote for what was perceived as a greedy, self-serving bunch, so they said that they would vote for a centrist party, largely seen at that time as being more redistributive, then went out and did what their wallets told them.

    I'd imagine that that polling was done by phone or face to face.

    One of the problems about polling is that not everyone tells the truth for whatever reason.

  6. Timothy (likes zebras), April 2, 2014 at 10:54 AM

    James, this is an excellent overview of opinion polling, both in general and with regard to the Independence Referendum. I would, however, like to add a couple of points.

    "If you interview a genuinely representative sample of 1000 people and weight the results absolutely perfectly, then on 95 occasions out of 100 you'll get a result that is accurate to within a margin of error of 3%. That is a fact."

    That isn't completely accurate. A better wording would be:

    "If you interview a genuinely random sample of 1000 honest people, then on 95 occasions out of 100 you'll get a result that is accurate to within a margin of error of 3%. That is a fact."

    My understanding is that if you had a perfectly random sample of people who told you the absolute truth, then you wouldn't need to apply any demographic, political or social weighting.

    One example where this might occur would be if you were to select 1000 ballot papers perfectly at random and do a count on those - this would be a statistically perfect opinion poll, and you wouldn't have to know the age, sex, past vote, etc, of any of the people who had voted.

    The reason this is important is that weighting respondents' answers up and down to ensure that you have a representative sample reduces the effective size of the sample.

    I haven't worked through the Maths to know how large this effect would be in a typical opinion poll, but if you imagine that the effective sample size is 500 after weighting your initial sample of 1000, then this increases the margin of error from about 3% to about 4.5%. And, of course, this assumes that the weighting has been done perfectly, which is unlikely to be the case as you point out.

    This means that, even with perfect weighting, its necessity in opinion polls reduces the effective sample size and increases the margin of error.
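
    For illustration, a quick sketch of that effect using Kish's standard approximation for effective sample size (the weights below are invented):

    ```python
    import math

    def effective_sample_size(weights):
        """Kish's approximation: n_eff = (sum of weights)^2 / sum of squared weights."""
        return sum(weights) ** 2 / sum(w * w for w in weights)

    # Invented example: half the respondents weighted up to 1.5, half down to 0.5
    weights = [1.5] * 500 + [0.5] * 500
    n_eff = effective_sample_size(weights)
    margin = 1.96 * math.sqrt(0.25 / n_eff)
    print(f"n_eff = {n_eff:.0f}, margin of error = +/-{margin:.1%}")  # 800, +/-3.5%
    ```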

    The second point I would add is just to emphasise a point you make: that campaigns really do matter. People do change their minds, as we saw dramatically with the AV referendum and the 2011 Holyrood elections.

    As a supporter of Independence you no doubt have some confidence in the potential of your campaign messages to change people's minds as the campaign progresses.

    What would be more interesting to you, then, are the subsidiary questions in the opinion polls that ask people for the reasons that they choose either Independence or Union.

  7. I was phoned by Ipsos-Mori. The first main question they asked was independence, yes or no. Then they asked a series of questions straight out of the No campaign fear strategy book. Would Scotland get into the EU, would there be a shared currency, etc... I think people are going back and changing their first answer after being scared by Ipsos-Mori.

  8. Alan, was a preamble read to you before the referendum question?

    1. No preamble. It was my landline about 2 weeks ago.

  9. James, this is an excellent article and it has prompted me to share with you my own personal experience.

    I had never been polled before, but in September I was contacted by Ipsos-Mori. On paper I should be a No voter: retired professional, own my house, good company pension, etc.

    After being asked if I would take part in the poll, I was then asked if they could contact me on a regular basis as they wanted to follow trends.

    After two or three very leading questions, one of which started "Since Scotland would not be admitted to the EU...", I was then asked "If the referendum was tomorrow, how would you vote?" Answer: Yes. "How likely are you to vote?" Answer: definitely.

    After a few other questions the interviewer thanked me for my time then finished the call.

    I have never heard back from Ipsos-Mori. I wonder why?

  10. Interesting summary James, I enjoyed the read, and thanks for reading my article, even though you were not entirely in agreement.
    An aspect I think ought to be considered when viewing the polls is how they largely failed to get to grips with the support for the SNP in 2011.
    I am not at all sure that they have got to grips with the referendum, and may possibly be attempting to apply general election methods.
    It will be interesting to see.

  11. There is something I would like to point out though.
    Referendums don't have as well-tested a methodology as general elections. In Ireland, where referendums are fairly common, polls usually get the result wrong by a mile.
    I trust polls as much as you do, and I follow a lot of them in many different countries, but the fact is that here and everywhere else referendums are a totally different thing.

  12. This is an excellent article, James, closely and logically argued; I can't really fault it. Having said all that, if more than half of the people of Scotland are presently in their hearts intending to vote No, I will go into the field and eat grass with the sheep. I guess in the end I trust my intuition and not my reason in this...

  13. Timothy,
    I agree. You beat me to it. Here's what I had written:

    James,
    At the end of paragraph 2 you say:
    " If you interview a genuinely representative sample of 1000 people and weight the results absolutely perfectly, then on 95 occasions out of 100 you'll get a result that is accurate to within a margin of error of 3%. That is a fact."

    Now I see the point you are making and it is indeed helpful in making clear the undeniability of the statistical law that is the basis of opinion polling. It's particularly helpful that you've added the numbers that define the accuracy. I don't dispute them. However, your statement worries me. This would have been more correct:
    If you interview a genuinely random sample of 1000 people then on 95 occasions out of 100 you'll get a result that is accurate to within a margin of error of 3%. That is a fact.

    I'm sure you see the point; if you interview a genuinely random sample then by definition it is representative. There is absolutely no need to weight the results. In fact, weighting the results introduces a bias - a deviation from the principle of randomness on which that "law" relies.

    Now I understand that it is not always possible to get a truly random sample. I also understand that what pollsters are therefore doing is aiming to get that representative sample by other means, i.e. by matching to other relevant population criteria. It follows, however, that the 95-out-of-100 with a 3% margin of error law now applies only to the extent that the representative sample thus achieved approximates to a random sample. For this to happen the sample size has to be much bigger. 1000 people just isn't enough.

    Here's why. Let's take it that the pollster believes gender is important, and so takes steps either to seek responses from an equal number of males and females, or to weight the responses to correct for any gender imbalance in those polled. Pollsters do this all the time, and that's fine as far as it goes; however, it could be argued that what we are dealing with in a situation like this are two quite distinct populations. If we are dealing with two populations, then for the 95-out-of-100, 3%-margin-of-error law to apply, the pollsters would need to poll 2000 people (1000 men plus 1000 women).

