Living with uncertainty is one of the things humans find most difficult.
And while it’s virtually impossible to be certain even about the exact details of the past, it is completely impossible to be certain about the future.
Yet that is the standard we demand of our pollsters, even when they don’t attempt to measure the future.
In this last election every single one of the published national polls failed to get the exact numbers right for what would happen in the future.
So how well did they do at measuring the past, given the uncertainty inherent even here? Newspoll had the ALP on 52 per cent of the vote two days out; Ipsos a bit earlier had them at 51 per cent, as did Essential and Roy Morgan.
All of these pollsters use slightly different sample sizes and different methods. Morgan and Ipsos have samples of around 1,500 and 1,800 respectively, while the rest are around 1,000.
In the final result the ALP won 48.39 per cent of the two-party preferred vote.
Now the sample error on a sample of 1,000 is ±3.1 per cent at the 52 per cent mark (Newspoll’s result), which means that 95 per cent of the time the actual figure for the whole population could be anywhere from 48.9 to 55.1 per cent. And five per cent of the time it could be even further out. So Newspoll was in the five per cent zone, and the rest got results that a competent pollster would get, based on the thing they weren’t measuring – election day results.
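For readers who want to check the arithmetic, the ±3.1 figure is the standard 95 per cent margin of error for a proportion. A minimal sketch in Python, assuming a simple random sample (which no real poll quite is):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 per cent sampling error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Newspoll: 52 per cent on a sample of roughly 1,000
moe = margin_of_error(0.52, 1000)
print(f"±{moe * 100:.1f} pts")                               # ±3.1 pts
print(f"range: {52 - moe * 100:.1f} to {52 + moe * 100:.1f}")  # range: 48.9 to 55.1
```

The same function reproduces the tighter margins for the larger Ipsos and Morgan samples.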
There might be something systemic happening here, though, because the results are suspiciously close together.
Because we have four pollsters we can also do some meta-analysis and combine their results, weighted by sample size, to give us effectively a larger sample with a Labor vote of 51.2 per cent. The sample error is now ±1.49 per cent, which puts the election-day result well outside the expected range: in 95 per cent of cases we would expect the combined figure to fall between 49.68 and 52.68 per cent.
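The pooled figure can be reproduced with a sample-size-weighted average. The sketch below uses the approximate sample sizes quoted earlier; depending on exactly which samples you pool, the combined margin lands between roughly ±1.3 and ±1.5 points, and either way the election-day 48.39 per cent sits outside the 95 per cent band:

```python
import math

# (pollster, Labor two-party share, approximate sample size) as quoted in the article
polls = [("Newspoll", 0.52, 1000), ("Ipsos", 0.51, 1800),
         ("Essential", 0.51, 1000), ("Roy Morgan", 0.51, 1500)]

n_total = sum(n for _, _, n in polls)
pooled_p = sum(p * n for _, p, n in polls) / n_total   # sample-size-weighted mean

moe = 1.96 * math.sqrt(pooled_p * (1 - pooled_p) / n_total)
print(f"pooled estimate: {pooled_p * 100:.1f}%  ±{moe * 100:.2f} pts on n={n_total}")
```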
Either the pollsters are incredibly unlucky, or something is wrong, but what?
Don’t pay any attention to the industry. The reason they spend so much time on elections is that this is one of the few times you get a proof of concept where you can measure the accuracy of your polling against an actual outcome.
Being this far “wrong” is a commercial disaster. If you can’t call an election right, how do we know you are telling us the truth about what sort of baby wipe people are looking for?
So their explanations are diversions meant to get them through to the spot where clients have forgotten the failure.
Ipsos is suggesting establishing a polling council, to police the accuracy of polls. Like they have in the UK. Because the British Polling Council was so successful in ensuring that British polls accurately picked the Brexit vote. OK, well maybe that isn’t a solution after all, just another level of bureaucracy.
Labor pollster John Utting suggests spending more on polling to get a more representative sample, and more transparency so we can see what pollsters are cooking up. Fine, except that even in our meta-sample the pollsters were out, so the larger sample didn’t help, and who is going to judge the quality of the calculations, seeing as none of them can get them right in the first place?
Utting claims the Australian Bureau of Statistics gets its polling right. One wonders how he knows this. They might have larger samples, but this just reduces sampling error, and makes it possible to do more fine-grained dissections, which may be just as wobbly as what we’ve seen in the last weeks.
It appears that the political parties didn’t do much better themselves. Labor used Galaxy to do its tracking polling, so it is not an effective check, but the Liberals used Crosby Textor. You could tell from the chatter coming out of the Coalition, combined with the seats Scott Morrison chose to campaign in, that C|T had no idea they were sitting on 10 per cent swings in central and northern Queensland, for example.
So, back to the question: what went wrong?
There are a number of candidates. It is unlikely to be anything to do with young people and mobile phones, as some have suggested. The four pollsters we looked at use different methods. Ipsos and Newspoll use telephones, but this includes mobile phones. Essential is online only – no mobile problem there, and Roy Morgan is face to face. If there were a mobile phone problem it would only show up in half of them.
There is a problem however with all polling being essentially opt-in, unlike elections which are compulsory. When I ring someone, or approach them face-to-face, they can choose not to talk to me. Internet panels are incentivised, and again, are restricted to people who want to be involved.
This means that no polling sample is going to be representative of the population at large. And it is less representative of some groups than others – young men most of all, followed by young women – because they are the least likely to want to talk. Even with huge resources it is very difficult to get enough of them.
Most pollsters explicitly “fix” these and other sampling problems by “weighting”, except that weighting doesn’t fix anything. By inflating the size of an insufficient sub-sample you don’t cure the inaccuracy inherent in its small size, you amplify it.
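One way to quantify that cost is the “effective sample size” statisticians use to measure what weighting does to a survey: the more unequal the weights, the fewer effective interviews the sample contains, even though the headline n is unchanged. A hypothetical sketch (the numbers are invented, not any pollster’s):

```python
def effective_sample_size(weights):
    """Kish effective sample size: the more unequal the weights,
    the less information the weighted sample actually carries."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Hypothetical poll of 1,000: 100 young men weighted up 2x to stand in for 200,
# and the other 900 respondents weighted down so the totals still sum to 1,000.
weights = [2.0] * 100 + [800 / 900] * 900
print(sum(weights))                           # 1000.0 responses on paper...
print(round(effective_sample_size(weights)))  # ...but only 900 effective interviews
```

And that shrinkage only accounts for the variance cost; it says nothing about whether the 100 young men who did answer resemble the ones who refused.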
Not only are polls opt-in, but generally part of their sample is invented.
There is also an incentive for them to adjust their figures so they hunt in a pack. While the individual commercial gain from being an outlier and right is considerable, so is the damage if you are wrong. If they pack together, they protect each other. This is what happens with oligopolies.
These may explain the issue, but there are other candidates. One is the allocation of the preferences of minor parties, which is crucial to arriving at a two-party preferred vote. Newspoll used to allocate preferences on the basis of the split at the last election. Now they ask voters who they intend to preference.
But that doesn’t solve the problem. Preferences count for minor parties, but minor parties generally don’t contest every seat. I might say I’m going to vote One Nation, but if they are not running in my seat I may do something other than vote for the party of my assumed second preference.
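To see how much the assumed preference flows matter, here is the basic two-party preferred calculation with invented primary votes and flow percentages (illustrative only, not poll data):

```python
# Invented primary votes and preference flows, for illustration only
primaries = {"Labor": 0.34, "Coalition": 0.39, "Greens": 0.10,
             "One Nation": 0.04, "Others": 0.13}
flow_to_labor = {"Greens": 0.82, "One Nation": 0.35, "Others": 0.50}

def labor_tpp(primaries, flows):
    """Labor's two-party preferred: primary vote plus assumed minor-party flows."""
    return primaries["Labor"] + sum(primaries[p] * f for p, f in flows.items())

print(f"{labor_tpp(primaries, flow_to_labor) * 100:.1f}%")   # 50.1%

# The same primaries with a different assumed One Nation split move the headline figure
flow_to_labor["One Nation"] = 0.65
print(f"{labor_tpp(primaries, flow_to_labor) * 100:.1f}%")   # 51.3%
```

A modest error in one small party’s assumed flow shifts the headline two-party figure by more than a point, which is the scale of error we are trying to explain.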
Another problem is personal votes.
If you don’t introduce candidate names into a poll you will get a significantly different read on what is happening. However, the tracking polling used by the parties does use candidate names, and as noted above, didn’t seem to get a markedly different result to the commercial pollsters.
This also favours governments in actual polls because of the sitting candidates’ incumbency advantage.
Some commentators were prepared to call the election based on a uniform swing. This is incompetence (yes, I’m talking about you Waleed Aly). Swings are never uniform, and while this doesn’t affect the accuracy of polls over the entire country, it means that significant polling margins can translate into very few seats.
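A toy illustration of the point: the same aggregate swing can flip very different numbers of seats depending on where it lands. The margins below are invented, not real seats:

```python
# Invented two-party margins (Labor's lead in points) across ten hypothetical seats
margins = [-3, -1.5, -0.5, 1, 2, 6, 8, 10, 12, 15]
swing_total = 20.0  # the same aggregate movement in both scenarios

# Scenario A: a uniform swing of +2 points to Labor in every seat
uniform = [m + swing_total / len(margins) for m in margins]
# Scenario B: the whole swing piles up in one already-safe Labor seat
lopsided = margins[:-1] + [margins[-1] + swing_total]

print(sum(m > 0 for m in uniform))   # 9 Labor seats
print(sum(m > 0 for m in lopsided))  # 7 Labor seats
```

Identical national numbers, two-seat difference: which is why a national two-party figure, even a correct one, doesn’t call an election.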
Using the available quants, as well as the leaks from the major parties, I had Labor on 73 seats, and the government on the same number, in a prediction made on Thursday May 16. I was one of a group of nine, including some former very prominent political professionals, who had lunch and put $50 into the middle of the table to back our judgment. We had no special knowledge, just what we could read in the papers. Seven out of nine predicted a hung parliament, and while our judgment favoured Labor it was only by 73.44 seats to 71.11.
So a careful read of the available information could get you close to the actual result, rather than Labor’s overblown expectations, despite the results of the polls.
Which means the commentariat can’t be allowed to blame the polls for their shortcomings – they had enough information.
Their problem is they are innumerate and have a naïve understanding of what polls are, what they mean, and what they can predict.
One factor hasn’t been mentioned in the commentary that I have read, and that is the role of the uncommitted voter.
When you get poll figures the pollster generally excludes the undecided voters. This is because there is no way of knowing how they will vote, so pollsters ignore them, or at most note their percentage.
This has the effect of distributing them in proportion to how the rest of the sample is voting.
But what if they break heavily for one side or the other?
Say you have 90 per cent of a sample split 50/50 with 10 per cent undecided. If that 10 per cent splits 60/40, the end result will be 51/49. A 70/30 split will make the end result 52/48.
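That worked example, in a form where the split can be varied:

```python
def final_split(decided_a, decided_b, undecided, pct_to_a):
    """Final figures when the undecided break pct_to_a : (100 - pct_to_a)."""
    a = decided_a + undecided * pct_to_a / 100
    b = decided_b + undecided * (100 - pct_to_a) / 100
    return a, b

# 90 per cent of the sample split 45/45, with 10 per cent undecided
print(final_split(45, 45, 10, 60))  # (51.0, 49.0)
print(final_split(45, 45, 10, 70))  # (52.0, 48.0)
```

So a one-sided break among a tenth of the electorate is enough, on its own, to account for the gap between a published 51 and an election-day 48.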
It is also difficult to know how many uncommitted voters there really are. A declared undecided voter may just be telling you the truth, while there may be others who express a preference, but change their minds before the election.
Add to that the tall poppy syndrome, where voters will adjust voting intention when they think a party is going to win too easily. It is quite possible that what we are really seeing here is a late swing, undetectable by any poll, apart from the one on the day.
Certainly, early on, climate change was a bipartisan issue that reached into the Liberal voting bloc, but on the basis of our as-yet-unpublished exit poll it had retreated, and the economy was a more important issue with right-wing voters.
Climate change favoured Labor, and the economy the Libs. The issue top-of-mind will have an effect on how you vote.
It is much more likely therefore that undecided voters, and a late swing to the government, rather than problems with methodologies, explain the failure of the polls to predict the result on the day. As far as we know they may have reported voting intentions, as distinct from voting actuals, entirely accurately.
And then there was Labor’s arrogance and indolence. Bill Shorten pulled up stumps early and went for a beer on Friday, while Morrison was still working hard, just as people were making their final decision. These visual images can have a real impact on voting intentions.
After all, this was an election with large votes for independents and minor parties because voters were unenthusiastic about either side (we know the polls were accurate enough on this last question) so it wouldn’t have taken much to shift a voter’s intention.
The future is uncertain, and no matter how many polls you take, it will still be uncertain. Blaming pollsters for failing to be fortunetellers says more about the state of political commentary in Australia than it does polling.
If we reported elections on the issues and policies, rather than as though they were horse races, polling would be less relevant. But that would be too hard for journalists and politicians who love the false security of round numbers.