
Books

Superforecasting could spark a revolution in politics

How Philip Tetlock has transformed the science of prediction

3 October 2015

9:00 AM

Superforecasting: The Art and Science of Prediction, by Philip Tetlock and Dan Gardner

Random House Books, pp. 340, £14.99, ISBN 9781847947147

Forecasts have been fundamental to mankind’s journey from a small tribe on the African savannah to a species that can sling objects across the solar system with extreme precision. In physics we have developed models that are extremely accurate across vastly different scales, from the sub-atomic to the visible universe. In politics we have bumbled along, making the same sorts of errors repeatedly.

Until the 20th century, medicine was more like politics than physics. Its forecasts were often bogus and its record grim. In the 1920s, statisticians invaded medicine and devised randomised controlled trials. Doctors, hating the challenge to their prestige, resisted but lost. Evidence-based medicine became routine and saved millions of lives. A similar battle has begun in politics. The result could be even more dramatic.

In 1984, Philip Tetlock, a political scientist, did something new: he considered how to assess the accuracy of political forecasts scientifically. In politics it is usually impossible to make progress because forecasts are so vague as to be useless. People do not do what is normal in physics, which is to use precise measurements, so nobody can later make a scientific judgment about whether, say, George Osborne or Ed Balls was ‘right’.

Tetlock established a precise measurement system to track the accuracy of experts’ political forecasts. After 20 years he published the results. On many questions the average expert was no more accurate than the proverbial dart-throwing chimp. Few could beat simple rules such as ‘always predict no change’.


Tetlock also found that a small fraction did significantly better than average. Why? The worst forecasters were those with great self-confidence who stuck to their big ideas (‘hedgehogs’); they were often worse than the dart-throwing chimp. The most successful were cautious, humble, numerate and actively open-minded; they looked at many points of view and updated their predictions (‘foxes’). TV programmes recruit hedgehogs, so the more likely an expert was to appear on TV, the less accurate he was. Tetlock dug further: how much could training improve performance?

In the aftermath of disastrous intelligence forecasts about Iraq’s WMD, an obscure American intelligence agency explored Tetlock’s ideas. It created an online tournament in which thousands of volunteers would make many predictions. It framed specific questions with specific timescales, required forecasts on numerical probability scales, and created a robust statistical scoring system. Tetlock created a team, the Good Judgment Project (GJP), to compete in the tournament.
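
For readers who want the mechanics: the scoring rule Tetlock describes is the Brier score, the squared error between a stated probability and what actually happened. A minimal sketch in Python of its binary form (the example track record is invented):

```python
def brier_score(forecast_prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and the outcome
    (1 if the event occurred, 0 if not). A perfect forecast scores
    0.0; an always-50/50 forecaster scores 0.25; higher is worse."""
    return (forecast_prob - outcome) ** 2

# A track record: (probability given, what actually happened).
record = [(0.9, 1), (0.7, 1), (0.2, 0), (0.6, 0)]
mean = sum(brier_score(p, o) for p, o in record) / len(record)
print(f"mean Brier score: {mean:.3f}")  # 0.125 here; lower is better
```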

The results? GJP beat the official control group by 60 per cent in year one and by 78 per cent in year two, and beat all its competitors so easily that the tournament was shut down early. How did they do it? GJP recruited a team of hundreds, aggregated their forecasts, gave extra weight to the most successful forecasters, and applied a simple statistical rule (sketched below). A few hundred ordinary people and simple maths outperformed a bureaucracy costing tens of billions.
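
That ‘simple statistical rule’ is described in GJP’s published work as ‘extremizing’: push the pooled probability away from 0.5, on the logic that each forecaster holds only part of the available evidence, so the group should be more confident than any one member. A minimal sketch, with the weights and the exponent chosen purely for illustration rather than taken from GJP:

```python
def aggregate(probs, weights, a=2.0):
    """Pool individual probability forecasts for a binary question.

    Step 1: weighted average, giving more weight to forecasters
    with better track records.
    Step 2: 'extremize' the pooled probability, pushing it away
    from 0.5. The exponent a here is illustrative, not GJP's value.
    """
    p = sum(p_i * w_i for p_i, w_i in zip(probs, weights)) / sum(weights)
    return p ** a / (p ** a + (1 - p) ** a)

# Three forecasters; the first has the best record, so the most weight.
print(aggregate([0.8, 0.7, 0.6], weights=[3, 1, 1]))  # ≈ 0.89
```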

Tetlock also found ‘superforecasters’. These individuals outperformed others by 60 per cent and, despite having no subject-specific knowledge, comfortably beat the average professional intelligence analyst working with classified data (the size of the difference is secret, but it was significant).

Superforecasting explores the nature of these unusual individuals. Crucially, Tetlock has shown that training programmes can yield big improvements. Even a mere 60-minute tutorial on some basics of statistics improves performance by 10 per cent. The cost-benefit ratio of training in forecasting is huge.

It would be natural to assume that this work must be the focus of intense thought and funding in Whitehall. Wrong. Whitehall has ignored the entire research programme. It suffers repeated, predictable failure while seeing no alternative to its antiquated methods, much as doctors in the 1950s resisted the randomised controlled trials that threatened their prestige.

This may change. Early adopters could use Tetlock’s techniques to improve performance. Success sparks mimicry. Everybody reading this could do one simple thing: ask their MP whether they have done Tetlock’s training programme. A website could track candidates’ answers before the next election. News programmes could require quantifiable predictions from their pundits and record their accuracy.

We now expect every medicine to be tested before it is used. We ought to expect that everybody who aspires to high office is trained to understand why they are so likely to make mistakes forecasting complex events. The cost is tiny. The potential benefits run to trillions of pounds and millions of lives. Politics is harder than physics but Tetlock has shown that it doesn’t have to be like astrology.




