

The Covid Inquiry is exposing lockdown’s dodgy models

7 November 2023

8:18 PM


Did we lock down on a false premise? Yesterday was Ben Warner’s turn at the Covid Inquiry. He was one of the ‘tech bros’ brought in by Dominic Cummings to advise No. 10 on data, and he was present at many of the early Sage – and other – meetings where the government’s established mitigation (herd immunity) plan was switched to the suppression (lockdown) strategy.

In Dominic Cummings’ evidence to the inquiry last week, he said that models didn’t play a big part in moving the government towards lockdown. Part of the written inquiry evidence supplied by his data man, Ben Warner, supports that too. The Inquiry KC was keen to highlight this early on and flashed it up on screen: ‘It is not necessary to perform large scale simulations of an epidemic to understand the main effects of a mitigation versus a suppression strategy. Simple calculations allow for reasonable approximations of the outcome.’ Those ‘reasonable approximations of the outcome’ were that the NHS was going to be overwhelmed. Unless the harshest measures were imposed. Unless we locked down.

But these simple calculations and extrapolations are models too, and they should be examined as such. It also cannot be denied that the models were then used as a comms tool for compliance: a not-so-subtle ‘nudge’ for Brits to do the right thing. They nudged advisers too.

Look further into the written statements – not just the bits highlighted by the KC – and Warner repeatedly mentions models, including those produced by Neil Ferguson. Indeed, on the importance of models to the eventual decision to lock down, his statement reads: ‘I am not aware of any meeting where the Prime Minister was asked to choose between a mitigation and a suppression strategy. But if that meeting had happened before the 15 March, I am confident that the right information and evidence was not put before him to inform the consequences of this decision, simply because Neil’s work did not exist before then.’ For these reasons, the models are certainly worth examining.

Models drive two fundamental points the inquiry is taking as established truth: lockdown works and lockdown was the only thing that could stop the NHS becoming overwhelmed. Almost every question the inquiry asks of any witness centres on these assumptions. In large part because many of those advising the government at the time also hold this view. But does the evidence support them?

The first – that lockdown works – is something the inquiry should seek to answer definitively rather than assume. Indeed Lord Stevens, the former NHS chief executive, said as much in his evidence last week. The second point – that lockdown was the only option – seems a flawed assumption too. There’s a growing body of evidence that Covid cases peaked before lockdowns were implemented – and not just in the UK. Google mobility data shows people taking their own precautions pre-lockdown too. But even more compelling is Sweden. Look at their first-wave deaths compared with ours. Different peaks but near-identical trajectories.

To understand how these beliefs arose, and why they may be wrong, we need to look at the flaws within modelling as a discipline, and at the specific type of models that were used during the pandemic. The code running them is described as ‘spaghetti’ code – built up and added to by different people over many years until it becomes an unruly mess that is very difficult to decipher and therefore to scrutinise. They are a black box. The people using the results – ministers and advisers – simply take what comes out; they don’t understand what goes on inside. And the models (or at least the prominent ones) did not react to live data.


But the flaw that undermined them more than any other is their total blindness to human behaviour change. They are built on a fundamental assumption: only restrictions will bring down infections and the R number. It’s hardcoded. It was ministers’ lack of understanding of this issue – perhaps more than anything else – that tipped the scales towards lockdown. It wasn’t until Omicron came along, 18 months after the first lockdown, that this became publicly apparent – giving us the perfect case study (as The Spectator pointed out at the time).
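
To make that assumption concrete, here is a minimal sketch of the hardcoded logic: a toy SIR-style simulation with invented parameters, not any modelling group’s actual code, in which the transmission rate can only fall when a restriction switch is flipped. Voluntary behaviour change simply does not exist in it.

    # Toy SIR-style sketch with invented parameters. The only thing that can
    # lower the transmission rate is the policy switch; people choosing to
    # change their own behaviour is not something the model allows for.
    def run_epidemic(days=200, population=1_000_000, lockdown_day=None):
        beta_open = 0.30       # assumed transmission rate with no restrictions
        beta_lockdown = 0.10   # assumed transmission rate under lockdown
        gamma = 0.10           # assumed recovery rate (roughly a ten-day infectious period)
        s, i, r = population - 100, 100, 0
        peak = i
        for day in range(days):
            locked = lockdown_day is not None and day >= lockdown_day
            beta = beta_lockdown if locked else beta_open
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            peak = max(peak, i)
        return round(peak)

    print(run_epidemic())                  # 'do nothing': infections climb to a very large peak
    print(run_epidemic(lockdown_day=40))   # lockdown on day 40: the only lever that can lower it

Built this way, the model can only ever conclude that restrictions are what bring the peak down, because nothing else in it is capable of doing so.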

‘Deaths could hit 6,000 a day’, reported the newspapers on 17 December 2021. A day later documents for the 99th meeting of Sage were released which said that, without restrictions over and above ‘Plan B’, deaths would range from 600 to 6,000 a day. A summary of Sage advice, prepared for the Cabinet, gave three models of what could happen next:

  1. Do nothing (ie, stick with ‘Plan B’) and face ‘a minimum peak’ of 3,000 hospitalisations a day and 600 to 6,000 deaths a day.
  2. Implement ‘Stage 2’ restrictions (household bubbles, etc) and cut daily deaths to a lower range: 500 to 3,000 a day.
  3. Implement ‘Stage 1’ restrictions (stay-at-home mandates) and cut deaths even further: to a range of 200 to 2,000 a day.

After a long and fractious cabinet debate, the decision was to do nothing and wait for more data. ‘Government ignores scientists’ advice,’ fumed the BMJ. But the decision not to act meant that the quality of Sage advice could, for the first time, be tested, its ‘scenarios’ compared with what actually happened.

The results were stark. On hospitalisations, beds occupied and deaths, the models had vastly exaggerated what went on to happen. Human behaviour had been completely ignored even though by then the modellers had seen it first hand, not least in the ‘pingdemic’ of the summer that had just gone. Graham Medley – chair of SPI-M, the modelling group – later described it to a parliamentary committee as: ‘By far the most effective three days in reduction of transmission that we’ve seen throughout the whole epidemic. Much more effective than any of the lockdowns’.

Whilst it took Omicron for the government and the public to realise this flaw, the modellers had known about it for years. Professor Neil Ferguson, when questioned on the issue at the inquiry, made that clear himself. Asked about Sage’s default assumption that the only drivers of behaviour change were government restrictions, Ferguson mentioned an essay he had written in Nature in 2007. That essay – penned 13 years before Covid struck – contains many of the criticisms of the models discussed yesterday. ‘Today’s models largely ignore how epidemics change individual behaviour’, reads the subhead. The key quote from the piece is instructive: ‘Yet fundamental limitations remain in how well they [models] capture a key social parameter: human behaviour.’ He went on: ‘Most glaringly, the effects of behavioural responses to epidemics are given short shrift.’ The whole essay is worth reading. It sums up the glaring omission of self-motivated behaviour change. But it also raises a worrying question: why did the government place so much weight on the models, despite this flaw being known 13 years before?

Even more baffling was why this affected the UK response specifically. Sweden managed to ignore models almost entirely, having been burnt by them in the past. The Danes even managed to reflect behaviour change in their models, resulting in much more accurate outputs: Denmark’s Expert Group for Mathematical Modelling produced scenarios for Omicron hospitalisations that mapped well to reality.

Dr Camilla Holten-Møller – who chaired the Danish modelling group – gave evidence to the same select committee hearing as Medley. She explained that they were careful to factor behaviour change into their models: when cases reached a certain level in a local area, people would mix less, so the model was designed to reflect that.
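
As a rough illustration of that design choice (made-up numbers, not the Danish group’s actual model), the sketch below scales the contact rate down whenever prevalence in the simulation crosses a threshold, mimicking people choosing to mix less as cases rise.

    # Sketch of prevalence-dependent behaviour change, with invented numbers.
    # Once more than 1 per cent of the population is infected, contacts are
    # assumed to halve, whether or not any restriction is in force.
    def run_with_behaviour(days=200, population=1_000_000,
                           threshold=0.01, voluntary_cut=0.5):
        beta_baseline, gamma = 0.30, 0.10
        s, i, r = population - 100, 100, 0
        peak = i
        for _ in range(days):
            prevalence = i / population
            beta = beta_baseline * (voluntary_cut if prevalence > threshold else 1.0)
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            peak = max(peak, i)
        return round(peak)

    print(run_with_behaviour())   # the peak sits far below the fixed-assumption 'do nothing' run

The numbers are arbitrary; the structure is the point. Once behaviour is allowed to respond to prevalence, a ‘do nothing’ scenario no longer produces the runaway peaks that the fixed-assumption version guarantees.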

Beyond behaviour change there are three often quoted defences of modelling:

  1. The models were scenarios not forecasts.
  2. The media unfairly criticises the ‘worst case scenario’ models.
  3. Sage could only model what the government asked it to. Hence no consideration of side effects or damage to education, the economy and health.

The first of these arguments has been dismissed by some of the modellers themselves. At a Royal Society conference chaired by Sir Patrick Vallance in June last year, Professor Steven Riley of the UKHSA made the point: ‘We can’t redefine commonly used words. If we have a chart that’s got a time axis and the time axis goes past the current day and there’s a line going off to that side of the chart then we are making forecasts. Whether we like it or not.’

The second – that we in the press paid too much attention to what were worst case scenarios – was picked apart by Professor Ferguson. He told the inquiry: ‘I was always uncomfortable with labelling what I felt was our central estimate as being the reasonable worst case. Because calling it the reasonable worst case, even if in theory policymakers are meant to be planning to it, makes it sound like it’s an unlikely eventuality, whilst in my view it was the most likely eventuality if nothing more was done.’ So: worst case scenarios were not seen by all as unlikely.

The third – that the government asked the wrong questions – is harder to answer. Perhaps Baroness Hallett will get to the bottom of that in the years to come. But what’s clear now is that the state has learnt nothing about the perils of relying on modelling. Look at the modelling paper used to justify Rishi Sunak’s smoking ban. That model spits out four scenarios. Each of them shows a reduction in smoking, with three of the four seeing smoking completely eradicated by 2050. A brilliantly successful policy then? But then look at the modelling assumptions: ‘In all scenarios, the model assumes smoking instigation rates reduce year-on-year to reflect ongoing increases in the age of sale.’ They spell it out in black and white. The key assumption for the model – the input to the black box – is that the policy works. There is no data you could put into that model that would even allow you to consider the possibility of no reduction in smoking and a massively illiberal policy failure.
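
To see why that guarantees a flattering answer, consider a toy projection with invented rates (illustrative figures, not the paper’s own): once initiation is forced to fall every year, the projected prevalence drifts downwards whether or not the policy itself achieves anything.

    # Toy prevalence projection with invented rates. The 'policy works'
    # assumption is baked in as a year-on-year fall in the initiation rate,
    # so with these starting values the projection can only drift downwards.
    def project_prevalence(prevalence=0.13, initiation=0.004, quit_rate=0.04,
                           initiation_decline=0.95, years=27):
        for _ in range(years):
            initiation *= initiation_decline   # the hardcoded assumption
            prevalence = prevalence * (1 - quit_rate) + initiation * (1 - prevalence)
        return prevalence

    print(round(project_prevalence(), 3))   # ends below the 0.13 starting point

The figures are made up; what matters is that the downward trend is an input to the model, not a finding of it.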

The smoking models are trivial compared with those produced during the pandemic years – but they highlight the need for scrutiny. As more and more evidence from the inquiry comes out, it’s clear how influential models were too, even if just to confirm what seemed obvious from more basic data. It’s hard to blame advisers, ministers and scientists for decisions made early on: locking down while we figured things out – while the evidence was examined and built up – was a perfectly reasonable approach. But was that time used to scrutinise the models? To consider less harmful approaches? The inquiry should try to find out.
