Government scientists must be transparent about flawed Covid models
The strength of the economic recovery as Britain emerges from lockdown is a hotly contested subject among economists. Some believe there will be a massive surge in demand as consumers celebrate their freedom; others argue it will take time to claw back confidence.
Economic forecasts are subject to the same faults as any projections, as the pandemic has shown, and they differ for two main reasons. The first is assumptions: the same model will give different outcomes depending on what is assumed about key variables, for example how long it will take trade patterns with the EU to return to normal.
The second point of discrepancy is more fundamental: beliefs about how the economy actually works. Even with identical assumptions about the key variables, different models will give different results.
At the moment, those who think inflation is primarily a monetary phenomenon are projecting quite sharp upturns in inflation not just in the UK but across the West as a whole. Other economists place less weight on money as a cause of inflation and so see less of an increase.
Economists have appreciated this for a long time. A substantial amount of research effort has been devoted to comparing different models so that the differences between them can be better understood. The first systematic steps on this were taken as long ago as the 1970s.
In just the same way that economic forecasts differ, the pandemic has revealed that different groups of epidemiological modellers also produce varying forecasts about how many cases of Covid there will be, the rate of hospitalisations, and the number of deaths.
What is missing in the countless scientific models used to justify decision making during the pandemic is transparency over how heavily ministers are relying on any particular model.
For example, at the end of October 2020 Sir Patrick Vallance, the government’s chief scientific adviser, predicted there would be 4,000 deaths a day by the end of the year. At the same time, other modelling groups were projecting daily numbers of between 1,000 and 2,000.
As it happens, they were all too pessimistic. Even though the new, more virulent Kent variant had taken hold, which the forecasts made in October may well not have taken into account, the highest daily figure recorded by the end of the year came on New Year’s Eve itself, at some 750 deaths.
This is by no means the only example of substantially inaccurate forecasts made by epidemiologists during the pandemic.
But the point here is not the forecasting errors made by epidemiologists. It is that their forecasts differ for exactly the same reasons as economic ones. Different groups both use different models and make different assumptions, such as about the effectiveness of vaccines.
Epidemiology is not like Newtonian physics applied to straightforward everyday problems. With the latter, all physicists will give the same answer to a question. The model is agreed upon, and has been subjected to stringent empirical validation. This is not the case with epidemiological models.
There is an important policy implication which follows from this. When politicians say they will “follow the science”, the question is: which science? Which model, and which set of assumptions, will be followed?
Economists have already set the example. We need a proper audit of the epidemiological models. Their black boxes need to be opened so that the “science” behind which they have sheltered can be made public.