Economic forecasts have become a political hot potato. The Office for Budget Responsibility’s (OBR) predictions, presented as part of the chancellor’s Autumn Statement, have put the government under pressure. The OBR has revised down its forecast for GDP growth over the next four years by 1.4 percentage points.
The real controversy is that its gloomy projections for GDP and government finances have been put down to Brexit. In the simple phrase of the OBR: “Any likely Brexit outcome would lead to lower potential output”. Lower output leads to lower tax receipts, and worse government finances.
To be fair, the OBR does say that “in current circumstances the uncertainty around the forecasts is even greater than it would be in normal times”. But just how great is this uncertainty?
Studies are published from time to time about the accuracy of economic forecasts. The best set of records is kept in America, though less systematic evidence for the UK shows that the track records are very similar in the two countries.
The Survey of Professional Forecasters (SPF) collects the forecasts on variables such as GDP growth and inflation from a wide range of forecasters. Its database goes back almost 50 years to 1968. Just one quarter ahead, the predictions are on average completely accurate. “One quarter ahead” means the next three months, so it would currently refer to the period January to March 2017.
This average accuracy conceals errors in most forecasts for any particular quarter; the errors simply cancel out over time. For example, the quarter from July to September 2008 marked the onset of the major recession of the financial crisis. At an annual rate, GDP fell by 1.9 per cent compared to the previous quarter. But the SPF predictions made in the April to June period for July to September were for growth of 0.7 per cent.
The SPF predictions account for only 25 per cent of the variability around the average. When we go four quarters ahead – just one year – the predictions are even worse. Negative growth, for example, has never been predicted, even though there have been 26 quarters of negative growth since 1968.
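The “share of variability” a set of forecasts accounts for is simply the R-squared of the predictions against the outturns. A minimal sketch, using invented growth figures purely for illustration (the real SPF records would be substituted):

```python
import numpy as np

# Hypothetical actual GDP growth rates (annualised, per cent) and the
# one-quarter-ahead forecasts made for those same quarters.
actual = np.array([3.1, 2.4, -1.9, 0.8, 2.6, 1.5, -0.4, 2.2])
forecast = np.array([2.7, 2.5, 0.7, 1.9, 2.3, 2.0, 1.8, 2.4])

# R-squared: one minus the ratio of squared forecast errors to the
# squared deviations of the actual data around its own average.
ss_err = np.sum((actual - forecast) ** 2)
ss_tot = np.sum((actual - np.mean(actual)) ** 2)
r_squared = 1 - ss_err / ss_tot

print(f"Forecasts explain {100 * r_squared:.0f}% of the variability")
```

Note that the forecasts can be right on average, as the SPF's are, while still explaining only a modest fraction of the quarter-to-quarter variation.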
The track record, which has not got any better over time, shows that in relatively calm times, forecasts just one year ahead have a reasonable degree of accuracy. But when major changes are taking place, just when they are really needed, they have none.
The OBR cannot be blamed for producing predictions four years ahead when the track record of the forecasting community shows them to be of no value. That is what George Osborne mandated it to do when he set the independent body up in 2010. But four years ahead, almost any set of predictions is just as good – or bad – as another.
It would be much better to abolish the OBR and restore responsibility to the Treasury and, ultimately, to the politicians. If they get it wrong and are too optimistic, we can at least kick them out.
As published in City AM on Wednesday 30th November
Asked what “forward guidance” meant, Bank of England governor Mark Carney answered smoothly: “The thing about forward guidance is that it is guidance that is forward. Which is not to say it is meant to be in any way accurate. Indeed, it would be surprising if it were. The most important thing about forward guidance is that the underlying economic determinants should be correct, not that it should be helpful.” Cue collective bafflement of the assembled MPs!
But the statement actually tells us a great deal about how mainstream macroeconomists believe the economy operates.
“Forward guidance” has been the key element in policy-making by the Bank since Carney himself introduced it in the summer of 2013. It is meant to give guidance about the economic circumstances in which the Monetary Policy Committee (MPC) will start to raise interest rates.
The first attempt was certainly not in any way accurate. The governor stated that the MPC would not consider raising interest rates until unemployment fell to 7 per cent, which he predicted would take about three years. It took less than six months. By January 2014, the rate of unemployment had fallen to 6.9 per cent.
This just seems to have been a piece of poor analysis by the Bank. But it does not detract from the more fundamental reason economists think that forward guidance will not usually turn out to be accurate.
The forward guidance is deliberately based on the assumption that behaviour will not change. Yet the mere fact that the central bank makes a pronouncement about the future might induce people to alter their behaviour. And if behaviour changes, the forward guidance might very well prove to be inaccurate.
It is actually a sensible addition to the Bank’s armoury of policy levers. Properly managed, it might enable the Bank to nudge behaviour in directions which it believes will give a better outcome than would otherwise be the case.
The final part of Carney’s statement appears the most gnomic: “The most important thing about forward guidance is that the underlying economic determinants should be correct, not that it should be helpful”.
The governor meant that forward guidance should be given on the basis of a model of the economy which is correct.
In each of the various macroeconomic models which exist, the assumption is made that consumers and firms form expectations about the future as if that particular model, and no-one else’s, were correct. Yet despite many years of intensive research, macroeconomists still do not agree on what constitutes the correct model of how the economy works.
There is a challenging academic literature on the theory of how people go about learning the correct model of the economy. But in practice economists are unable to apply it to themselves. We might reasonably conclude that it is the theory which is wrong. Forward guidance is just the latest technocratic delusion foisted on us by mainstream macroeconomics.
As published in City AM on Wednesday 23rd November
So the pollsters got it wrong again. After the general election last year and then Brexit, it is perhaps not surprising. What is surprising is just how wrong they were. The real problem is the enormous confidence with which they pronounced that Clinton would win.
The Princeton Election Consortium was probably top of the class, stating that Clinton had a 98 to 99 per cent chance of winning. Even the top Bayesian statistician, Nate Silver, who shot to fame by calling all 50 states correctly in 2012, gave Hillary a 71.4 per cent probability of victory.
Economists have been suspicious of opinions elicited by surveys for a long time. A fundamental concept in economic theory is that of “revealed preference”. The idea goes back much further than Adam Smith, the 18th century founder of modern economics. In the Bible, we find the phrase “by their deeds, ye shall know them”. In other words, it is not what people say that matters, it is what they do. If someone says repeatedly that he prefers Pepsi to Coke, but never buys Pepsi and always buys Coke, we can reasonably infer that, despite his words, he does in fact prefer Coke. His actions reveal his preference.
Readers above a certain age will recall the 1980s. Then, pollster after pollster reported that public opinion was firmly in favour of both more public spending, and higher taxes to pay for it. Yet in election after election, voters just as firmly returned Mrs Thatcher and the Conservatives to power. They revealed a preference for lower spending and lower taxes.
A great deal of environmental policy is guided by hypothetical questions in surveys of what people would be willing to pay to, say, preserve a species of newt or prevent an oil spillage. This approach even has its own name, that of “contingent valuation”. Peter Diamond is an MIT economist who has won the Nobel Prize. Jerry Hausman, also of MIT, might very well get one. Referring to a paper they co-authored in the early 1990s, in 2012 Hausman wrote “at the time Peter’s view was that contingent valuation was hopeless. I was merely dubious. But 20 years later, after millions of dollars of government funded research, I have concluded that Peter was correct”.
A fundamental problem is that people overstate how much they would be willing to pay in such surveys, compared to how much they will pay when they really have to – just like the British electorate in the 1980s.
A great deal of expertise has been built up over the years in how to put together carefully constructed surveys to find out what voters and consumers think. But their useful life is at an end. Instead, social media conversations have the potential to discover what people really do prefer. For all their chaotic and often incoherent nature, these unstructured conversations can reveal what people really are thinking and doing. Economists, with their concept of revealed preference, need to make common cause with computer scientists.
As published in City AM on Wednesday 15th November
Image: Trump with supporters by Gage Skidmore licensed under CC BY 2.0
A current headache for the government is the performance of the NHS, and whether it is running out of money. This was making the front pages until the judges’ decision on Brexit pushed it off.
Successive governments have discovered that the finances of the health service are a potentially bottomless pit. A key policy issue has been how to make the NHS more productive, to get it to deliver a better service for a given amount of money.
A paper in the latest American Economic Review provides strong evidence that extending patient choice is an effective way of getting better outcomes. In 2006, the Blair government mandated that patients in the English NHS had to be offered a choice of five hospitals when referred by their physician to a hospital for treatment. Prior to this reform, there was no requirement that patients be offered choice.
Economic theory regards choice as a Good Thing, but also recognises that, in complex areas like health, things might not be completely straightforward. For example, information on quality might be imperfect. Very difficult cases might be sent disproportionately to one of the very best surgeons, who, because of this, has a relatively low success rate. Understanding technical information might itself be difficult.
Even so, the authors show that the introduction of choice had unequivocally positive results. Patients became more responsive to clinical quality in deciding where to go. In turn, hospitals responded to this demand by improving the overall quality of the service. There was a small but very definite reduction in mortality. And, in the dry language of economics, there was a “substantial increase in patient welfare”.
The authors, Martin Gaynor and colleagues, make appropriate qualifications about the accuracy of their calculations, but they work out that the monetary value of the improvements in service to each patient in their sample was $6,226. The average value of each of the small number of lives saved was $300,900.
There were fears prior to the reforms that only the better off would benefit. On the contrary, those who were either more severely ill or from low-income areas benefited the most.
The importance of this evidence goes considerably beyond its immediate sphere of a single area of elective surgery in the health system. It has become an article of faith among the liberal, educated elite that ordinary people lack the ability to process information properly when making decisions about complex issues.
Whether on Brexit, on making choices about hospitals, or choices about schools for their children, the broad masses are deemed too stupid to understand. It follows that choice is bad for them and, instead, they should simply do what their so-called betters decide for them. But even in a complex area like elective surgery, given the opportunity, people can make good decisions and improve their lives.
As published in City AM on Wednesday 9th November
The scenes as the migrant camp was cleared in Calais once again provoked bitter divisions in British society. Metropolitan luvvies and liberals tweeted their virtue and called for no restrictions on immigration. In more traditional areas, there is active resentment at the possibility of even further inflows of foreigners.
When New Labour decided in the early 2000s to allow large-scale immigration from new EU member states, we were seriously invited to believe that an influx of immigrants on a scale unprecedented in our history would only have positive economic effects and would boost economic growth.
Economics certainly suggests that an increase in labour supply can increase growth in output. But in the so-called neoclassical growth theory of economics, even in the post-endogenous variety made notorious by Ed Balls in his previous incarnation, by far the most important source of sustained growth is innovation.
A truly modern economy does not rely on more and more capital and labour being fed into the machinery of production. That was the old Soviet model.
A modern economy relies instead on innovation. So there are at best limited benefits from importing more and more labour. True, immigrants can bring new skills, found innovative new businesses and, as they tend to be younger, they can slow down the ageing of society. But they, too, get older eventually, so this is not a long-term solution.
The anxieties about immigration are not couched in the arcane language of economic theory. But a fuller appreciation of the theory does enable us to understand why people worry so much. Underlying the theory are the assumptions that supply and demand balance in labour markets, and that the prices of the various kinds of labour – in other words, wages – are set at appropriate levels.
A recent paper in the Journal of Economic Perspectives by Christian Dustmann and Uta Schönberg of University College London shows that large-scale migration in fact creates serious imbalances and mismatches in labour markets.
They provide extensive evidence of what economists call “downgrading”. “Downgrading” occurs when the position of immigrants in the labour market is systematically lower than the position of natives with similar education and experience levels. The authors calculate that, in Germany, recent immigrants have wages which are on average 17.9 per cent below those received by natives with similar age and skill profiles. In the US, the figure is 15.5 per cent and in the UK 12.9 per cent.
Dustmann and Schönberg illustrate the disruption which mass migration can cause even more starkly. They calculate that while 69.7 per cent of immigrants in their samples can be classed as high skilled in terms of their education, only 24.6 per cent are in high skilled jobs. In their dry terminology, this means that “immigrant arrivals to the United Kingdom were a supply shock in the market for low-skilled workers”.
Mass migration has not simply meant more people competing for jobs. It has meant that people with higher skill levels are competing for your job. In other words, the people of Burnley and Bradford have been right all along, and the metropolitan liberal elite completely wrong.
As published in City AM on Tuesday 1st November
Image: Calais Jungle by malachybrowne is licensed under CC BY 2.0
The rise of artificial intelligence (AI) continues to generate concerns. The latest furore emerged at the start of this week. Researchers in the top-ranked University College London computer science department claimed that an AI algorithm correctly predicts the outcome of 79 per cent of cases heard at the European Court of Human Rights.
The current fear of AI, certainly among the arts graduates who write the editorials in the national quality press, is such that the study was firmly denounced: computers, it was argued, can never replace human knowledge and experience in these matters.
But in real life, algorithms are increasingly being used by law firms. The law is essentially a series of rules which have been developed over time. Many areas of civil law are enormously complex. Computers can sift through huge amounts of material and save a great deal of expensive human time.
The use of AI is proliferating rapidly in many diverse areas, from the early identification of diseases and the reduction of energy costs for data centres, to the decision on whether or not to grant a loan. An article in the latest issue of the august scientific journal Nature by Kate Crawford and Ryan Calo shows that investment in technologies that use AI in the United States has soared from some $400m in 2011 to well over $2bn last year. They quote IBM’s chief executive, Ginni Rometty, saying that she sees a $2 trillion opportunity in AI systems over the next decade.
Earlier this month, the White House published its report on the future of AI, based on four workshops with leading specialists held across America on how AI will change the way we live.
The US government recognises that this highly disruptive new technology creates new risks in many ways. But so, too, did the railways.
During the opening ceremony of the Liverpool to Manchester line in 1830, the engine Rocket hit and killed the MP and former cabinet minister William Huskisson. Serious suggestions were made that men with red flags should walk in front of trains, which would have defeated the whole point of the technology. But these risks did not stop railways from spreading across the world. In the same way, the White House report concludes that “AI holds the potential to be a major driver of economic growth and social progress”.
The report is packed full with both interesting information and perspectives on AI. But it is also a case study in why the United States continues to be by far the most innovative economy in the world. By and large, the Americans leave innovation to commercial companies. But where the national interest is concerned, the public sector works in symbiosis with the private. They plan a huge programme of basic research in AI, but with a firm eye to its practical application. Just as the US did with biotech, the aim is to develop a critical mass of money, skills and ideas funded by the government, which companies then build on. America is once again embracing the future.
As published in City AM on Wednesday 27th October
Tempers are fraying at the highest levels of economic policy-making in the UK.
Theresa May, at the Conservative Party conference, emphasised the “bad side effects” for savers of the Bank of England’s policy of near-zero interest rates, a position reinforced by former Tory leader William Hague in the Telegraph this week. A few days ago, Mark Carney, the governor of the Bank, hit back by saying he would not take instructions from politicians.
He went on to discuss inflation. The fall in sterling puts up the price of imports, and some economists predict that inflation will hit 3 per cent next year, up from its current (still low) level of 1 per cent. The Bank’s Monetary Policy Committee (MPC) has an official remit of maintaining inflation at 2 per cent. Carney stated that he would allow inflation to run “a bit” above this to protect growth and employment.
Just how much power does the MPC have to control inflation in such a precise way? At first glance, the work of the MPC has been brilliant. Some years the inflation rate has been higher than the 2 per cent target, as in 2011 when it was above 4 per cent, and some years lower, as last year when it was zero. But over the past 15 years, inflation in the UK has averaged 2 per cent a year, almost exactly in line with the target.
But inflation has also averaged very close to 2 per cent across the 28 member states of the European Union. And the United States registered the same average of around 2 per cent over the past 15 years.
The fact that inflation has averaged more or less the same rate across the major economies for well over a decade – only in Japan has it been substantially different – strongly suggests that there is a common factor at work. It could be the collective skills of central bankers, or it could be the effect of plain, old fashioned competition.
Competition in markets for goods and services means that it is hard to make price rises stick, and competition for labour means it is difficult to secure substantial wage increases. Competition in the global economy is the main reason inflation has both been low and very similar across the developed world.
The MPC controls the short-term rate of interest, and the theory is that a rate increase, say, reduces demand in the economy as a whole. This in turn has a stable and predictable impact on inflation, with lower demand leading to lower inflation. The trouble is that the facts do not fit the theory. Inflation dropped to zero after peaking in 2011, but unemployment has effectively halved and the economy has grown at a decent rate.
We do owe central bankers in the UK and the US a massive vote of thanks for preventing the crisis of the late 2000s from becoming a repeat of the Great Depression of the 1930s. But even they do not have magic powers. Inflation is low because of competition, not central bankers.
As published in City AM on Wednesday 18th October 2016
It had been an article of faith among economists and policy-makers that free trade is a Good Thing. Trade liberalisation was a key feature of the world economic order enforced by the United States after the Second World War. For decades, the trend of removing trade barriers led to world trade growing around twice as rapidly as world GDP.
All this now seems under threat. Calls for greater protectionism have been a feature of the US presidential race, emanating from both Donald Trump and the Democrat hopeful, ultimately defeated by Hillary Clinton, Bernie Sanders. In Britain, the Brexit vote puts the spotlight on trade policy for the first time in many years. Even the International Monetary Fund (IMF), the high priest of economic orthodoxy, has joined in the debate, with a warning at the end of September that free trade is increasingly seen as benefiting only the well off.
The economic principles which support free trade go back over 200 years, almost to the beginning of economic theory itself. Adam Smith, in his great book the Wealth of Nations, set out the basic arguments in the late eighteenth century. Trade enabled countries to take advantage of specialisation. If Portugal produced, say, wine more efficiently than Britain, and we made cloth better, by concentrating resources on what each were good at and trading the outputs, both benefited. Production of both commodities would be concentrated in the country which was most efficient at each.
David Ricardo, writing in the early nineteenth century, was not just an economist but a self-made multi-millionaire – at a time when a million pounds was a million pounds – and a Member of Parliament. He took Smith’s arguments further, and showed that trade was beneficial even if one country could literally make everything more efficiently than everyone else. Countries should specialise in what they were comparatively best at.
This, the so-called principle of comparative advantage, remains at the heart of the economic theory of trade to this day. Ricardo’s theory was tremendously influential. It was used by the Lancashire politicians John Bright and Richard Cobden to secure the repeal of the Corn Laws. These put high tariffs on the import of corn into the UK. Their abolition meant cheap food for the industrial working class, and was the most important social reform of the entire nineteenth century.
But all theories make assumptions. Ricardo made it clear that he was assuming that capital did not move across borders. It stayed put in its country of origin. In the early nineteenth century, this was a reasonable assumption to make: international capital flows did exist, but not on a massive scale.
If capital can flow freely, Ricardo’s theory needs to be heavily qualified. So, when the Berlin Wall fell, German companies built factories in Poland and the Czech Republic, destroying German jobs. In the long run, trade may still be beneficial, but there will be many losers along the way. Next year sees the two hundredth anniversary of Ricardo’s great book. The IMF has just rediscovered something which good economists knew all along.
As published in City AM on Wednesday 12th October 2016
The economic data on post-Brexit Britain is beginning to emerge. We discovered last month that employment in May to July grew by 174,000 compared to the previous three months. Last week, the Office for National Statistics published its estimate for the output of the service sector of the economy in July. This shows a 0.4 per cent rise on June, and growth of 2.9 per cent since July last year. Both are very good figures.
Official data, even for employment, is notoriously prone to subsequent revisions. Is there any harder evidence that the economy is prospering and that Project Fear, so prominent in the referendum campaign, was wrong?
The key to a growing economy is of course confidence. This was the great insight of Keynes. The economy is driven much more by psychological factors – by his memorable phrase “animal spirits” – than by objective economic ones. If confidence becomes depressed, no amount of stimulus will persuade businesses to spend on their investment plans or hire more people.
My colleague Rickard Nyman has been analysing tweets in the London area every day from the beginning of June. Now, there is an awful lot of rubbish on Twitter, but the latest machine learning algorithms enable you to dive into the mud and come up with pretty polished estimates of overall sentiment. The day after the vote, 24 June, stands out as one huge hangover. The balance of London sentiment went very sharply negative. Yet by the end of June, it was back to where it stood at the start of the month. Sentiment wobbles along for the rest of the summer, but since the start of September a strong positive upward trend has set in.
Most tweets of course are about the fortunes of Arsenal, going on holiday, what was on TV, and not directly about the economy. But the overall mood of Londoners has become much more positive over the last few weeks.
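The daily “balance of sentiment” behind these findings can be sketched very simply. The toy word lists below stand in for the machine-learning classifier actually used, which is far more sophisticated; everything here is purely illustrative:

```python
from collections import defaultdict
from datetime import date

# Toy sentiment lexicon - a stand-in for a trained classifier.
POSITIVE = {"great", "happy", "confident", "good", "optimistic"}
NEGATIVE = {"bad", "worried", "fear", "sad", "uncertain"}

def tweet_score(text):
    """Classify a tweet as net positive (+1), net negative (-1) or neutral (0)."""
    words = text.lower().split()
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (balance > 0) - (balance < 0)

def daily_balance(tweets):
    """Average sentiment per day; tweets is a list of (date, text) pairs."""
    by_day = defaultdict(list)
    for day, text in tweets:
        by_day[day].append(tweet_score(text))
    return {day: sum(scores) / len(scores) for day, scores in sorted(by_day.items())}

sample = [
    (date(2016, 6, 24), "worried and uncertain about everything"),
    (date(2016, 6, 24), "bad day, real fear in the city"),
    (date(2016, 6, 30), "feeling confident and optimistic again"),
]
print(daily_balance(sample))
```

Averaging over the hundreds of thousands of tweets posted in London each day washes out the football and the television, leaving the underlying mood.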
More evidence of positive feelings was provided at a seminar organised last week by the law firm Linklaters and the property outfit Strutt and Parker. The focus was on commercial property, which is notoriously sensitive to the state of the economy. There is no doubt that the sector took a hit immediately after the Brexit vote. But every speaker, from quite different backgrounds, struck a decidedly optimistic note about both commercial property in particular and the UK in general.
Andy Martin of Strutts noted that the value of deals in 2016 is on track to be close to the levels of 2007, the pre-recession peak year for the economy, despite the sharp pause in the summer. Both Zach Vaughan from Brookfield, one of the largest investors in global real estate, and Chris Morrish, recently retired as head of European real estate for Singapore’s sovereign wealth fund GIC, confirmed the continued strong attraction of the UK for overseas investors.
A coherent picture emerges from this diverse mix of official statistics, social media conversations and global commercial property perspectives. The UK economy is thriving.
As published in City AM on Wednesday 5th October 2016
Image: Twitter by Christopher is licensed under CC BY 2.0
It’s an exciting time of the year for many young people, with some setting off to university for the first time and others starting to polish their applications for next year.
Good news if you have been accepted to read economics at Cambridge, say, or business studies at Oxford. A survey by the Sunday Times shows that the average salary, just six months after graduating, is over £40,000. If instead you are off to Worcester to do drama and dance or Liverpool Hope for psychology, you can expect around £13,000, just under half the value of average earnings across the workforce as a whole.
Figures like these raise the question of whether it is worthwhile studying many of the courses which are on offer. It is a question which is increasingly pressing. Last year, a report commissioned by the Chartered Institute of Personnel and Development (CIPD) claimed that no fewer than 58 per cent of the UK’s graduates are in non-graduate jobs compared to only 10 per cent in Germany. The growth in graduates is outstripping the growth in high skilled jobs across the EU, but especially in Britain.
Successive governments have made a fetish of higher education. The Conservatives elevated a whole raft of polytechnics to university status in 1992, followed by a second wave under New Labour in the 2000s. Tony Blair was insistent that his target be met of 50 per cent of each year group of young people going to university.
The mismatch between the supply of and demand for graduates is not something new. It was already well known when Blair invented his mantra of “education, education, education”. Peter Dolton and Anna Vignoles, both then at Newcastle University, published a famous paper 20 years ago on over-education in the graduate labour market. Scientists measure the value of an academic paper by the number of citations it receives from other scholars. On this criterion, this one is a star.
They looked at a very large sample of graduates and their conclusion was stark. “We find that 38 per cent of graduates were overeducated for their first job and, even six years later 30 per cent of the sample were overeducated.” So the current estimate of 58 per cent by the CIPD, 20 years later, startling though it may seem, may not be too far off the mark. To be fair, other studies do come up with lower numbers. But they all demonstrate the same point. Lots of graduates end up in jobs which do not require a degree.
This is bad news for economic theory, which predicts that even if over-education is observed, it will only be a temporary phenomenon. Companies are assumed to adapt their production techniques to fully utilise the increased supply of skills.
Is it bad news for the students? A quantitative degree from a good university still commands a huge premium in terms of lifetime earnings. But estimates of the average amount extra that a graduate will earn conceal massive differences in outcomes. Increasingly, studying weak courses at weak institutions is simply not worthwhile.
As published in City AM on Wednesday 29th September 2016
Image: Graduation by Amy is licensed under CC BY 2.0