The Autumn Spending Review announced by the chancellor Sajid Javid barely raised a ripple last week.
Yet the increase planned in 2020/21 for what the Treasury calls “day-to-day departmental spending” is the highest for 15 years, no less than 4.1 per cent in real terms.
This spending pays the running costs of public services, the main component of course being public sector wages. An increase of this size ought to mean better services, although the Gordon Brown years demonstrated quite clearly that more spending need not mean an improved service.
This number represents only 37 per cent of total public spending. Considerably more is spent on welfare benefits, pensions, and interest on the national debt. The squeeze is still on here, so the overall rise in total current public spending is more modest, at just 2.0 per cent after allowing for inflation.
Nevertheless, Javid’s plan does represent a step up in the move away from austerity envisaged by the former chancellor Philip Hammond in last year’s autumn Spending Review.
Still, this pales in comparison to what is envisaged for the public finances under the current Republican administration in the US. The Congressional Budget Office (CBO) there notes that the federal budget deficit for 2019 will be $960bn. Budget deficits are projected to average $1.2 trillion a year between 2020 and 2029.
The CBO calculates that this will push up federal government debt to 95 per cent of GDP, the highest level since the late 1940s.
On both sides of the Atlantic, the move away from austerity represents a major shift in the narrative around public sector debt. It is now, it seems, okay to feel relaxed about government borrowing.
The mood in mainstream academic macroeconomics has also shifted. Ken Rogoff, a former chief economist at the IMF, said in 2010 that a debt-to-GDP ratio above 90 per cent risked a substantial reduction in the long-term growth rate (a view shared by many in developed countries), triggering a wave of austerity.
Yet in February this year, he changed his tune, saying that the steady decline in global real interest rates meant that the debt-to-GDP ratio was no longer a concern.
Olivier Blanchard, another former IMF chief economist, made a similar point earlier this year, when he argued that, as the real rate of interest is lower than the real growth rate, future interest payments on debt could be met out of the proceeds of growth.
While this is not necessarily unusual (such a state of affairs has been the case often enough in the last 150 years), the argument that governments should use it as an excuse to build up debt very definitely is.
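Blanchard’s point can be made concrete with the standard debt-accounting identity: the debt-to-GDP ratio b evolves as b(t+1) = b(t) × (1 + r)/(1 + g) + d, where r is the real interest rate, g the real growth rate, and d the primary deficit as a share of GDP. A minimal sketch, with purely illustrative figures rather than forecasts:

```python
# Sketch of the r < g argument: when the real interest rate is below the
# real growth rate, the debt-to-GDP ratio shrinks over time even without
# repayment, provided the primary budget is balanced. Numbers are
# illustrative assumptions, not projections.

def debt_ratio_path(b0, r, g, d, years):
    """Project the debt-to-GDP ratio forward under constant r, g and d."""
    path = [b0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g) + d)
    return path

# Debt at 95% of GDP, real rate 0.5%, real growth 2%, balanced primary budget
path = debt_ratio_path(0.95, 0.005, 0.02, 0.0, 30)
print(f"debt/GDP after 30 years: {path[-1]:.2f}")
```

With r below g and the primary budget balanced, the ratio falls each year by the factor (1 + r)/(1 + g): past debt is gradually outgrown rather than repaid.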
The shift in attitude has implications in politics, too. For years, right-wing parties have painted the left as spendthrift and irresponsible.
With an election seemingly inevitable in the UK, it will be interesting to see whether the Conservatives – having turned on the taps – can make that narrative stick to Jeremy Corbyn.
As published in City AM Wednesday 11th September 2019
Image: Taps via PxHere is licensed under CC0 1.0
The expulsion of Bury FC from the English Football League last week continues to generate a huge amount of sound and fury.
Regardless of the apparently dodgy nature of some of Bury’s transactions, the simple fact is that the club overspent massively in order to gain promotion from League Two last season.
The surge in the costs involved in running a football club has been dramatic. Over the summer, for example, Premier League clubs were involved in transfer deals worth a collective £1.4bn. Marcus Rashford signed a new contract with Manchester United in July worth £250,000 a week, and quite a few players get even more.
Professional sporting clubs are an unusual sort of beast from the perspective of economic theory.
Economists agree that companies act to maximise profit. The concept is not completely clear-cut – a pricing policy, for example, which gouges customers and increases profits this year may eventually prove disastrous.
But generally, sustainable profit is the aim. Sporting clubs, however, do not even attempt to maximise profits. Their principal motivation is to maximise costs. Spending more money means getting better players. And better players mean more success on the field.
The correlation between the total amount a team spends on its players and its league position is not perfect, but it is high. It is the principal reason for success. So there is an inherent tendency for clubs to live beyond their means, unlike almost all other businesses. It is performance on the field which matters.
The behaviour of clubs, however, nicely illustrates another concept in economics. This is the potential conflict between individual and collective rationality. It is in the collective interests of top soccer clubs to scale down the payments to players. The problem for clubs is that if they offered stars laughably small amounts, say a mere £100,000 a week, other top clubs both here and in Europe would entice them away.
It is not in the individual interests of the club to allow this to happen.
One possible solution is for the regulatory body of a game to impose a binding salary cap, limiting the total amount which can be spent on a team.
This works well in American football, for example. As an approximation, all the teams spend the full amount. So unlike soccer, where a handful of clubs dominate, success rotates among the teams.
There was, in fact, a maximum wage in force in soccer until 1961. It was only twice average earnings, around £1,200 a week in today’s money, and was ended by the threat of a players’ strike.
Nowadays, of course, players able to perform in the Premier League are part of a global market. American footballers are not. The game is hardly played anywhere else.
Players with Premier League skills are thus exported to and imported from abroad – what economics describes as a tradeable market. In the lower divisions, however, the players are non-tradeable in this sense.
A salary cap, no matter how tightly drawn up, is always open to creative abuse. But economics suggests that it is the way forward for teams in divisions below the Premier League.
As published in City AM Wednesday 4th September 2019
Image: In Memory of Bury FC by David Dixon via Geograph is licensed under CC BY-SA 2.0
A report published by Deloitte a couple of weeks ago will have enhanced the feeling of holiday wellbeing for many people.
The median annual pay for bosses of FTSE 100 companies fell in 2018 to £3.4m, compared to £4m in 2017.
This is the lowest level since 2014, when the UK brought in rules which require firms to report a single figure for chief executive pay.
Criticism of the remuneration of top corporate executives has been growing strongly for some time. In June, for example, the shareholders of Netflix voted down – albeit by a very narrow margin – the firm’s executive officer compensation plan.
Netflix grew from nothing in 1997 to a current value of around $150bn, and over the last four years its share price has almost trebled. But shareholders still did not like the chief executive’s proposed package.
Top executives may feel rather aggrieved at this mounting unease over their “emoluments” – a much more suitable word for these grandiose packages.
After all, does not basic economics provide a sound justification for their pay? In the textbooks, prices are set by the interaction of supply and demand. If something is in short supply, such as the skills of executives, the price will be bid up.
Remarkably, a more sophisticated version of this argument is advanced by some leading members of the economics profession.
Greg Mankiw, a top Harvard economist, is one of the biggest cheerleaders. Technological change, he argues, usually increases the demand for skilled labour. As such, unless society is able to educate and train people so that the supply of skilled labour increases at least as much as the demand, the earnings of skilled workers will rise relative to the rest of the labour force.
Technology is further invoked by some to justify the pay of those at the top. Because of a truly dramatic increase in the level of connectivity in society, highly talented individuals have been able to leverage their talents across global markets and capture rewards that would have been unimaginable in earlier times.
This is certainly the case with stars of popular culture and sport. A hundred years ago, for example, the only people who could have any direct experience of Manchester United playing football live were those present in the stadium during the game. Now, the team can be watched by literally billions around the world, using a variety of delivery channels, and the players reap huge amounts as a result.
However, it is not at all apparent that the same argument applies to corporate executives.
The huge growth in business schools in recent decades, for example, has presumably led to a substantial increase in the supply of people capable of filling top executive roles.
The fact is that, in many situations, there is an inherent indeterminacy around a price – or a pay package – when it is being set. The Oxford economist Francis Edgeworth argued over a century ago that “in pure economics there is only one theorem, but that it is a very difficult one: the theory of bargain”.
Corporate executives have certainly exhibited great bargaining skills in recent decades. But it seems that, at last, their bluff is being called.
As published in City AM Wednesday 28th August 2019
Image: Handshake via pxhere licensed under CC0 1.0
August is traditionally the silly season. Brexit makes this year slightly different, of course, but it is good to see a fine British tradition still being preserved. Silly stories abound.
Sajid Javid was linked (erroneously, he now claims) with the idea of fixing the housing market by making sellers pay the stamp duty rather than the buyers.
Sentiment around the housing market has been weakening for some time. This is certainly the case in London, where the large amounts of duty charged on expensive sales have acted as an additional deterrent.
But we might reasonably wonder how shifting the tax from the buyer to the seller will make any difference at all.
An activity, the sale of a house, is being taxed. At one extreme, the buyer could offer the asking price less the full amount of the tax. At the other, the seller could add the tax to the price. Or the two could strike a bargain around what proportion each will pay. It really does not matter who is legally responsible for the tax – its existence will still have an impact on people’s desire and ability to buy and sell.
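The irrelevance of legal liability to the economic burden can be seen in the textbook supply-and-demand model. A small sketch, with entirely made-up linear demand and supply curves:

```python
# Tax incidence in a linear supply-and-demand market: the equilibrium is
# identical whether the buyer or the seller is legally liable for a per-unit
# tax t. All figures below are illustrative assumptions.

def equilibrium(a, b, c, d, t, levied_on):
    """Linear demand p = a - b*q, supply p = c + d*q, per-unit tax t.

    Returns (quantity, price paid by buyer, price received by seller).
    """
    if levied_on == "buyer":
        # Buyer pays the market price plus the tax: a - b*q - t = c + d*q
        q = (a - c - t) / (b + d)
        seller_price = c + d * q
        buyer_price = seller_price + t
    else:
        # Seller remits the tax out of the market price: a - b*q = c + d*q + t
        q = (a - c - t) / (b + d)
        buyer_price = a - b * q
        seller_price = buyer_price - t
    return q, buyer_price, seller_price

print(equilibrium(100, 1, 20, 1, 10, "buyer"))
print(equilibrium(100, 1, 20, 1, 10, "seller"))  # same outcome either way
```

However the statute is written, the split of the burden is determined by the slopes (the elasticities) of the two curves, not by who hands the money to the taxman.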
An even dafter policy idea emerged from Bright Blue, ostensibly a “pressure group for liberal conservatism”. The think tank seems to have forgotten the most basic principles of how economic incentives operate.
The policy wonks proposed higher fines for motorists who leave their engines idling. They went on to suggest that a proportion of the fines should be paid to the people who reported offenders to the police. So for the cost of a phone call or the time spent composing an email, you could trouser around £50.
That beats working as far as most people are concerned. The police would be swamped, not just with genuine incidents, but with scores being settled.
And Prince Harry seems to be having a silly season all of his own. He suggested that people can be “unconsciously” racist. He himself demonstrates how “unconsciously” one can win the Monty Python Upper Class Twit of the Year award.
Not content with lecturing a Google gathering in Sicily – to which guests arrived in hundreds of private planes – on the evils of climate change, he and Meghan Markle flew for a six-day break to Ibiza, again on a private jet. Only 48 hours after their return, off they went again on an “Uber for billionaires” to Nice.
It is in these trying times that the house Bible of the metropolitan liberal, the Guardian newspaper, reliably provides light relief. The travel pages eulogise “eco-lodges” in places halfway across the world like Cambodia and Peru.
A piece last week created mental agony for the writer. A social enterprise, Beyond Food, is helping homeless people turn their lives around by teaching them a skill – a Good Thing not just in itself, but because it is not run by wicked capitalists.
But the skill is how to barbecue meat – boo, hiss. At least, we are assured, the meat is “sustainable”.
Markets, incentives, and social norms are the standard meat and drink of this column – and normal service on them will be resumed next week, when silly season is at last drawing to an end.
As published in City AM Wednesday 21st August 2019
National Grid is getting a kicking in the aftermath of last Friday’s electricity blackout.
Potential explanations swirl around both social and mainstream media. The system cannot cope with too much wind-generated electricity. The Russians hacked into the computers.
A puzzling aspect is that the initial shock to the National Grid was a very small one. The gas-fired station at Little Barford in Bedfordshire went down. Within minutes, a massive power outage had taken place.
Rebecca Long-Bailey, Labour’s energy secretary, has homed in on this. The fact that a small outage had such huge consequences is, to her, clear evidence of under-investment, and makes the case for public ownership. But scientific advances over the past 20 years provide a quite different perspective.
The National Grid is, by definition, a network. It receives supplies from various power stations, and the energy is then transmitted to businesses and households via power lines.
A key discovery in the maths of how things spread across networks is that in any networked system, any shock, no matter how small, has the potential to create a cascade across the system as a whole.
Duncan Watts was at Columbia University when he published a groundbreaking paper on this in 2002 with the austere title “A simple model of global cascades on random networks”. He was subsequently snapped up by first Yahoo, then Microsoft.
Watts set up a highly abstract model of nodes, all of equal importance, connected on a network. Initially, all of these were, putting it into the Grid context, working well. He investigated the consequences of what happens when a very small number of them malfunctioned.
The results were surprising. Most of the time, the shocks – made deliberately small by assumption – were contained and the network continued to function well. But occasionally, a small shock triggered a system-wide collapse.
Watts coined the phrase “robust yet fragile” to describe this phenomenon. Most of the time, a network is robust when it is given a small shock. But a shock of the same size can, from time to time, percolate through the system.
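A toy version of Watts’ threshold model can be run in a few lines. Each node fails once the fraction of its failed neighbours reaches its threshold; the network here is a simple random graph, and all the parameters are illustrative assumptions rather than anything calibrated to a real grid:

```python
import random

# Toy reconstruction of a Watts-style threshold cascade: a node flips to the
# "failed" state once the fraction of its failed neighbours reaches its
# threshold. Network size, average degree and threshold are illustrative.

def cascade_size(n=300, avg_degree=4, threshold=0.25, seed=0):
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    # Build an Erdos-Renyi random graph as adjacency lists
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbours[i].append(j)
                neighbours[j].append(i)
    # Seed the cascade with a single failed node
    failed = [False] * n
    failed[0] = True
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if failed[i] or not neighbours[i]:
                continue
            frac = sum(failed[j] for j in neighbours[i]) / len(neighbours[i])
            if frac >= threshold:
                failed[i] = True
                changed = True
    return sum(failed)

# Identical parameters, different random graphs: cascade sizes can range
# from a handful of nodes to almost the whole system.
sizes = [cascade_size(seed=s) for s in range(10)]
print(sizes)
```

The same tiny initial shock is applied every time; whether it fizzles out or percolates depends on the fine detail of the wiring, which is the “robust yet fragile” property in miniature.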
In the mid-2000s, the academic Rich Colbaugh was commissioned by the Department of Defense to look into the US power grid.
The physical connectivity of the network had increased substantially due to advances in communication and control technology.
The total number of outages had fallen – when one plant failed, it was easier to activate a back-up. But the frequency of very large outages, while still rare, had increased.
I collaborated with Colbaugh on this seeming paradox. We showed that it is in fact an inherent property of networked systems. Increasing the number of connections causes an improvement in the performance of the system, yet at the same time, it makes it more vulnerable to catastrophic failures on a system-wide scale.
There may still prove to be a simple explanation of the sort loved by decision-makers the world over. But the science of networks may shed more light than theories based on conspiracy and incompetence.
As published in City AM Wednesday 14th August 2019
Image: Electric Pylons via PublicDomainPictures licensed under CC0 1.0
Another week, another retailer biting the dust. The baked potato specialist Spudulike has closed all 37 of its branches, with a loss of nearly 300 jobs.
Shopping centres are undergoing a sudden and dramatic squeeze, with many retailers only able to stay in business if granted a dramatic rent reduction.
Last week, Intu Properties, owners of the prestigious Lakeside and Trafford centres, announced a loss of £840m pre-tax. Net rental income fell by 18 per cent in the first half of this year.
Local authorities have become big owners of shopping centres to try to revive their town centres. But in most cases, the council taxpayer is taking a big hit. Shropshire Council, for example, bought three shopping centres in Shrewsbury early last year.
Their value has since fallen by over 20 per cent. The main reason is well known: more and more consumers are switching to the internet.
The latest estimates from the Office for National Statistics show that online sales now account for 18 per cent of all retail sales – and this is rising rapidly. In the year to October 2018, online sales grew 12.6 per cent. The only sector resisting the internet revolution is food, where the growth in online was only 1.8 per cent. The internet itself has been around long enough for a whole generation to have grown up unable to imagine life without it.
What is new is the surge in retail online activity in the past few years. The potential has been there for some time, but it is only now having a real impact. Why should this be?
Some 50 years ago, a larger-than-life Texan business school academic, Frank Bass, offered an explanation. He formulated a simple differential equation – and in differential equation terms it certainly is simple – which describes how new products get adopted in a population.
Bass made millions from his discovery, and it is still widely used in marketing circles today.
The basic idea is that people adopting a new product – in this case, shopping on the internet – can be classified into two groups. A fairly small set are innovators, those who are willing to experiment with something new. Most people are imitators, who wait and see how the innovators get on.
The speed of adoption, if it happens at all, of any new product is determined by the interactions between the two types of consumer and the degree to which they are willing to innovate or imitate.
Remarkably, given its simplicity, the model gives a very good account of the growth of a whole range of products.
In the early stages of a new product, growth is always slow. Almost all those buying are innovators. Then, suddenly, a critical point is reached. The imitators start to swarm in and growth becomes rapid.
Modern network theory offers a more sophisticated approach, but it is still essentially based on the motivations described by Bass.
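The Bass equation itself is compact enough to state: if F(t) is the fraction of eventual adopters who have adopted by time t, then dF/dt = (p + qF)(1 − F), where p measures innovation and q imitation. A minimal numerical sketch, with parameter values of the order often quoted in the marketing literature (assumptions, not estimates for online shopping):

```python
# Bass diffusion model: the fraction F of eventual adopters who have adopted
# by time t follows dF/dt = (p + q*F) * (1 - F), with p the innovation
# coefficient and q the imitation coefficient. Values below are illustrative.

def bass_path(p=0.03, q=0.38, steps=30, dt=1.0):
    """Integrate the Bass equation with a simple Euler step."""
    F = 0.0
    path = [F]
    for _ in range(steps):
        F += dt * (p + q * F) * (1 - F)
        path.append(F)
    return path

path = bass_path()
# Growth is slow at first (innovators only), then accelerates sharply once
# imitators swarm in, producing the familiar S-shaped adoption curve.
print([round(F, 2) for F in path])
```

The critical point in the column is visible in the numbers: adoption crawls while the p term dominates, then takes off once the qF imitation term becomes large.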
Either way, the future for both retail and shopping centres looks bleak, unless they themselves find some dramatic way to innovate and alter their offer.
As published in City AM Wednesday 7th August 2019
Image: Empty High Street via Geograph licensed under CC BY-SA 2.0
There has been much discussion on the gender and ethnic composition of Boris Johnson’s cabinet.
The Channel 4 Fact Check site calculates that 33 MPs are entitled to attend cabinet. Of these, six – 18 per cent – are from an ethnic minority background.
According to the 2011 Census, 14 per cent of the UK population were not white, potentially making the Boris cabinet more ethnically diverse than the country as a whole.
However, on gender diversity, the cabinet appears to fall short. Although it includes eight women compared to the six under Theresa May, this is still only 24 per cent, compared to the 51 per cent of the total population who are female.
The under-representation of women at the highest levels is not only an issue in politics, of course. And a scientific paper published earlier this month on the Cornell University archive site offers an intriguing explanation for why this is.
Albert-László Barabási, one of the world’s leading experts on the science of networks, and his colleagues examine gender inequality in scientific careers, across both disciplines and countries.
The team investigates the publication records of more than 1.5m academics since the 1950s. And the work is of considerable practical relevance: publication is a key way by which academics get promoted.
The paper’s main focus is on the so-called STEM disciplines – science, technology, engineering and maths – where gender inequalities are the most marked. For example, in physics only 15 per cent of all active authors are female.
Just as important are the persistent productivity and impact differences between the genders. In the STEM subjects, on average male scientists publish 13.2 papers during their careers, while female authors publish only 9.6.
The differences are even more marked when looking at quality as well as quantity. The scientific importance of a paper is measured by how many other scholars cite it in their own published work. Male authors in the top 20 per cent in terms of career impact receive 36 per cent more citations than women do.
The gradual increase in the number of women in the sciences compared to 60 years ago has actually led to a widening of the productivity and impact gaps.
Yet Barabasi and his colleagues find that, even at the top level, there is no difference in the annual productivity rates and impact of male and female scientists. The gaps only arise when these are cumulated over the course of a career.
Essentially, men and women produce work of equal quantity and quality. But men have longer research careers, and as a result, get to pick up more of the plum jobs.
Significantly, the drop-out rate from active publishing is higher among women than men at every stage of their career, from post-doctoral student to professor.
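The mechanism is simple compounding, and can be sketched with a stylised calculation: give both groups identical annual output, but different annual drop-out rates, and a substantial career-total gap emerges. The rates below are illustrative assumptions, not the paper’s estimates:

```python
# Stylised version of the paper's finding: identical annual productivity,
# but a higher annual drop-out rate for one group, cumulates into a large
# gap in career-total output. All rates are assumptions for illustration.

def expected_career_output(papers_per_year, annual_dropout_rate, max_years=60):
    """Expected total papers when each year carries the same drop-out risk."""
    total = 0.0
    surviving = 1.0
    for _ in range(max_years):
        total += surviving * papers_per_year
        surviving *= 1 - annual_dropout_rate
    return total

same_rate = 0.7  # papers per year, identical for both groups by assumption
men = expected_career_output(same_rate, annual_dropout_rate=0.05)
women = expected_career_output(same_rate, annual_dropout_rate=0.07)
print(f"expected career totals: {men:.1f} vs {women:.1f}")
```

Even a two-percentage-point difference in the annual drop-out rate, compounded over decades, produces a career gap of the order the paper reports – with no difference at all in year-by-year performance.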
From this, we can conclude that the key reason for gender inequality of outcome is not the processes by which scientists do research, but how long they keep it up for. Policy interventions must focus on retaining women in science at every stage. Perhaps that is something to think about in
politics too, when considering the gender composition of the next cabinet.
As published in City AM Wednesday 31st July 2019
Citizens’ assemblies have become the height of fashion.
The London borough of Camden is currently holding one on how to reduce carbon emissions in the area.
Last month, Nicola Sturgeon announced plans to set one up to consider constitutional issues in Scotland.
The Irish government’s assembly on abortion fed into the 2018 referendum on the matter.
The basic idea is that a small number of citizens, reflecting the socio-demographic characteristics of the population, are selected at random.
The assembly considers a particular topic. Members get the chance to discuss the matter in much more depth than they would usually do. At the close, recommendations are made to the political authority which set the assembly up.
It all sounds plausible. But economic theory gives us good reasons to be very suspicious of the concept.
One feature of the assemblies is that they are addressed by “experts”, who can be questioned by members. This is intended to raise the level of both debate and understanding among the ordinary people who make up the assembly.
Imagine, however, that a citizens’ assembly had been used in 2016 instead of the Brexit referendum to decide the UK’s position in Europe.
The overwhelming majority of UK economists were opposed to Brexit. The so-called experts would have spoken on the Treasury’s economic forecasts in Project Fear. The hapless assembly members would have been assured that a deep and immediate recession would follow any decision to leave the EU.
Of course, after the event, everyone now knows that this expertise was misplaced.
But an important concept in behavioural economics, supported by a lot of empirical evidence, is that of “authority bias”. People in authoritative positions tend to be trusted. It would have been very difficult for assembly members to go against the expert advice in a Brexit assembly.
A famous experiment by Stanley Milgram in 1963 showed that many people were willing to administer painful electric shocks to others when instructed to do so by an experimenter in a lab coat. The shocks, of course, were imaginary, but the participants supposedly administering them to an unseen stranger did not know this. The findings have been repeated many times.
Experts in the social sciences increasingly share a set of metropolitan liberal values. It is these experts who will be presented to assemblies.
Economic theory is in essence about how agents – people, firms, governments – decide to allocate scarce resources. An assembly would simply not be properly equipped to consider many policy issues without first of all being given a thorough understanding of the fundamental principles of economics.
One in particular, namely opportunity cost, is essential. When an option is chosen, this is the “cost” incurred by not enjoying the benefit associated with the best alternative choice. The concept would have to play a major role in any discussion of climate change, for example.
Strange as it may seem, this idea never seems to be put forward by advocates of the assemblies.
Representative democracy, for all its faults, remains a much better way of making decisions than handing yet more power to so-called experts.
As published in City AM Wednesday 24th July 2019
Image: School Children Protesting by Goran H via Pixabay licensed under CC0 1.0
Immediate fears of a recession in the UK economy were eased last week with the latest Office for National Statistics (ONS) estimate of monthly GDP.
The economy had shrunk in April, but growth resumed in May.
This has not prevented widespread conjecture that a recession is imminent. The Resolution Foundation claimed last weekend that the risk of a recession is at its highest since 2007, the year immediately before the financial crisis.
The most serious recessions are caused by the debts of the private sector – households and firms – growing too big. Repayments become challenging, and fears grow among lenders that the debt will not be repaid.
At the end of 2007, for example, household debt in the UK was 93 per cent of GDP. Two decades previously, in 1987, the ratio of debt to GDP was only 49 per cent. This crept up to 57 per cent at the end of 1997. But the opening years of the twenty-first century saw a surge in debt levels.
The same is true of corporate debt. This was 95 per cent of GDP at the end of 2007, having been only 39 per cent 20 years previously.
Debt remained high at the end of 2018, the latest date for which the Bank for International Settlements data is available. Household debt was 87 per cent of GDP and corporate debt 84 per cent.
But the ratios are lower than they were at the start of the financial crisis of the late 2000s. The trend over the past five years is broadly flat. There is no sign of the rapid accumulation of debt which characterised the 2000s.
With my UCL colleague Rickard Nyman, I have been using artificial intelligence techniques to measure daily levels of sentiment on social media in the Greater London area since June 2016, and the general level of sentiment among individuals shows no sign of collapse either.
Official forecasts insisted that a sharp recession would take place in the UK in the second half of 2016 if the electorate voted to leave the EU. But the social media based sentiment measure showed no signs at all of collapse at the time.
We could see in real time that it became more positive after the referendum, even in the Remain stronghold of London. And, of course, there was no recession.
Over the past three months, sentiment shows no change on its level in the same period in 2018. Admittedly, the latter was definitely lower than in 2017, a slowdown which ONS data, appearing several months later, confirmed.
None of this means that the economy is roaring away. Growth has been modest, and while debt levels are being controlled, their height from a historical perspective means that they act as a constraint on spending plans.
Ironically, perhaps the biggest threat of a recession comes from the EU, and specifically from Germany, the Remainers’ paradise. It is much more dependent on manufacturing than the UK, and its exports have been hit by US-China trade tensions. The warnings from economists in Germany are not about a mere recession, but of a potentially severe one.
As published in City AM Wednesday 17th July 2019
Image: Shopping via Max Pixel licensed under CC0 1.0
Boris Johnson created a furore last week by announcing that he was considering getting rid of the so-called sugar tax.
Was he right to question the levy, or does it serve a purpose?
Introduced in April 2018, the levy means that manufacturers now have to pay more tax if their drinks contain a high amount of sugar.
The producers can still make high sugar drinks and pass the extra cost onto the customers, but over 50 per cent of them seem to have responded by reformulating and cutting back the sugar content of their products.
Now, we know that well-intentioned policies such as the sugar tax can have unforeseen consequences.
For instance, an important paper in the American Economic Review in 2006 by Jerome Adda and Francesca Cornaglia, then at UCL, examined the impact of the different tax rates on cigarettes imposed across various American states.
They found that the higher the tax, the fewer cigarettes were bought. But smokers compensated by both switching to brands with higher tar content and by smoking further down the stub.
If anything, higher taxes led to a more damaging health outcome.
There’s also the – admittedly less firmly based – anecdotal evidence of a rise in shoplifting in Scotland after the minimum pricing law on alcohol was introduced last year. The incentive to steal has certainly been created: a two-litre bottle of strong cider that could be bought for just £2.50 now costs at least £7.50.
On the sugar tax, however, Boris is not on such strong ground.
A 2013 study published in the well-regarded PLOS ONE journal found a clear positive relationship – using evidence across 175 countries – between sugar consumption and national diabetes rates.
Similarly, I published a paper last December in Palgrave Communications with Alex Bentley and Damian Ruck, two anthropologists at the University of Tennessee, looking at obesity and diabetes rates over time in the American states and counties (the subdivisions of the states).
The growth in obesity (and with it, diabetes) in America has been both rapid and frightening.
In 1990, Mississippi had the highest obesity rate of any state, at 15 per cent of the population. But by 2015, such a population would have looked exceptionally svelte – the lowest obesity rate was 22 per cent in Colorado, and several states had rates over 35 per cent.
In 1990, there was no correlation between household income and obesity or diabetes rates. Yet by 2015, a strong negative correlation existed both across the states and across the counties within each state. Poor people had become hugely and disproportionately fat.
The emergence of so-called food deserts – areas where the population has difficulty in accessing affordable and nutritious food – is an important determinant. The evidence also suggests that the growth of high fructose corn syrup in the food economy is another.
There is a definite role for public policy in combating obesity and diabetes. Both the products on the shelves of supermarkets and the content of those products are legitimate concerns.
Of course, the negative link between obesity and income suggests that relatively modest gains in alleviating poverty could yield substantial reductions in obesity and diabetes rates. And the temptation to mock the nanny state is always strong.
But in the case of sugar, nanny sometimes does know best.