The new Italian government looks set to cause shock waves across Europe.
The two parties promise mass deportations of immigrants and huge increases in public spending.
Both the social and the economic policies of the Italian coalition clash directly with those of the European Commission, Germany, and France. They represent a decisive break with the consensual approach of the past.
The performance of the UK economy since the financial crisis of the late 2000s has been disappointing. But it has positively boomed in comparison with that of Italy.
Italian GDP, according to the OECD’s database, peaked in the first quarter of 2008. By the spring of 2009, it had collapsed by eight per cent. There was a feeble recovery, before it started to fall again in late 2011. Even now, GDP remains over five per cent below its value of 10 years ago.
It not only looks dramatic – it is dramatic. The failure of the Italian economy to recover for a whole decade breaks all records, not just in Italy itself, but across the western economies as a whole.
Angus Maddison spent many years at the OECD constructing estimates of GDP in the western economies going back to 1870. His database puts the recent performance of the Italian economy squarely in the spotlight.
Some capitalist economies have experienced truly devastating collapses: Austria, for example, when it was overrun by the Red Army in 1945, and Japan when it was subjected to massive nuclear and conventional bombing attacks in the same year.
But leaving the World War years and their immediate aftermath aside, the Maddison database yields, prior to the recent financial crisis, 191 instances of peacetime recession across some 20 western capitalist economies since 1870: years in which GDP fell (the data is annual).
Out of these 191 examples, on 113 occasions GDP bounced back above its previous peak value the following year. Two years after a fall, the peak had been regained no less than 151 times. So most recessions are very short. Capitalism is a very resilient system.
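The bounce-back counts come from a simple exercise: scan an annual GDP series, flag each year in which GDP falls, and count the years until the previous peak is regained. A sketch in Python, run on made-up illustrative data rather than the Maddison series:

```python
# Sketch: counting how quickly annual GDP regains its pre-recession peak.
# The GDP series below is illustrative, not the Maddison data.

def recovery_years(gdp):
    """For each year GDP falls, count the years until the previous peak
    is regained. Returns a list of durations (None if not regained in-sample)."""
    durations = []
    for i in range(1, len(gdp)):
        if gdp[i] < gdp[i - 1]:           # a recession year: GDP fell
            peak = gdp[i - 1]
            recovered = None
            for j in range(i + 1, len(gdp)):
                if gdp[j] > peak:
                    recovered = j - i     # years from the fall to regaining the peak
                    break
            durations.append(recovered)
    return durations

# Illustrative series: one one-year rebound, one two-year recovery.
gdp = [100, 98, 101, 103, 100, 102, 104]
print(recovery_years(gdp))  # → [1, 2]
```

On the Maddison data, the same scan would show most entries equal to 1 or 2, which is the sense in which most capitalist recessions are very short.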
The previous longest recession on record across the west as a whole was that of the United States. The collapse during the Great Depression of the early 1930s was so severe that, even with a boom later in the decade, it took until 1939 to recover the peak 1929 level.
That is the context in which we should view Italy’s decade-long recession. It is hardly surprising in the circumstances that the Italian electorate has supported parties which are pledged to overthrow the status quo. If they do what they say they will, the impact will be greater than that of Brexit.
Greece and Portugal are in the same position as Italy, with GDP in both economies still being below pre-crisis levels. But they are small.
The bureaucrats in Brussels, aided by Germany, have allowed Italy to be crushed by the longest peacetime recession in the history of capitalism. No wonder they may now get their comeuppance.
As published in City AM Wednesday 24th May 2018
Image: Vatican Sunset by Giorgio Galeotti is licensed under CC Attribution 4.0
The Windrush scandal still bubbles away.
The bureaucrats at the Home Office are being condemned for their harsh behaviour. But it is scarcely their fault – they are simply reacting in a way entirely compatible with the economic theory of rational choice.
It emerged during the saga of Amber Rudd’s resignation that targets had been set within the Home Office for the number of illegal immigrants to be deported each year.
The relevant officials could have spent a lot of time and effort, for example, tracking down members of eastern European criminal gangs.
Instead, they focused on the much easier task of deporting elderly people who have lived in Britain for decades. Many of these immigrants, understandably, regarded themselves as British and had never bothered to hunt down and fill in the complicated forms which would have guaranteed their residency.
The bureaucrats reacted to targets exactly as rational choice theory predicts, seeking to meet them in a way which minimises the costs and effort to them. The problem lies not with the individuals trying to meet the targets, but with those who set the targets in the first place.
A great deal of the behaviour of the police can be accounted for in the same way.
You could try to tackle knife crime, at personal risk to yourself. But it is much easier and safer to investigate and “solve” hate crimes, such as when Tony Blair himself was interrogated for allegedly abusing the Welsh.
An interesting new Princeton University book by the American historian Jerry Z Muller provides many such examples from the United States. The Tyranny of Metrics is about the unintended consequences which often follow when targets are set to measure performance.
More precisely, it is about the adverse effects which can be created when standardised measures of performance are substituted for personal judgement based on experience.
Unsurprisingly to economists, if the rewards which agents receive become dependent on meeting quantitative targets, they will game the system to try and achieve them.
Muller’s book was inspired by the excellent TV series The Wire, set in Baltimore and based on real life. A central theme was the baleful influence which targets to cut crime had on the police department. They spent a lot of time arresting low-level drug dealers instead of going for the top villains.
A key point which Muller makes is that a decline in trust leads to an increased demand for measured accountability. A lack of trust can work two ways, with both the governors and the governed becoming suspicious of each other’s abilities or motives.
For example, it was under Margaret Thatcher that the first of a disastrous series of exam performance targets was introduced into British schools, because teachers were seen as being hopelessly and uselessly left-wing.
It is easy to introduce a target. It is much harder to generate a culture within an organisation in which everyone is trusted to do a good job. But, ultimately, it is the latter which works.
As published in City AM Wednesday 16th May 2018
Image: Home Office by Steve Cadman is licensed under CC BY-SA 2.0
The Trojans had to beware of Greeks bearing gifts.
In the same way, politicians need to be suspicious of petitions signed by economists.
The vast majority of the UK economics profession backed Project Fear, which predicted a rise in unemployment of half a million by the end of 2016. Instead, unemployment has fallen almost continuously since the Leave vote in June of that year.
In 1981, 364 economists signed up to urge Margaret Thatcher’s chancellor, Geoffrey Howe, to end austerity. No sooner was the ink dry than the economy started to boom.
The latest petition, on the face of it at least, should be taken more seriously. Over 1,000 American economists, including 14 Nobel Prize winners, have written to President Donald Trump. His trade policies, they claim, repeat the mistakes of the 1930s and threaten to plunge the world into another Great Depression.
It is a big claim to make. The financial crisis recession of the late 2000s was a mere blip by comparison – GDP in the US fell by four per cent. In the early 1930s, it dropped by over 20 per cent.
These economists cite the Smoot-Hawley Tariff Act of June 1930 as being a major cause of the massive recession. Output was already falling sharply in America. The claim is that the Act exacerbated the problem. It increased tariffs on over 20,000 types of products imported into the US, and was followed by a string of retaliatory measures across the world.
But the 1,140 economists – at the last count – who have signed the petition ignore a well-established result in economic theory. This is the so-called “theory of the second best”, published by Richard Lipsey and Kelvin Lancaster in 1956.
The economies of the west owe much of their success to the fact that they are market based. But they are not entirely the free market ideal of the economics textbooks. It might be thought that making them a bit more free market would make them even better. Conversely, taking them further away from the ideal, by imposing a trade tariff for example, would make things worse.
Lipsey and Lancaster showed that in general this result could not be demonstrated theoretically. It might be true. But only empirical evidence could show whether it was or not.
The petitioners should also look at a paper just published in the American Economic Association’s prestigious Journal of Economic Perspectives (JEP). Arnaud Costinot and Andrés Rodríguez-Clare, of MIT and UC Berkeley respectively, pose the question: what if America abolished all trade? Not just imposing a tariff, but having no trade at all.
Their detailed empirical evidence suggests that the effects are rather small. GDP would be between two and eight per cent less. A fall, it is true, but hardly one to generate such a furore over a policy, not of abolishing trade, but just making it that bit more expensive.
The JEP paper also shows, not surprisingly, that trade tends to widen inequality. The poor might lose out, even if the economy overall benefits.
President Trump seems to grasp the political importance of this.
As published in City AM Wednesday 9th May 2018
Image: Great Depression via Wikimedia Commons is licensed under CC0 1.0
British Gas is putting up the price of its dual fuel tariff by an average of 5.5 per cent at the end of this month. EDF, whose standard tariff is already one of the most expensive, will raise it by a further 1.4 per cent next month.
In the longer run, the widespread hope is that we will be saved by alternative energy sources, such as solar and wind. And indeed, these have become much more efficient because of major technological advances.
For example, in the US, the prices of solar panels and wind turbines per watt of capacity have fallen since 2009 by a massive 75 and 50 per cent respectively.
This must surely be an unequivocal Good Thing. But, like many things in economics, there are unintended consequences.
A greater reliance on solar and wind power has led in general to higher, not lower, electricity prices.
Michael Shellenberger of the California-based Environmental Progress think tank sets out the evidence in a couple of fascinating columns in Forbes magazine.
In 2017, the share of electricity coming from solar and wind was 26 per cent in Germany and as much as 53 per cent in Denmark. Yet these two countries have the most expensive electricity in Europe. In 2017, Germany spent €24.3bn above market electricity prices for its renewable energy feed-in tariffs.
Using evidence from across the individual states in America, Shellenberger shows a very strong and positive correlation between increases in the importance of the two alternative energy sources, and the rise in electricity prices over the 2009-17 period.
For the US as a whole, the share of solar and wind rose six percentage points, from just two to eight per cent, and electricity prices went up by seven per cent. In North Dakota, for example, the share increased by 18 percentage points, and electricity prices by 40 per cent. In California, the corresponding figures are 20 percentage points and 22 per cent. These are just two examples to illustrate the point.
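The cross-state relationship Shellenberger describes is, at bottom, a correlation between two series: the rise in the solar-and-wind share, and the rise in electricity prices. A sketch of the calculation, using hypothetical state figures rather than Shellenberger's actual data:

```python
# Sketch: correlating the rise in the solar+wind share with the rise in
# electricity prices across states. The numbers below are hypothetical,
# for illustration only - not Shellenberger's dataset.
from statistics import mean

share_change = [18, 20, 6, 3, 12]   # percentage-point rise in solar+wind share
price_change = [40, 22, 7, 2, 18]   # per cent rise in electricity prices

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(share_change, price_change), 2))  # strongly positive
```

A value close to +1 is what "a very strong and positive correlation" means; it establishes association, not causation, which is why Hirth's prediction (discussed next) matters.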
The correlations have not simply been discovered after the event. They were predicted, as Shellenberger points out, in a paper in Energy Policy by the young German economist Lion Hirth in 2013.
The basic problem is the fundamentally unreliable nature of both solar and wind. They produce too much energy when societies do not need it, and not enough when they do. So other forms of electricity production need to be kept ready and idle, so they can be switched on when the sun stops shining or the wind stops blowing.
The major exception to the trend of rising prices in America is Texas, which has exploited fracking on a large scale. Natural gas can be turned on and off very easily. Over the 2009-17 period, electricity prices fell by 14 per cent.
Yet here in the UK, Scotland has already banned fracking, and Labour wants to stop it across the rest of the country. Another example of ideology triumphing over empirical evidence.
As published in City AM Wednesday 3rd May 2018
The hostility towards the virtual monopolies enjoyed by tech giants such as Google and Facebook reveals some strange bedfellows.
The European Commission is well known for its enthusiasm for regulation. No surprise, then, that last year the Commission fined Google €2.4bn – billion! – for giving its own services preferential treatment in search results. No surprise either that, last month, the European commissioner for competition, Margrethe Vestager, said that the threat to split Google into smaller companies was being “kept open”.
What was perhaps more surprising was that last summer Steve Bannon, when he was still in the White House as President Trump’s top adviser, argued that Google and Facebook have become so dominant and essential that they should be regulated like public utilities.
It is clear that lots of politicians, both in Europe and America, dislike the dramatic changes brought about by the internet. What is equally clear is that many of them have an underdeveloped understanding of what we might term “cyber society”.
In his hearing before the US Senate committee, Mark Zuckerberg was asked how Facebook made its money – a point which should be obvious to anyone who has ever used it. Another senator asked him whether Twitter was “the same as what you do”.
In many ways, this is understandable. Revolutionary technologies take time for their implications both to emerge in full and to be grasped. William Huskisson, an MP and former cabinet minister, was famously run over and killed by Stephenson’s Rocket at the opening of the Liverpool and Manchester Railway in 1830. He simply did not appreciate how fast the miraculous new machine went.
Cyber society also creates fundamental challenges for economic theory. Consumers are assumed to be able to gather and process sufficient information to make a rational choice among the available alternatives.
In the context of, say, the supermarket, empirical evidence suggests this is a reasonable assumption to make. But I recently googled the term “mobile phones”. I received “about 155m” results. It is simply not possible to process the information from more than a minuscule fraction of these.
Even as far back as the 1950s, Nobel Laureate Herbert Simon believed the same point to be true in many situations. The model of rational choice had to be “replaced”. In essence, Simon argued that a good decision rule to use in such complex situations was to choose things which were already popular.
This sets up a positive feedback loop. The more popular your product is, the more popular it will become, simply because it is already popular.
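Simon's rule of "choose what is already popular" can be simulated in a few lines. The sketch below is a simple Pólya-urn-style illustration of the feedback loop, not a model taken from the column: five identical products start level, and each new buyer picks one with probability proportional to its current popularity.

```python
# Sketch of the positive feedback loop: each new buyer picks a product
# with probability proportional to its current popularity.
# The products are identical; only accumulated popularity differs.
import random

random.seed(1)                  # fixed seed so the run is repeatable
popularity = [1, 1, 1, 1, 1]    # five identical products start level

for _ in range(10_000):
    winner = random.choices(range(5), weights=popularity)[0]
    popularity[winner] += 1     # success breeds success

# Despite identical products, shares typically end up highly unequal,
# with one product taking a dominant slice of the market.
print(sorted(popularity, reverse=True))
```

Run it with different seeds and a different product usually wins, but the end state is almost always lopsided: which is the textbook-defying point about cyber-society markets.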
This means that the basic market structure encountered in cyber society is monopoly. It is the opposite extreme from the economics textbook, where the core model is one of a large number of small firms.
The most effective way of undermining these monopolies is by encouraging even more innovation – the exact opposite of the top-heavy regulation of the European Commission. Regulation of market structure worked with the American oil giants in the early 1900s. The twenty-first century demands a different approach.
As published in City AM Wednesday 26th April 2018
Mark Carney, the governor of the Bank of England, hit the headlines at the weekend, claiming that Marxism could once again become a prominent political force in the west.
Automation, it seems, may not just destroy millions of jobs. For all except a privileged minority of high-tech workers, the collapse in the demand for labour could hold down living standards for decades. In such a climate, Communism may seem an attractive political option.
Karl Marx as an economist is a bit of a curate’s egg, good in parts. In the late eighteenth and early nineteenth centuries, it was obvious that the system of factory production was dramatically different from anything which had ever existed, but it was thought that it might disappear just as suddenly as it had emerged.
Marx was the first major economist to see that the accumulation of capital in factories represented a new, permanent structure of the economy: capitalism. He developed a theory of the business cycle, the short-term fluctuations in economic growth, which is much more persuasive than the equilibrium-based theories which dominate academic macroeconomics today.
But he was completely wrong on a fundamental issue. Marx thought, correctly, that the build-up of capital and the advance of technology would create long-term growth in the economy. However, he believed that the capitalist class would expropriate all the gains. Wages would remain close to subsistence levels – the “immiseration of the working class” as he called it.
In fact, living standards have boomed for everyone in the west since the mid-nineteenth century. Leisure hours have increased dramatically and, far from being sent up chimneys at the age of three, young people today do not enter the labour force until at least 18.
Marx made the very frequent forecasting mistake of simply extrapolating the trend of the recent past.
In the early decades of the Industrial Revolution, just before he wrote, real wages were indeed held down, as the charts in Carney’s speech show. The benefits of growth accrued to those who owned the new machines. Marxists call this the phase of “primitive accumulation”.
But such a phase has characterised every single instance of an economy which enters into the sustained economic growth of the market-oriented capitalist economies, from early nineteenth century England to late twentieth century China.
Once this is over, the fruits of growth become widely shared.
In fact, Carney’s own charts give grounds for optimism and contradict the lurid headlines around his speech. One is headed “Technology driving labour share down globally”. In other words, the share of wages and salaries in national income has been falling. In the advanced economies, this was some 56 per cent in the mid-1970s and is 51 per cent now. But all the drop took place before the mid-2000s. If anything, the labour share has risen slightly since.
Similarly, inequality has increased over the past 40 years, but almost all the increase took place in the 1980s. Depending which measure we take, it has either stabilised or fallen since 1990.
The future looks more optimistic than either Marx or Carney suggest.
As published in City AM Wednesday 19th April 2018
Image: Car Factory by Jens Mahnke is licensed under CC0 1.0
One of George Osborne’s last acts as chancellor in 2016 was to announce the so-called sugar tax. This came into force last week, in line with the original timetable.
Drinks manufacturers are taxed according to the volume of sugar-sweetened beverages they produce or import. The tax increases with the sugar content.
The aim is to combat the rise in obesity. The rise has been rapid, and there could be worse to come. The UK tends to lag behind the US, where the spread of obesity has been truly dramatic.
There is no doubt goodwill behind the motives of the sugar tax: a desire to save others from potentially harmful actions. Obesity, for example, shortens lives and is a major cause of diabetes.
But the economic rationale is based on the more austere concept of negative externalities.
Externalities are a key topic in both economic theory and practice. They arise whenever someone’s actions create consequences for others which are not reflected in market prices.
An obese person is likely to need expensive healthcare. This generates costs – the “negative” bit – for taxpayers, who are called upon to provide the finance for the public health system to treat the obese (although, of course, the lower life expectancy of the obese may to some extent offset their higher health costs).
It is fashionable in liberal circles to portray the obese as being in some way victims. It is not their fault that they are fat.
In contrast, economics places the responsibility for choices which are made squarely on the individual. It is the individual who acts with purpose and intent in selecting a particular alternative from the ones which are on offer.
It would be just as plausible in theory to assign the tax directly to the obese. Anyone with a Body Mass Index of, say, more than 40 – which is huge – could have to pay for any health costs which arise. In practice, of course, most of them would be unable to afford it.
So will the sugar tax work?
At one level, the answer is yes. Some manufacturers are already reducing the sugar content of drinks, for example, though this may simply switch consumption to brands which retain high sugar content.
Price increases will deter consumption, of course. But there is a large amount of empirical evidence which shows that the immediate impact of any tax like this tends to fall away over a couple of years. The eventual effect is considerably weaker than in the first few months.
A neat recent study by Pierre Dubois and colleagues at the respected Institute for Fiscal Studies offers an even less upbeat view of the efficacy of the tax.
Consumers with high-sugar diets are less sensitive to price changes than people with lower sugar habits. The tax is likely to reduce sugar consumption in the latter group even further, while having little impact on the ones who really need to cut down.
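The IFS finding turns on differences in price elasticity of demand. A sketch with purely illustrative elasticities (assumed for the example, not the IFS estimates) shows why a uniform price rise cuts consumption most where it is least needed:

```python
# Sketch: why a uniform price rise bites least where it is most needed.
# The elasticities below are illustrative assumptions, not IFS estimates.

def consumption_change(price_rise_pct, elasticity):
    """Approximate % change in quantity demanded for a given % price rise."""
    return elasticity * price_rise_pct

price_rise = 10                  # assume the tax raises prices 10 per cent

low_sugar_elasticity = -1.2      # low-sugar consumers: price-sensitive
high_sugar_elasticity = -0.3     # high-sugar consumers: price-insensitive

print(consumption_change(price_rise, low_sugar_elasticity))   # → -12.0
print(consumption_change(price_rise, high_sugar_elasticity))  # → -3.0
```

On these assumed numbers, the already-moderate consumers cut back four times as much as the heavy consumers: the pattern the IFS study identifies.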
Osborne does have things to be proud of, such as succeeding in creating the impression in financial markets that the coalition government was fiscally prudent. But the sugar tax is unlikely to be one of his best remembered initiatives.
As published in City AM Wednesday 11th April 2018
Image: Sodas by Marlith is licensed under CC BY-SA 3.0
Politicians have an irresistible urge to meddle. The latest example is the fanfare orchestrated just before Easter by Chris Grayling, the transport secretary.
He wrote to the Competition and Markets Authority (CMA) to criticise the price of fuel at motorway service stations.
Grayling called for the UK’s three biggest operators – Moto, Welcome Break, and RoadChef – to be investigated. He was concerned that the prices charged at motorway forecourts might exploit the motorist.
In a flash of genius, Grayling pointed out that there is less choice and competition on motorways than on other roads.
There is no question that fuel prices are higher on the motorway, with a recent industry survey showing an average of 137.7p a litre, compared to a UK average of 120.1p.
But we have been here before.
Just five years ago, the CMA (then called the Office of Fair Trading), carried out a similar investigation.
Of course, action had to follow to justify the study, and the Office of Fair Trading duly called for increased information to be supplied to motorists. The government body claimed that drivers were unaware of the prices until they approached a service station, so special signs should let them know in advance.
But a trial run of the signs found that they made no difference at all to behaviour, and the plan was scrapped.
The idea that consumers have insufficient information to make a rational choice is a key theme in a great deal of regulatory activity.
It goes back to the work of Nobel Laureates George Akerlof and Joe Stiglitz in the 1970s on imperfect information. Until then, economic theory had been based on the assumption that consumers had full information about the attributes of the alternative choices available.
But Akerlof and Stiglitz introduced the related idea of “asymmetric information”. Economists love grandiose phrases, but this concept simply means that different agents may have different amounts of information. In the case of fuel prices, the forecourt operator knows the high prices which are charged, but motorists might not.
On this view, regulation is needed to increase information to consumers.
Ironically, economics – the discipline which studies free markets – has provided the intellectual rationale for much of the massive increase in regulatory activity which we have seen in recent decades.
With fuel prices, motorists clearly know that motorway fuel is more expensive. That is why the signs about this had no effect on their actions. They have full information.
Another fundamental concept in economics is preference. People reveal their preferences not by filling in surveys, but by their actions.
Motorists choose to start a long journey knowing they will need fuel en route. They choose to put fuel in at motorway services rather than divert off for cheaper fuel – information which is now readily available on satnavs.
Grayling got some cheap headlines, but basic economic theory shows that his call is unwarranted.
As published in City AM Wednesday 4th April 2018
Image: M25 service station by Arriva436 is licensed under CC BY-SA 3.0
The liquidation of Carillion continues to feature prominently in the news.
Last week, the story was the fees being charged by PwC, the accountancy firm tasked with salvaging money from the wreckage.
It emerged that PwC’s fees, which take priority in terms of being paid over the various creditors and pensioners, amounted to £20.4m for the first eight weeks’ work. The special manager with overall responsibility has a rate of £865 an hour.
The fees were described in parliament as “superhuman”.
We might reasonably ask how these rates are determined. Why are they not half their actual value? The remuneration would still be very substantial. Equally, why are they not double?
The answer from a basic economics textbook would be that it is a matter of supply and demand. The wage – if such a proletarian term is acceptable in these elevated circles – is set at a level which ensures that there are just enough bean counters to carry out the amount of work which exists.
But it is hard to sustain this argument. The Big Four accountancy firms in the UK take on around 5,000 graduates every year. They receive almost 100,000 applications. There is a long process of whittling down. Eventually, nearly 10,000 get to the final stage, an interview with a partner.
Clearly, many graduates have very strong qualifications. But the wage rate is not bid down by this over-supply.
The argument about how prices, in this case the hourly PwC rate, are set goes back a long way in economic theory.
In the decades just before the First World War, two highly accomplished mathematicians who occupied the top chairs in economics, Alfred Marshall at Cambridge and Francis Edgeworth at Oxford, wrangled over the issue.
Appropriately enough in the week following Cambridge’s triumph in both the men’s and women’s boat races, Marshall was the victor at the time. But, to employ another sporting analogy very much in the news, there was a certain amount of ball tampering along the way.
Edgeworth thought that, in most situations, there was an inherent indeterminacy about the price which emerged. He wrote: “it may be said that in pure economics there is only one theorem, but that it is a very difficult one: the theory of bargain”.
Marshall simplified matters dramatically. He assumed that there are so many economic agents in a market that no single one of them can influence the price. This enabled him to draw, in his own best-selling textbook, the supply and demand curve diagrams familiar to generations of students.
But it was a simplification too far. If no one can influence the price, how is it set?
A lot of modern economic theory is about developing Edgeworth’s view that economics is essentially about bargaining. It makes it much more difficult, but more realistic.
There is no inherent economic justification for the hourly rates which the Big Four accountants charge. They have simply got the best of the bargaining process. Companies need to wake up and start to insist on lower fees.
As published in City AM Wednesday 28th March 2018
Image: PWC by Bjørn Erik Pedersen is licensed under CC BY-SA 3.0
Last week’s Spring Statement by chancellor Philip Hammond has led to predictable calls to “abandon austerity”.
With massive hyperbole, Labour accused him of “astounding complacency” in the face of what they claimed to be the worst ever public funding crisis.
The facts are rather different. Far from being squeezed, after allowing for inflation, current spending by the public sector has risen almost eight per cent since the depths of the recession in the middle of 2009.
True, the economy as a whole has grown faster, with GDP now almost 18 per cent up from its low point almost eight years ago. But public spending is up, not down.
The recovery has been driven by the private sector. Companies are investing 33 per cent more on new capital equipment now than in 2009.
Personal consumer spending, the single biggest component of GDP, is up by 15 per cent. So living standards have risen more or less in line with the economy as a whole. The growth rate is slightly less, but this is good news. Resources are moving, albeit slowly, into investment, the foundation of growth in the future.
The myth that austerity prevails in the UK is potentially a dangerous one. The simple fact is that Britain is at full employment. Over three million net new jobs have been created, and the total number of people in work stands at a record 32.1m.
The numbers claiming unemployment benefit amount to just two per cent of the total population aged between 16 and 64.
Of the 650 parliamentary constituencies, there are only 33 where the rate is four per cent or more, and in almost all of these it is between four and five per cent. Of the 84 constituencies in the south east, there are only nine where it is even above the national average of two per cent.
A basic premise of economics is that, at full employment, an increase in government spending financed by extra borrowing will create inflation. The economy will not expand, because existing resources are fully utilised. The stimulus will create excess demand for them, which will bid up prices.
We need not rely on conventional economics for this argument. Those who invoke John Maynard Keynes’ name to support “abandoning austerity” ought to familiarise themselves with the words of the great man himself.
Keynes’ magnum opus, the General Theory of Employment, Interest and Money, was published in 1936, shortly after the deepest recession the western economies have ever seen.
Keynes did indeed recommend extra government spending to boost the economy when unemployment is high. Economists have argued ever since as to what extent, if at all, he was correct. But in a key section of his book – chapter 10, for all you fellow trainspotters out there – Keynes writes: “When full employment is reached, any attempt to increase expenditure further will set up a tendency for prices to rise without limit.”
The chancellor should follow the sound advice of Keynes, and stick with his current plans.