Guess which of the 964 jobs listed in the widely used Occupational Information Network online database is the least susceptible to replacement by artificial intelligence (AI).
The unsurprising answer is that of “massage therapist”.
This is one of the findings of a paper in the latest issue of the American Economic Review by Erik Brynjolfsson and colleagues at MIT’s Sloan School of Management.
But, while this answer might seem obvious, the study itself is a serious and innovative attempt to analyse the potential impact of AI on occupations across the economy.
A key point is that AI technology itself is going through a period of revolutionary progress.
The success of Google’s DeepMind team in defeating the world champion at the immensely complex game of Go received wide publicity.
Unlike the algorithms which vanquished chess some years previously, the latest AlphaGo programme – improved since its annihilation of the Go champion less than two years ago – does not simply rely on pure computing power to outperform humans. The algorithm starts by knowing absolutely nothing about the game. It becomes stronger by playing against itself and learning as it goes along.
In short, it teaches itself, remembering both its mistakes and its successes. This type of algorithm is very new, and is known as deep learning. The programmes automatically improve their performance at a task through experience.
Brynjolfsson and colleagues regard this as so significant that they describe deep learning as a “general purpose technology” (GPT).
GPTs are technologies which become pervasive throughout the economy, improve over time, and generate further innovations which are complementary.
Historically, they are few and far between. Steam and electricity are examples. If they disappeared tomorrow, we would rapidly be driven back to the living standard which existed several centuries ago.
Deep learning will take years – or even several decades – before anything like its full effects are realised. But we will then look back and find that it is just as hard to imagine a world without deep learning as it is a world without electricity.
What will that look like? The authors analyse 2,069 work activities and 18,156 tasks in the 964 occupations. From this, they build “suitability for machine learning” (SML) measures for labour inputs in the US economy. They find that most occupations in most industries have at least some tasks that are SML. Pretty obvious. But few, if any, occupations have all tasks that are SML.
This latter point certainly is surprising – and from it the MIT team derives a positive message: very few jobs can be fully automated using this new technology.
A fundamental shift is needed in the debate about the effects of AI on work. Instead of the common concerns about the full automation of many jobs and pervasive occupational replacement, we should be thinking about the redesign of jobs and reengineering of business processes.
Economics is often described as the dismal science. But Brynjolfsson’s paper certainly provides very positive food for thought.
As published in City AM Wednesday 6th June 2018
Image: Robots by Kai Schreiber is licensed under CC2.0
The governor of the Bank of England, Mark Carney, is up to his usual tricks.
Last week, he claimed in front of the Treasury Committee of the House of Commons that British households are now more than £900 worse off after the vote to leave the EU.
The figure was obtained by comparing a forecast made by the Bank in May 2016 on the assumption of a Remain victory with the situation as it actually is today.
In other words, the so-called evidence cited by the governor consists of the difference between where the economy stands right now, and a forecast made by the Bank two years ago.
It is no exaggeration to say that this has no scientific standing at all.
Economic forecasts are notoriously unreliable. The Survey of Professional Forecasters in the US publishes a 50-year track record of one-year-ahead predictions of the growth of the economy. The correlation between the forecasts and what actually happened is – literally – zero, with no sign of improvement over the five decades. And this is just looking one year ahead, not two.
The UK does not have such an impressive body of evidence to assess forecasting accuracy, but the studies which have been published show that the track record of our economists is no better than that of the Americans.
It is worth pointing out time and again that the Project Fear forecasts – made in May 2016, like the Bank forecast cited by the governor – covered the next six months, not the next two years. Yet they were shown to be completely wrong.
Far from the predicted rise in unemployment of half a million by the end of 2016 on a Leave vote, unemployment has fallen almost continuously ever since, and now is lower than it has been since the mid-1970s.
We might usefully recall one of Carney’s first public pronouncements after taking up the post of governor in July 2013.
Interest rates, he said, would not be raised until unemployment fell from its level of 7.8 per cent to below seven per cent. He stated that this process would take three years. In fact, unemployment dropped to below seven per cent just six months later, at the start of 2014.
Rather than grandstanding about Brexit and currying favour with the global liberal elite, there are more pressing issues to occupy Carney’s time.
The primary concern of the Bank of England should be the stability of the financial system. Yet there has been a worrying rise in the amount of debt in the economy.
Figures from the Bank for International Settlements show that total credit to the private non-financial sector – the debts of households and companies in everyday language – peaked at 196 per cent of GDP in 2008, the year of the crash. This had fallen to 163 per cent by 2015. But it has now risen to 170 per cent.
This is by no means yet another crisis, but this – not Brexit – is where the governor’s mind should be focused.
As published in City AM Wednesday 30th May 2018
Image: Mark Carney by Bank of England is licensed under CC2.0
The new Italian government looks set to cause shock waves across Europe.
The two parties promise mass deportations of immigrants and huge increases in public spending.
Both the social and the economic policies of the Italian coalition clash directly with those of the European Commission, and Germany and France. They represent a decisive break with the consensual approach of the past.
The performance of the UK economy since the financial crisis of the late 2000s has been disappointing. But it has positively boomed in comparison with that of Italy.
Italian GDP, according to the OECD’s database, peaked in the first quarter of 2008. By the spring of 2009, it had collapsed by eight per cent. There was a feeble recovery, before it started to fall again in late 2011. Even now, GDP remains over five per cent below its value of 10 years ago.
It not only looks dramatic – it is dramatic. The failure of the Italian economy to recover for a whole decade breaks all records, not just in Italy itself, but across the western economies as a whole.
Angus Maddison spent many years at the OECD constructing estimates of GDP in the western economies going back to 1870. His database puts the recent performance of the Italian economy squarely in the spotlight.
Some capitalist economies have experienced truly devastating collapses: Austria, for example, when it was overrun by the Red Army in 1945, and Japan when it was subjected to massive nuclear and conventional bombing attacks in the same year.
But leaving the World War years and their immediate aftermath aside, the Maddison database allows us to identify 191 instances of peacetime recession across some 20 western capitalist economies between 1870 and the recent financial crisis – 191 occasions on which GDP (the data is annual) fell for at least a year.
Out of these 191 examples, on 113 occasions GDP bounced back above its previous peak value the following year. Two years after a fall, the peak had been regained no less than 151 times. So most recessions are very short. Capitalism is a very resilient system.
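The arithmetic behind the resilience claim can be made explicit. This sketch simply turns the counts quoted above into shares; it uses the article’s figures, not the underlying Maddison data.

```python
# Counts quoted in the text (peacetime recessions since 1870).
recessions = 191        # instances of GDP falling for at least a year
recovered_1yr = 113     # previous peak regained the following year
recovered_2yr = 151     # previous peak regained within two years

share_1yr = recovered_1yr / recessions
share_2yr = recovered_2yr / recessions

print(f"Recovered within one year:  {share_1yr:.0%}")   # roughly 59%
print(f"Recovered within two years: {share_2yr:.0%}")   # roughly 79%
```

In other words, on these figures nearly six in ten recessions were over within a year, and almost eight in ten within two – the basis for the claim that most recessions are very short.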
The previous longest recession on record across the west as a whole was that of the United States. The collapse during the Great Depression of the early 1930s was so severe that, even with a boom later in the decade, it took until 1939 to recover the peak 1929 level.
That is the context in which we should view Italy’s decade-long recession. It is hardly surprising in the circumstances that the Italian electorate has supported parties which are pledged to overthrow the status quo. If they do what they say they will, the impact will be greater than that of Brexit.
Greece and Portugal are in the same position as Italy, with GDP in both economies still being below pre-crisis levels. But they are small.
The bureaucrats in Brussels, aided by Germany, have allowed Italy to be crushed by the longest peacetime recession in the history of capitalism. No wonder they may now get their comeuppance.
As published in City AM Wednesday 24th May 2018
Image: Vatican Sunset by Giorgio Galeotti is licensed under CC Attribution 4.0
The Windrush scandal still bubbles away.
The bureaucrats at the Home Office are being condemned for their harsh behaviour. But it is scarcely their fault – they are simply reacting in a way entirely compatible with the economic theory of rational choice.
It emerged during the saga of Amber Rudd’s resignation that targets had been set within the Home Office for the number of illegal immigrants to be deported each year.
The relevant officials could have spent a lot of time and effort, for example, tracking down members of eastern European criminal gangs.
Instead, they focused on the much easier task of deporting elderly people who have lived in Britain for decades. Many of these immigrants, understandably, regarded themselves as British and had never bothered to hunt down and fill in the complicated forms which would have guaranteed their residency.
The bureaucrats reacted to targets exactly as rational choice theory predicts, seeking to meet them in a way which minimises the costs and effort to them. The problem lies not with the individuals trying to meet the targets, but with those who set the targets in the first place.
A great deal of the behaviour of the police can be accounted for in the same way.
You could try to tackle knife crime, at personal risk to yourself. But it is much easier and safer to investigate and “solve” hate crimes, such as when Tony Blair himself was interrogated for allegedly abusing the Welsh.
An interesting new Princeton University book by the American historian Jerry Z Muller provides many such examples from the United States. The Tyranny of Metrics is about the unintended consequences which often follow when targets are set to measure performance.
More precisely, it is about the adverse effects which can be created when standardised measures of performance are substituted for personal judgement based on experience.
Unsurprisingly to economists, if the rewards which agents receive become dependent on meeting quantitative targets, they will game the system to try and achieve them.
Muller’s book was inspired by the excellent TV series The Wire, set in Baltimore and based on real life. A central theme was the baleful influence which targets to cut crime had on the police department. They spent a lot of time arresting low-level drug dealers instead of going for the top villains.
A key point which Muller makes is that a decline in trust leads to an increased demand for measured accountability. A lack of trust can work two ways, with both the governors and the governed becoming suspicious of each other’s abilities or motives.
For example, it was under Margaret Thatcher that the first of a disastrous series of exam performance targets was introduced into British schools, because teachers were seen as being hopelessly and uselessly left-wing.
It is easy to introduce a target. It is much harder to generate a culture within an organisation in which everyone is trusted to do a good job. But, ultimately, it is the latter which works.
As published in City AM Wednesday 16th May 2018
Image: Home Office by Steve Cadman is licensed under CC BY-SA 2.0
The Trojans had to beware of Greeks bearing gifts.
In the same way, politicians need to be suspicious of petitions signed by economists.
The vast majority of the UK economics profession backed Project Fear, which predicted a rise in unemployment of half a million by the end of 2016. Instead, unemployment has fallen almost continuously since the Leave vote in June of that year.
In 1981, 364 economists signed a letter urging Margaret Thatcher’s chancellor, Geoffrey Howe, to end austerity. No sooner was the ink dry than the economy started to boom.
The latest petition, on the face of it at least, should be taken more seriously. Over 1,000 American economists, including 14 Nobel Prize winners, have written to President Donald Trump. His trade policies, they claim, repeat the mistakes of the 1930s and threaten to plunge the world into another Great Depression.
It is a big claim to make. The financial crisis recession of the late 2000s was a mere blip by comparison – GDP in the US fell by four per cent. In the early 1930s, it dropped by over 20 per cent.
These economists cite the Smoot-Hawley Tariff Act of June 1930 as being a major cause of the massive recession. Output was already falling sharply in America. The claim is that the Act exacerbated the problem. It increased tariffs on over 20,000 types of products imported into the US, and was followed by a string of retaliatory measures across the world.
But the 1,140 economists – at the last count – who have signed the petition ignore a very well established result in economic theory. This is the so-called “theory of the second best”, published by Richard Lipsey and Kelvin Lancaster in 1956.
The economies of the west owe much of their success to the fact that they are market based. But they are not entirely the free market ideal of the economics textbooks. It might be thought that making them a bit more free market would make them even better. Conversely, taking them further away from the ideal, by imposing a trade tariff for example, would make things worse.
Lipsey and Lancaster showed that in general this result could not be demonstrated theoretically. It might be true. But only empirical evidence could show whether it was or not.
The petitioners should also look at a paper just published in the American Economic Association’s prestigious Journal of Economic Perspectives (JEP). Arnaud Costinot and Andrés Rodríguez-Clare, of MIT and the University of California, Berkeley respectively, pose the question: what if America abolished all trade? Not just imposed a tariff, but no trade at all.
Their detailed empirical evidence suggests that the effects are rather small. GDP would be between two and eight per cent less. A fall, it is true, but hardly one to generate such a furore over a policy, not of abolishing trade, but just making it that bit more expensive.
The JEP paper also shows, not surprisingly, that trade tends to widen inequality. The poor might lose out, even if the economy overall benefits.
President Trump seems to grasp the political importance of this.
As published in City AM Wednesday 9th May 2018
Image: Great Depression via Wikimedia Commons is licensed under CC0
British Gas is putting up the price of its dual fuel tariff by an average of 5.5 per cent at the end of this month. EDF, whose standard tariff is already one of the most expensive, will raise it by a further 1.4 per cent next month.
In the longer run, the widespread hope is that we will be saved by alternative energy sources, such as solar and wind. And indeed, these have become much more efficient because of major technological advances.
For example, in the US, since 2009 the prices of solar panels and wind turbines per watt of generating capacity have fallen by a massive 75 and 50 per cent respectively.
This must surely be an unequivocal Good Thing. But, like many things in economics, there are unintended consequences.
A greater reliance on solar and wind power has led in general to higher, not lower, electricity prices.
Michael Shellenberger of the California-based Environmental Progress think tank sets out the evidence in a couple of fascinating columns in Forbes magazine.
In 2017, the share of electricity coming from solar and wind was 26 per cent in Germany and as much as 53 per cent in Denmark. Yet these two countries have the most expensive electricity in Europe. In 2017, Germany paid €24.3bn above the market price of electricity through its renewable energy feed-in tariffs.
Using evidence from across the individual states in America, Shellenberger shows a very strong and positive correlation between increases in the importance of the two alternative energy sources, and the rise in electricity prices over the 2009-17 period.
For the US as a whole, the share of solar and wind rose six percentage points, from just two to eight per cent, and electricity prices went up by seven per cent. In North Dakota, for example, the share increased by 18 percentage points, and electricity prices by 40 per cent. In California, the figures are 20 percentage points and 22 per cent respectively. These are just two examples to illustrate the point.
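For illustration only, here is how a correlation of this kind is computed. The three data points are the ones quoted above (the US average plus the two named states); Shellenberger’s actual result rests on data for all the states, so this tiny sample merely shows the mechanics, not the evidence.

```python
import statistics

# (renewable share increase in percentage points,
#  electricity price rise in per cent), 2009-17, as quoted in the text.
observations = {
    "US average":   (6, 7),
    "North Dakota": (18, 40),
    "California":   (20, 22),
}

x = [share for share, price in observations.values()]
y = [price for share, price in observations.values()]

# Pearson correlation coefficient, computed by hand.
mx, my = statistics.mean(x), statistics.mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))
print(f"r = {r:.2f}")
```

Even on these three points the coefficient comes out strongly positive; with the full state-level dataset, the relationship is what underpins the claim in the column.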
The correlations have not simply been discovered after the event. They were predicted, as Shellenberger points out, in a 2013 paper in Energy Policy by the young German economist Lion Hirth.
The basic problem is the fundamentally unreliable nature of both solar and wind. They produce too much energy when societies do not need it, and not enough when they do. So other forms of electricity production need to be kept ready and idle, so they can be switched on when the sun stops shining or the wind stops blowing.
The major exception to the trend of rising prices in America is Texas, which has exploited fracking on a large scale. Natural gas can be turned on and off very easily. Over the 2009-17 period, electricity prices fell by 14 per cent.
Yet here in the UK, Scotland has already banned fracking, and Labour wants to stop it across the rest of the country. Another example of ideology triumphing over empirical evidence.
As published in City AM Wednesday 3rd May 2018
The hostility towards the virtual monopolies enjoyed by tech giants such as Google and Facebook reveals some strange bedfellows.
The European Commission is well known for its enthusiasm for regulation. No surprise, then, that last year the Commission fined Google €2.4bn – billion! – for giving its own services preferential treatment in search results. No surprise either that, last month, the European commissioner for competition, Margrethe Vestager, said that the threat to split Google into smaller companies was being “kept open”.
What was perhaps more surprising was that last summer Steve Bannon, when he was still in the White House as President Trump’s top adviser, argued that Google and Facebook have become so dominant and essential that they should be regulated like public utilities.
It is clear that lots of politicians, both in Europe and America, dislike the dramatic changes brought about by the internet. What is equally clear is that many of them have an underdeveloped understanding of what we might term “cyber society”.
In his hearing before the US Senate committee, Mark Zuckerberg was asked how Facebook made its money – a point which should be obvious to anyone who has ever used it. Another senator asked him whether Twitter was “the same as what you do”.
In many ways, this is understandable. Revolutionary technologies take time for their implications both to emerge in full and to be grasped. William Huskisson, an MP and former cabinet minister, was famously run over and killed by Stephenson’s Rocket at the opening of the Liverpool and Manchester Railway in 1830. He simply did not appreciate how fast the miraculous new machine went.
Cyber society also creates fundamental challenges for economic theory. Consumers are assumed to be able to gather and process sufficient information to make a rational choice among the available alternatives.
In the context of, say, the supermarket, empirical evidence suggests this is a reasonable assumption to make. But I recently googled the term “mobile phones”. I received “about 155m” results. It is simply not possible to process the information from more than a minuscule fraction of these.
Even as far back as the 1950s, the Nobel Laureate Herbert Simon argued that the same point applied in many situations: the model of rational choice had to be “replaced”. In essence, Simon argued that a good decision rule in such complex situations was to choose things which were already popular.
This sets up a positive feedback loop. The more popular your product is, the more popular it will become, simply because it is already popular.
This means that the basic market structure encountered in cyber society is monopoly. It is the opposite extreme from the economics textbook, where the core model is one of a large number of small firms.
The most effective way of undermining these monopolies is by encouraging even more innovation – the exact opposite of the top-heavy regulation of the European Commission. Regulation of market structure worked with the American oil giants in the early 1900s. The twenty-first century demands a different approach.
As published in City AM Wednesday 26th April 2018
Mark Carney, the governor of the Bank of England, hit the headlines at the weekend, claiming that Marxism could once again become a prominent political force in the west.
Automation, it seems, may not just destroy millions of jobs. For all except a privileged minority of high-tech workers, the collapse in the demand for labour could hold down living standards for decades. In such a climate, Communism may seem an attractive political option.
Karl Marx as an economist is a bit of a curate’s egg: good in parts. In the late eighteenth and early nineteenth centuries, it was obvious that the system of factory production was dramatically different to anything which had ever existed, but it was thought that it might disappear just as suddenly as it had emerged.
Marx was the first major economist to see that the accumulation of capital in factories represented a new, permanent structure of the economy: capitalism. He developed a theory of the business cycle, the short-term fluctuations in economic growth, which is much more persuasive than the equilibrium-based theories which dominate academic macroeconomics today.
But he was completely wrong on a fundamental issue. Marx thought, correctly, that the build-up of capital and the advance of technology would create long-term growth in the economy. However, he believed that the capitalist class would expropriate all the gains. Wages would remain close to subsistence levels – the “immiseration of the working class” as he called it.
In fact, living standards have boomed for everyone in the west since the mid-nineteenth century. Leisure hours have increased dramatically and, far from being sent up chimneys at the age of three, young people today do not enter the labour force until at least 18.
Marx made the very frequent forecasting mistake of simply extrapolating the trend of the recent past.
In the early decades of the Industrial Revolution, just before he wrote, real wages were indeed held down, as the charts in Carney’s speech show. The benefits of growth accrued to those who owned the new machines. Marxists call this the phase of “primitive accumulation”.
But such a phase has characterised every single instance of an economy which enters into the sustained economic growth of the market-oriented capitalist economies, from early nineteenth century England to late twentieth century China.
Once this is over, the fruits of growth become widely shared.
In fact, Carney’s own charts give grounds for optimism and contradict the lurid headlines around his speech. One is headed “Technology driving labour share down globally”. In other words, the share of wages and salaries in national income has been falling. In the advanced economies, it was some 56 per cent in the mid-1970s and is 51 per cent now. But all of the drop took place before the mid-2000s. If anything, the labour share has risen slightly since.
Similarly, inequality has increased over the past 40 years, but almost all the increase took place in the 1980s. Depending which measure we take, it has either stabilised or fallen since 1990.
The future looks more optimistic than either Marx or Carney suggest.
As published in City AM Wednesday 19th April 2018
Image: Car Factory by Jens Mahnke is licensed under CC0
One of George Osborne’s last acts as chancellor in 2016 was to announce the so-called sugar tax. This came into force last week, in line with the original timetable.
Drinks manufacturers are taxed according to the volume of sugar-sweetened beverages they produce or import. The tax increases with the sugar content.
The aim is to combat the rise in obesity. The rise has been rapid, and there could be worse to come. The UK tends to lag behind the US, where the spread of obesity has been truly dramatic.
There is no doubt goodwill behind the motives of the sugar tax: a desire to save others from potentially harmful actions. Obesity, for example, shortens lives and is a major cause of diabetes.
But the economic rationale is based on the more austere concept of negative externalities.
Externalities are a key topic in both economic theory and practice. They arise whenever someone’s actions create consequences for others.
An obese person is likely to need expensive healthcare. This generates costs – the “negative” bit – for taxpayers, who are called upon to provide the finance for the public health system to treat the obese (although, of course, the lower life expectancy of the obese may to some extent offset their higher health costs).
It is fashionable in liberal circles to portray the obese as being in some way victims. It is not their fault that they are fat.
In contrast, economics places the responsibility for choices which are made squarely on the individual. It is the individual who acts with purpose and intent in selecting a particular alternative from the ones which are on offer.
It would be just as plausible in theory to assign the tax directly to the obese. Anyone with a Body Mass Index of, say, more than 40 – which is huge – could have to pay for any health costs which arise. In practice, of course, most of them would be unable to afford it.
So will the sugar tax work?
At one level, the answer is yes. Some manufacturers are already reducing the sugar content of drinks, for example, though this may simply switch consumption to brands which retain high sugar content.
Price increases will deter consumption, of course. But there is a large amount of empirical evidence which shows that the immediate impact of any tax like this tends to fall away over a couple of years. The eventual effect is considerably weaker than in the first few months.
A neat recent study by Pierre Dubois and colleagues at the respected Institute for Fiscal Studies offers an even less upbeat view of the efficacy of the tax.
Consumers with high-sugar diets are less sensitive to price changes than people with lower sugar habits. The tax is likely to reduce sugar consumption in the latter group even further, while having little impact on the people who really need to cut back.
Osborne does have things to be proud of, such as succeeding in creating the impression in financial markets that the coalition government was fiscally prudent. But the sugar tax is unlikely to be one of his best remembered initiatives.
As published in City AM Wednesday 11th April 2018
Image: Sodas by Marlith is licensed under CC by ShareAlike 3.0
Politicians have an irresistible urge to meddle. The latest example is the fanfare orchestrated just before Easter by Chris Grayling, the transport secretary.
He wrote to the Competition and Markets Authority (CMA) to criticise the price of fuel at motorway service stations.
Grayling called for the UK’s three biggest operators – Moto, Welcome Break, and RoadChef – to be investigated. He was concerned that the prices charged at motorway forecourts might exploit the motorist.
In a flash of genius, Grayling pointed out that there is less choice and competition on motorways than on other roads.
There is no question that fuel prices are higher on the motorway, with a recent industry survey showing an average of 137.7p a litre, compared to a UK average of 120.1p.
But we have been here before.
Just five years ago, the CMA (then called the Office of Fair Trading), carried out a similar investigation.
Of course, action had to follow to justify the study, and the Office of Fair Trading duly called for increased information to be supplied to motorists. The government body claimed that drivers were unaware of the prices until they approached a service station, so special signs should let them know in advance.
But a trial run of the signs found that they made no difference at all to behaviour, and the plan was scrapped.
The idea that consumers have insufficient information to make a rational choice is a key theme in a great deal of regulatory activity.
It goes back to the work of Nobel Laureates George Akerlof and Joe Stiglitz in the 1970s on imperfect information. Until then, economic theory had been based on the assumption that consumers had full information about the attributes of the alternative choices available.
But Akerlof and Stiglitz introduced the related idea of “asymmetric information”. Economists love grandiose phrases, but this concept simply means that different agents may have different amounts of information. In the case of fuel prices, the forecourt operator knows the high prices which are charged, but motorists might not.
On this view, regulation is needed to increase information to consumers.
Ironically, economics – the discipline which studies free markets – has provided the intellectual rationale for much of the massive increase in regulatory activity which we have seen in recent decades.
With fuel prices, motorists clearly know that motorway fuel is more expensive. That is why the signs about this had no effect on their actions. They have full information.
Another fundamental concept in economics is preference. People reveal their preferences not by filling in surveys, but by their actions.
Motorists choose to start a long journey knowing they will need fuel en route. They choose to put fuel in at motorway services rather than divert off for cheaper fuel – information which is now readily available on satnavs.
Grayling got some cheap headlines, but basic economic theory shows that his call is unwarranted.