
AI has not yet spurred a productivity boom, but just you wait

Nobel laureate Bob Solow pronounced 30 years ago that “you can see the computer age everywhere but in the productivity statistics”.

At the start of the 1980s, the world entered the digital age. Fax machines transformed communications. The introduction of personal computers made high-powered computing available to all.

But it took time to work out how to make best use of these major changes in technology. In the 1980s, output per worker in the US grew by only 1.4 per cent a year. But between 1995 and 2005, growth accelerated to 2.1 per cent a year.

We are on the cusp of another acceleration in productivity growth, due to artificial intelligence (AI).

Even the mention of AI strikes fear into many hearts. Surely this will cause massive job losses? That is one way to boost productivity, but it’s hardly desirable.

In fact, to date most of the applications of AI in companies have not replaced workers.

Rather, they have supplemented what employees do, enabling them to be more productive.

Two recent pieces in the Harvard Business Review provide firm evidence for this. Satya Ramaswamy found that the most common use of AI and data analytics was in back-office functions, particularly IT, finance and accounting, where the processes were already at least partly automated.

Thomas H Davenport and Rajeev Ronanki came to the same conclusion in a detailed survey of 152 companies. AI was used, for example, to read contracts or to extract information from emails to update customer contact information or changes to orders.

Developments within the techniques of AI itself suggest that practical applications of the concept are about to spread much more widely.

There was a surge of research interest in AI in the 1980s and 1990s. It did not lead to much.

Essentially, in this phase of development, people tried to get machines to think like humans. If you wanted a translation, for example, your algorithm had to try to learn spelling, the correct use of grammar, and so on. But this proved too hard.

The real breakthrough came during the 2000s. Researchers realised that algorithms were much better than humans at one particular task: namely, matching patterns.

To develop a good translator, you give the machine some documents in English, say, and the same ones translated into French. The algorithm learns how to match the patterns. It does not know any grammar. It does not even know that it is “reading” English and French. So at one level, it is stupid, not intelligent. But it is exceptionally good at matching up the patterns.

In the jargon, this is “supervised machine learning”.
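The pattern-matching idea can be illustrated with a toy word-aligner. Everything below is invented for illustration: three tiny sentence pairs stand in for the parallel documents, and a simple co-occurrence score stands in for the statistical and neural models real translation systems use. The point is that the code learns word correspondences without being told any grammar at all.

```python
from collections import Counter, defaultdict

# Parallel training pairs: (English sentence, French translation).
# Toy data invented for illustration.
pairs = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

cooc = defaultdict(Counter)   # cooc[english][french] = co-occurrence count
en_count = Counter()          # how often each English word appears
fr_count = Counter()          # how often each French word appears

for en, fr in pairs:
    en_count.update(en.split())
    fr_count.update(fr.split())
    for e in en.split():
        for f in fr.split():
            cooc[e][f] += 1

def translate_word(word):
    # Dice association score: favours French words that appear
    # exactly when `word` does, down-weighting common words like "le".
    return max(cooc[word],
               key=lambda f: 2 * cooc[word][f] / (en_count[word] + fr_count[f]))

print(translate_word("cat"))     # -> chat
print(translate_word("sleeps"))  # -> dort
```

The aligner never learns what a noun or a verb is; it simply finds the French word whose pattern of appearances best matches the English one.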

At the same time, a new study in the MIT Technology Review shows that purely scientific advances in this field are slowing down markedly. In other words, in the space of a single decade, this has become a mature analytical technology – one that can be used with confidence in practical applications, in the knowledge that it is unlikely to be made obsolete by new developments.

Productivity looks set to boom in the 2020s.

As published in City AM Wednesday 30th January 2019
Image: AI via vpnusrus.com by Mike MacKenzie under CC BY 2.0

Artificial intelligence will dominate every aspect of our lives, but it won’t replace us

Guess which of the 964 jobs listed in the widely used Occupational Information Network online database is the least susceptible to replacement by artificial intelligence (AI).

The unsurprising answer is that of “massage therapist”.

This is one of the findings of a paper in the latest issue of the American Economic Review by Erik Brynjolfsson and colleagues at MIT’s Sloan School of Management.

But, while this answer might seem obvious, the study itself is a serious and innovative attempt to analyse the potential impact of AI on occupations across the economy.

A key point is that AI technology itself is going through a period of revolutionary progress.

The success of Google’s DeepMind team in defeating the world champion at the immensely complex game of Go received wide publicity.

Unlike the algorithms which vanquished chess some years previously, the latest AlphaGo programme – improved since its annihilation of the Go champion less than two years ago – does not simply rely on pure computing power to outperform humans. The algorithm starts by knowing absolutely nothing about the game. It becomes stronger by playing against itself and learning as it goes along.

In short, it teaches itself, remembering both its mistakes and its successes. This type of algorithm is very new, and is known as deep learning. The programmes automatically improve their performance at a task through experience.
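The self-play idea can be sketched in miniature, using the far simpler game of Nim (a pile of stones, take one or two per turn, whoever takes the last stone wins) and a plain running-average update in place of the deep neural networks AlphaGo uses. The game, parameters and update rule here are all chosen for illustration; the agent starts knowing nothing and improves only by playing against itself.

```python
import random

random.seed(0)
PILE = 10
value = {}  # value[(pile, take)] = running estimate that this move wins

def choose(pile, explore=0.2):
    """Pick a move: occasionally explore at random, otherwise exploit."""
    moves = [t for t in (1, 2) if t <= pile]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda t: value.get((pile, t), 0.5))

for _ in range(20000):          # the agent plays itself repeatedly
    pile, history, player = PILE, [], 0
    while pile > 0:
        take = choose(pile)
        history.append((player, pile, take))
        pile -= take
        player = 1 - player
    winner = 1 - player          # whoever took the last stone won
    for p, s, t in history:      # remember both mistakes and successes
        target = 1.0 if p == winner else 0.0
        old = value.get((s, t), 0.5)
        value[(s, t)] = old + 0.1 * (target - old)

# Greedy (no-exploration) first move after training.
print(choose(PILE, explore=0))
```

After enough games the move values converge: taking the last stone is always rated near 1, and moves that hand the opponent a winning position sink towards 0, even though no strategy was ever programmed in.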

Brynjolfsson and colleagues regard this as so significant that they describe deep learning as a “general purpose technology” (GPT).

GPTs are technologies which become pervasive throughout the economy, improve over time, and generate further innovations which are complementary.

Historically, they are few and far between. Steam and electricity are examples. If they disappeared tomorrow, we would rapidly be driven back to the living standards of several centuries ago.

Deep learning will take years – or even several decades – before anything like its full effects are realised. But we will then look back and find that it is just as hard to imagine a world without deep learning as it is a world without electricity.

What will that look like? The authors analyse 2,069 work activities and 18,156 tasks in the 964 occupations. From this, they build “suitability for machine learning” (SML) measures for labour inputs in the US economy. They find that most occupations in most industries have at least some tasks that are SML. Pretty obvious. But few, if any, occupations have all tasks that are SML.
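The task-level aggregation idea can be sketched as follows. The occupations, tasks, scores and threshold below are all invented for illustration; the paper’s actual SML rubric is far more detailed. The point is that suitability is judged task by task, so an occupation can contain highly automatable tasks without being fully automatable.

```python
# Hypothetical task-level SML scores (0 = unsuitable for machine
# learning, 1 = highly suitable). All figures invented for illustration.
SML_THRESHOLD = 0.7  # assumed cut-off for "suitable for ML"

occupations = {
    "accountant": {"reconcile ledgers": 0.9, "advise clients": 0.3,
                   "extract invoice data": 0.95},
    "massage therapist": {"treat clients": 0.1, "schedule bookings": 0.8},
}

for name, tasks in occupations.items():
    scores = list(tasks.values())
    some_sml = any(s >= SML_THRESHOLD for s in scores)  # any task automatable?
    all_sml = all(s >= SML_THRESHOLD for s in scores)   # whole job automatable?
    mean = sum(scores) / len(scores)
    print(f"{name}: mean SML {mean:.2f}, "
          f"some tasks SML: {some_sml}, fully automatable: {all_sml}")
```

Run on data like this, both occupations show some SML tasks but neither shows all tasks SML — the pattern the MIT team found across the real economy.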

This latter point certainly is surprising – and from it the MIT team derives a positive message: very few jobs can be fully automated using this new technology.

A fundamental shift is needed in the debate about the effects of AI on work. Instead of the common concerns about the full automation of many jobs and pervasive occupational replacement, we should be thinking about the redesign of jobs and reengineering of business processes.

Economics is often described as the dismal science. But Brynjolfsson’s paper certainly provides very positive food for thought.

As published in City AM Wednesday 6th June 2018

Image: Robots by Kai Schreiber is licensed under CC 2.0