
A ray of light in these dark days: Living standards have risen far more than we think


The media seems full of gloom at the moment. Chaos over Brexit, the crisis over Saudi Arabia, potential nuclear escalation between the US and Russia – you name it, people are worried about it.

A ray of light is shone – an apt phrase, as you will see – by the work of Bill Nordhaus, the Yale economist who shared this year’s Nobel prize in economics with Paul Romer.

Over the past two decades or so, Nordhaus has worked mainly on integrating climate change into macroeconomic models, and was awarded the accolade for this research. He is no knee-jerk lefty in this respect. For example, he was a prominent critic of Nick Stern’s report on climate change, which was commissioned by Gordon Brown.

But in my view, Nordhaus should have been awarded the Nobel prize years ago for his brilliant work on measuring how well-off we all are.

The conventional measure of GDP per capita is widely criticised these days. But instead of just whinging from the sidelines about how economics is wicked and useless – sadly a common feature in modern critiques – Nordhaus actually tried to do something about it.

In 1972, he and James Tobin (another future Nobel laureate) developed the Measure of Economic Welfare. The two economists took GDP as their starting point. They adjusted it to include, for example, an assessment of the value of leisure time and the amount of unpaid work in an economy.

Taking these factors into account means we are better off than the conventional GDP measure suggests.

Nordhaus’s most dramatic paper, published in 1996, is on the seemingly obscure topic of the history of lighting. He analysed lighting over a vast time span, from the first sources of artificial light – the fires used by humanity around one million years ago – to the modern fluorescent bulb.

The focus of the paper was not the technology as such, but whether the standard ways of measuring the price of lighting captured the massive improvements in quality which have taken place, particularly in the twentieth century.

Nordhaus concludes that the traditional price indexes vastly overstate the rise in the price of lighting over the last two centuries, because they fail to capture the improvements in quality. The true rise in living standards has consequently been significantly understated.

The magnitude of the difference is vast. Nordhaus estimates that the price of light, measured in the conventional way, rose by a factor of between 900 and 1,600 more than the true, quality-adjusted price.

Bodies such as the Office for National Statistics receive information about the economy in current prices. If the value of output in any particular sector has increased, a key task for them is to decide how much of that is due to a rise in prices and how much to a genuine increase in output.

Rapid quality change means that the conventional ways of doing this simply cannot cope. Price rises are overstated, and in consequence “real” changes in output and living standards are understated.
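To make the statistician’s problem concrete, here is a minimal sketch in Python. The numbers are purely illustrative – not official statistics or Nordhaus’s own estimates. The point is simply that any quality improvement missed by the price index shows up as understated real growth.

```python
# Minimal sketch: splitting growth in nominal spending into price and volume.
# All figures are illustrative.

nominal_growth = 0.06        # observed rise in current-price spending on a good
measured_price_rise = 0.04   # price rise recorded by the conventional index

# Conventional decomposition (approximate): real growth = nominal - price
real_growth_conventional = nominal_growth - measured_price_rise

# Suppose quality improved by 3% but the index missed it entirely.
# The true, quality-adjusted price rise is then lower, and real growth higher.
missed_quality_gain = 0.03
true_price_rise = measured_price_rise - missed_quality_gain

real_growth_adjusted = nominal_growth - true_price_rise

print(f"Real growth, conventional index: {real_growth_conventional:.1%}")  # 2.0%
print(f"Real growth, quality-adjusted:   {real_growth_adjusted:.1%}")      # 5.0%
```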

The implication of the apparently esoteric work Nordhaus did on lighting is that modern technology such as the internet has increased living standards far more than the official statistics indicate. Finally some news to be cheerful about.

As published in City AM Wednesday 25th October 2018

Image: Lightbulb by lenavasilevs via Pixabay is licensed under CC0 1.0 Universal

Can we innovate better outside the EU? Economic lessons from the Nobel prize winner


Gordon Brown’s time as chancellor will be remembered for many things.

A sense of humour would be conspicuously absent from this list.

But he provoked a great deal of mirth unintentionally in a speech shortly before the 1997 General Election on the theme of “post-neoclassical endogenous growth theory”.

Perhaps the last laugh is with Brown. The person who invented the concept, the New York University professor Paul Romer, is a joint recipient of the 2018 Nobel prize for his work in this area.

The standard economic theory of growth was set out over 60 years ago in a brilliant paper by the MIT economist Bob Solow.

Solow’s theory was not concerned with the short-run fluctuations in GDP growth over the course of the business cycle. He set up a framework for thinking about what determines growth in the longer run.

Solow argued that the growth in output was related to the growth of inputs of labour and capital into the productive process.

This seems obvious. But there was an extra ingredient: innovation.

This embraces a wide range of concepts, from becoming more efficient at producing what you already do, to major scientific breakthroughs.

Economists quickly used Solow’s model to estimate empirically what was really driving economic growth. In western economies, the answer was almost always the same. The amounts of labour and capital used had risen, but nowhere near enough to account for how much growth had taken place.
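The exercise is known as growth accounting. Output growth is attributed to the growth of inputs, weighted by their shares of income, and the unexplained remainder – the “Solow residual” – is taken as a proxy for innovation. A minimal sketch, with illustrative figures and the conventional assumption that capital’s share of income is about a third:

```python
# Growth accounting sketch: computing the "Solow residual".
# All figures are illustrative, not estimates for any actual economy.

g_output = 0.030    # annual growth of GDP
g_capital = 0.040   # growth of the capital stock
g_labour = 0.010    # growth of labour input
alpha = 0.33        # capital's share of income (a conventional assumption)

# Output growth explained by input growth, weighted by income shares
explained = alpha * g_capital + (1 - alpha) * g_labour

# The unexplained remainder is attributed to innovation
solow_residual = g_output - explained

print(f"Explained by inputs: {explained:.2%}")       # ~1.99%
print(f"Solow residual:      {solow_residual:.2%}")  # ~1.01%
```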

So the key factor in economic growth in the longer run is the amount of innovation which is carried out.

This insight is directly relevant to the debate over Brexit. Over a 10- or 20-year horizon, the key question is not the terms under which we leave – it is whether we will be able to innovate more effectively in or out of the EU.

The basic shortcoming of the approach is that innovation itself is not explained by Solow’s model. Innovation is, in the jargon, “exogenous”. In other words, it is determined externally to the model.

This is where Romer enters the stage. His seminal paper in the Journal of Political Economy in 1986 is full of heavy-duty maths. The crucial difference with Solow is that the rate of innovation is determined within the theoretical model itself – hence the phrase “endogenous” – by profit-maximising firms.

Physical capital such as machinery, warehouses, and roads plays a role in both the Solow and Romer theories of growth. But Romer introduced the key concept of knowledge as the basic form of capital.

Policymakers across the west in the past two decades have been obsessed by the “knowledge economy”. This is not, as Tony Blair and many others believed, simply a matter of sending more and more people to university. It is about how to encourage innovation.

Both the Solow and the Romer models are highly abstract – Solow, for example, began his article with the phrase “all theory depends on assumptions which are not quite true”. But both have been highly influential with policymakers, and illustrate the vital economic importance of ideas.

As published in City AM Wednesday 18th October 2018

Image: Gordon Brown by World Economic Forum via Wikimedia is licensed under CC BY-SA 2.0

Meet the engineers of economic theory: Market design has become a full-time job


What does someone with the job title of “chief economist” actually do?

The best-known in the UK is probably Andy Haldane at the Bank of England, but his role is not typical. So what do the others do?

Nobel laureate Alvin Roth’s paper in the latest issue of the American Economic Review describes the rapid evolution of the role. Traditionally, the main activity of chief economists in commercial companies, banks, and investment firms was macroeconomic forecasting – inflation, GDP growth – with perhaps some specific market commentary on the side.

These still exist, though there are fewer of them. And they are a cost to the business, rather than a revenue generator.

But the role is changing. Now, the major tech and internet companies employ chief economists – Airbnb, Facebook, Microsoft, and Google all have them. And the content of the work is completely different.

As Roth writes, “market design has opened up new ways for economists to earn a living”. Instead of being a cost centre, this new breed of chief economist and their teams make money for the company by designing the marketplace it hosts.

For example, in the short time between you clicking on a link and the page appearing on your screen, auctions have taken place.

Google search auctions determine which ads to show for each term that someone searches for. As Roth puts it, these are auctions for “eyeballs” – for attention. Auctions for banner ads on websites may involve bids based on the cookies that reveal data about the previous web activity of the eyeballs being auctioned.

These auctions have to be very fast and very efficient. The design of such markets at the hands of chief economists has become a lucrative business in its own right.
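The auctions Roth describes are, broadly speaking, built on second-price logic: the winner pays roughly what the runner-up bid, which encourages bidders to offer what the slot is really worth to them. Here is a minimal single-slot sketch in Python – an illustration of the principle, not any company’s actual mechanism, which would involve multiple slots, quality scores, and much else.

```python
# Minimal second-price (Vickrey) auction for a single ad slot.
# Real search-ad auctions are far more elaborate; this shows the core idea.

def second_price_auction(bids):
    """bids: dict of bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0  # pay the runner-up's bid
    return winner, price

bids = {"advertiser_a": 2.50, "advertiser_b": 1.80, "advertiser_c": 0.90}
winner, price = second_price_auction(bids)
print(f"{winner} wins the slot and pays {price:.2f}")  # advertiser_a pays 1.80
```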

Roth’s paper of course covers much wider ground. As its title, “Marketplaces, Markets, and Market Design”, suggests, it illustrates the important advances economics has made recently in understanding how markets actually work.

Markets such as those for commodities conform closely to the simplified ideal presented in the basic textbooks. They allow trade to be conducted with relatively anonymous counterparties, with prices doing all the work of deciding who gets what.

But even here the process by which prices are set needs to be specified, as does the definition of what the commodity actually is. The Chicago Board of Trade, for example, deals in commodities like US soft red winter wheat, not just “wheat”. And there need to be people whose job it is to specify such things.

Many of the features of price changes in financial markets, such as the fact that very large changes occur far more often than the standard statistical approach suggests, appear to arise from the price-setting mechanism, the double auction. We do not yet understand why. But according to Roth, “practical market design must often proceed in advance of reliable theory”.
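The “standard approach” here is the assumption that price changes follow a normal distribution. A quick simulation – purely illustrative – shows how much more often extreme moves turn up under a fat-tailed alternative such as a Student-t distribution:

```python
# Illustrative comparison: how often do "4-sigma" moves occur?
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.standard_normal(n)
fat_tailed = rng.standard_t(df=3, size=n)  # a fat-tailed alternative
fat_tailed /= fat_tailed.std()             # rescale to unit variance

threshold = 4.0
print((np.abs(normal) > threshold).mean())      # ~0.00006
print((np.abs(fat_tailed) > threshold).mean())  # far higher: fat tails
```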

John Maynard Keynes looked forward to the day when economists would be regarded in the same way as dentists, people doing a useful practical job. But Roth sees the recent developments in market design as making economists more like engineers. Surely a good thing, given that their bridges usually stay up.

As published in City AM Wednesday 4th July 2018

Image: I’m Feeling Lucky by Christopher is licensed under CC BY 2.0

Economics is doing just fine, thank you, without adopting psychology’s blunders


Criticisms of economics have abounded since the financial crisis.

Even Nobel Prize winners like George Akerlof of Berkeley have got in on the act. A key demand is for economics to adopt a more recognisably human portrait of behaviour in its theories than the rational calculating machine of the textbooks.

Psychology rather than pure economic theory is needed, apparently.

The simple fact is that economics has moved on a great deal in recent years. Much of the success of behavioural economics is based upon incorporating insights from psychology. But economists have done this in their own way.

As top behavioural economist and Nobel laureate Richard Thaler notes in his book Misbehaving: “behavioural economics has turned out to be primarily a field in which economists read the work of psychologists and then go about their business of doing research independently”.

This approach turns out to have been a very sensible one. Famous psychological experiments have recently been shown to be without foundation.

The most glaring example is the 1971 Stanford Prison experiment, one of the most influential psychology studies of all time.

Students were randomly assigned to be either guards or prisoners within a mock prison. The objective was to observe the interaction within and between the two groups.

The results proved shocking, with the abuse handed out to the prisoners by the guards so brutal that the study had to be terminated after just six days.

There were already doubts about the results. Other psychologists had found them difficult to replicate. But it has emerged this month, from analysis of previously unpublished records and interviews with some of the participants, that the results were simply faked.

Another famous study, the so-called marshmallow test, has also been debunked.

In the original research in the 1960s and 1970s, children aged between three and five were given a marshmallow which they could eat immediately, but were told that if they resisted eating it for 10 minutes, they would be rewarded with two marshmallows. More than a decade later, when the children were in their late teens, it was claimed that those who had resisted exhibited traits of intelligence and behaviour far in advance of those who had caved in to temptation.

But by the straightforward expedient of taking into account the economic and family backgrounds of the children, almost all the differences claimed for the ability to delay gratification disappear.

Ironically, it is economists themselves who have shown that western societies as a whole, not just particular groups, have great difficulty in deferring gratification.

The Harvard economist David Laibson established the idea back in 1997 in a famous paper with the rather gnomic title of “Golden Eggs and Hyperbolic Discounting”. The obscure phrase “hyperbolic discounting” means that people assign a great deal of weight to costs and benefits incurred in the present and very near future, and very little weight to anything beyond that.
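Laibson’s model is usually written in “quasi-hyperbolic” form: a standard discount factor, plus an extra penalty applied to everything beyond the present. A minimal sketch with illustrative parameter values:

```python
# Quasi-hyperbolic ("beta-delta") discounting versus the standard
# exponential kind. Parameter values are illustrative.

delta = 0.97   # standard per-period discount factor
beta = 0.6     # extra penalty on all future periods (beta < 1 = present bias)

def exponential_weight(t):
    return delta ** t

def quasi_hyperbolic_weight(t):
    return 1.0 if t == 0 else beta * delta ** t

for t in [0, 1, 5, 20]:
    print(t, round(exponential_weight(t), 3), round(quasi_hyperbolic_weight(t), 3))

# The quasi-hyperbolic agent values next period at only ~0.58 of the present,
# against the exponential agent's 0.97 - a sharp drop that captures the
# difficulty of deferring gratification.
```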

Economics may have its faults, but much of psychology seems to be built on sand. Perhaps it is psychology that can learn from economics.

As published in City AM Wednesday 20th June 2018

Image: Experimental Psychology by Internet Archive Book Images is licensed under CC0 1.0

Comparison sites are forcing businesses and economists to rethink price theories


The Competition and Markets Authority (CMA) published a report about price comparison sites at the end of last month. They seem simple enough, but these straightforward sites raise interesting issues for economics.

Overall, the CMA was pretty positive about the DCTs – digital comparison tools, to give them their Sunday best name. The conclusion was that “they make it easier for people to shop around, and improve competition – which is a spur to lower prices, higher quality, innovation and efficiency”.

DCTs offer two main benefits. First, they save time and effort for people by making searches and comparisons easier. Second, they make suppliers compete harder to provide lower prices and better choices to consumers. In short, they bring the real world closer to the perfectly informed consumers and perfectly competing firms in the ideal world of economic theory.

But even in this market, there is an issue which goes right to the heart of much of the information which can be accessed through the internet: how do we know whether we can trust it?

The main problem is that the comparison sites typically provide their services free of charge to consumers. They make money by charging a commission to suppliers.

This creates an incentive for a DCT to promote those suppliers which pay it the most commission. An effective way of doing this on the internet is by the order in which the information on the various suppliers is presented.

It is not that DCTs deliberately provide misleading information, or even that a site leaves off a major supplier which does not pay the particular website enough. But they can put those that pay the most at the top of the list.
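The point is easiest to see with a toy example: the same set of offers, ranked once by price and once by the commission each supplier pays the site. The numbers are invented purely for illustration.

```python
# Illustrative only: the same offers ranked two different ways.

offers = [
    {"supplier": "A", "price": 300, "commission": 40},
    {"supplier": "B", "price": 280, "commission": 25},
    {"supplier": "C", "price": 320, "commission": 60},
]

by_price = sorted(offers, key=lambda o: o["price"])             # best deal first
by_commission = sorted(offers, key=lambda o: -o["commission"])  # best payer first

print([o["supplier"] for o in by_price])       # ['B', 'A', 'C']
print([o["supplier"] for o in by_commission])  # ['C', 'A', 'B']
```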

Notoriously, with Google searches, people rarely click through to anything which is not in the top three results.

Allegedly, 60 per cent of the time, only the site which comes at the very top of the list is accessed.

Obviously on a DCT, consumers are likely to look at more. That is the whole point of using the site. But although the CMA does not provide hard data on this, it expresses a clear concern about the ways in which the sites rank the suppliers.

How the DCTs themselves set their prices raises a more general question for economics. The basic rule, described in the textbooks since time immemorial, is to set price equal to marginal cost – in other words, at the point where the revenue from one extra sale equals the cost of producing that extra item.

The standard assumption made in teaching generations of students their introductory course to economics is that as the level of output increases, marginal cost first of all falls but eventually rises.

But on the internet, once the site is set up, the cost of dealing with an extra user is effectively zero. The time-hallowed formula of economics is a recipe for bankruptcy.
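A simple numerical sketch shows why. With a linear demand curve, zero marginal cost, and a fixed cost of running the site, the textbook rule of pricing at marginal cost means giving the product away and making a guaranteed loss. The numbers are illustrative:

```python
# Why "price = marginal cost" is a recipe for bankruptcy when MC is zero.
# Linear demand: quantity = a - b * price. Figures are illustrative.

a, b = 1000.0, 10.0    # demand parameters
fixed_cost = 5000.0    # cost of building and running the site
marginal_cost = 0.0    # serving one extra user costs (almost) nothing

def profit(price):
    quantity = max(a - b * price, 0.0)
    return (price - marginal_cost) * quantity - fixed_cost

# Textbook rule: price at marginal cost -> zero revenue, guaranteed loss
print(profit(marginal_cost))  # -5000.0

# The firm must instead look for the revenue-maximising price, here a/(2b)
best_price = a / (2 * b)
print(best_price, profit(best_price))  # 50.0 20000.0
```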

The internet is forcing companies to innovate in their pricing strategies. And it is forcing economists to rethink some of their theories.

As published in City AM Wednesday 18th October 2017

Image: Man and Laptop by Pexels is licensed under CC0 1.0

Behavioural economics has had its Nobel moment, but take it with a pinch of salt

Behavioural economics has received the ultimate accolade.

Richard Thaler of the University of Chicago’s Booth School of Business has been awarded the Nobel Prize in economics for his work in this area.

Economics over the past 20 to 30 years has become far more empirical. Leading academic journals do still carry purely theoretical articles, but far fewer than they once did.

This shift towards the empirical takes two forms. Major advances have taken place in the heavy-duty statistical theory of analysing large-scale databases containing information on individuals and their decisions. This was recognised when James Heckman and Daniel McFadden were awarded the Nobel Prize in 2000.

Behavioural economics is much less technical. In any given situation, the decision which a purely rational person would take is identified. We then look at how people actually behave, and see if there are any deviations from the rational way of doing things.

Perhaps the main finding of behavioural economics is so-called prospect theory, first set out nearly 40 years ago by Daniel Kahneman and Amos Tversky. In essence, prospect theory says that people dislike making losses more than they like making gains of the same amount.
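In Kahneman and Tversky’s formulation, this loss aversion is captured by a value function that is steeper for losses than for gains. A minimal sketch, using the parameter estimates commonly cited from their later work (curvature of about 0.88, loss aversion of about 2.25):

```python
# Prospect theory value function, in the Kahneman-Tversky form.
# Parameter values are the commonly cited estimates from their 1992 paper.

ALPHA = 0.88   # curvature for gains
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion: losses loom about 2.25x larger than gains

def value(x):
    """Subjective value of a gain or loss x, relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

print(value(100))   # ~57.5: the felt benefit of gaining 100
print(value(-100))  # ~-129.4: the felt pain of losing 100, over twice as large
```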

Another important discovery is that, when weighing up how to value future costs and benefits, people often place much more weight on the present and very immediate future than standard economic theory assumes. Last month I wrote about how this helps to explain the reluctance of electorates to deal with climate change.

These two results are backed by large amounts of evidence obtained in a range of different contexts. So now they are being integrated into economic theory.

But many economists are altogether less sure about much of the rest of behavioural economics. One of the issues is that it often gives the impression of being rather ad hoc. No reason is given as to why people in one situation appear to behave rationally, but in another they do not. Very few guidelines have emerged as to when we can expect to see deviations from rationality.

Another issue is that many economists are prepared to accept that non-rational behaviour might be observed at a point in time. But in a reasonably stable situation, people will learn over time to be rational.

Behavioural economics is not just about advancing knowledge on the workings of the economy. Policy-makers have become interested.

Cass Sunstein, Thaler’s colleague, served in the Obama administration as head of the Office of Information and Regulatory Affairs. David Cameron set up the so-called “Nudge Unit” in his government based on Thaler’s ideas. Thaler claimed 10 years ago that a “nudge” could lead to “better investments for everyone, more savings for retirement, less obesity, more charitable giving, a cleaner planet, and an improved educational system”. In his 2015 book Misbehaving, he backed away from the extravagance of these claims.

Still, whatever the doubts and qualifications, behavioural economics has made a big impact. An economist can no longer be said to have a good training if he or she is not familiar with its main themes.

As published in City AM Wednesday 11th October 2017

Image: Richard Thaler by Chatham House is licensed under CC BY 2.0