
Was Michael Gove right? Have we had enough of experts?

Experts are finding it harder to be heard. But is that because of how they communicate? And how solid is their much-vaunted evidence base anyway?

Using evidence to assess the outcomes of policies is a vital part of good governance. Whether it is examining how a Budget will affect those on low incomes, or how well fishing quotas are managing stocks, no one but the most bumptious ideologue would deny it. The plastering of demonstrably dodgy statistics on the side of the Brexit battle bus last year stoked indignation on the part of many who think of themselves as rational and well-informed. The arrival of Donald Trump, an American president who feels no compunction about disseminating falsehood, has further darkened the mood among the liberal intelligentsia. There is a strong sense that the forces of reason must now rise up and see off the purveyors of the “post-truth” world.

We must, however, also grapple with one other contemporary reality. Underlying the great turmoil of politics at the moment is precisely the view that “the experts” are less trustworthy and objective than they purport to be. Rather, their considered opinions are seen as a self-reinforcing apparatus for putting themselves beyond challenge—to advance their holders’ status, their careers or, most damaging of all, their political views over those of the less-educated classes. The great popular suspicion is that an elite deploys its long years of schooling and “the evidence base” to make itself sound more knowledgeable as it rationalises the policies it was going to prefer all along.

Is that a fair charge? Well, that is an empirical question, and definitive evidence for answering it is in short supply. What we can usefully do, however, is interrogate where the “evidence base” comes from, and how solid it is.


“Agreeing to referee academic papers
yields neither monetary reward nor esteem,
but it subjects you to a range of human temptations”


Back in 2010 we wrote a piece arguing that an over-emphasis on empirical evidence in political rhetoric was alienating the public. The increasing reliance on the expert stamp of authority was eroding a sense of shared values between governors and the governed. Unless you were familiar with the latest nuance in academic evidence, we warned, you were automatically unqualified to have a valid opinion.

We thus see the current defenestration of experts as a reaction to long-term trends in public life. If it is true, as Michael Gove said during the European Union referendum debate, that people “have had enough of experts,” it is because empiricism locks non-experts out of discussions that impact on, but may not capture, their day-to-day experience. Last year, many members of the public formed an impression, whether fairly or not, of experts attempting to settle an important and emotive matter over their heads. A fault line between “the people” and those who think they know what’s good for them, which has been there for some time, became apparent. The June election was another reminder of this, as certain policies that many experts felt didn’t stack up—universal pensioner perks, free university education for students and costly nationalisations—turned out to be rather popular.

As paid-up members of the quantitative-expert class we share some of the current foreboding that a dystopian future awaits, where objective truth is not respected. There are many good examples of evidence influencing policy. But there are bad examples too, and if deference towards expert opinion goes too far, democracy ceases to operate as it should. Experts may see it as their role to uphold truth, facts and evidence, but they can only do so if they maintain public trust. That implies many things—better communication, for example. But before anything else it implies experts adopt a reflective approach to their own work, and open it up to outside scrutiny too.

There is a particular onus on social scientists here, because there is often more subjective judgment and interpretation in their fields than there is in measuring physical reality, leaving more scope for biases to entrench established views. Many social scientists are meticulous; but there are others who need to get their own house in order where “the evidence” is concerned. If it is going to be used to close down arguments it needs to be rock-solid, but how often is that the case?

Scarcely a day goes by without the press featuring some research, polished by a university PR team, purporting not only to establish that sausages cause cancer or that the people of Basingstoke are happier than the people of Burnley, but also that Something Must Be Done About It.

Academic papers from the social sciences and health are now an important foundation of what has come to be called the “evidence base.” Who could be against evidence? But this is a rather telling phrase. The “base,” when you stop and think about it, is logically superfluous; its function is purely rhetorical—suggesting that the evidence in question (unlike any other) rests on something that shores it up. But, as we shall see, there are often question marks around its solidity, especially in the social sciences.

The magic concept invoked to define it—and to separate the priesthood from the laity—has become that of “peer review.” Peer review is the process by which submissions to academic journals are scrutinised by the academic peers of the authors—the “referees.” Only papers deemed suitable by referees will be published.

This scrutiny may conjure up images of scholars carefully examining the article line by line, checking every piece of analysis and verifying its claims. Very occasionally, this Platonic ideal is realised. When, for example, Andrew Wiles claimed to have proved Fermat’s Last Theorem, his manuscript was subjected to the most thorough investigation imaginable by the world’s leading experts in the relevant areas of maths. An error was indeed discovered, one which Wiles was happily able to fix after months of wrestling with the problem. As a result of the peer review process in this case, we can be entirely confident that Wiles proved Fermat’s Last Theorem.

In almost all other cases, at least within the social sciences, the reality of peer review is rather different. We should think instead of a harassed academic, pressured by the need to do his or her own research and by the demands of students and university administrators alike, all the while being pestered by journal editors to submit the review.

Refereeing is both unpaid and anonymous. The referee receives neither monetary reward nor the esteem that comes with getting one’s name in print. The task is seen as a tedious chore, and procrastination is widespread. In the social sciences, there are frequently delays of a year, and occasionally even two, between submitting a manuscript and receiving the referee reports.

One might ask why academics agree to referee papers at all. In part, it is convention: it is simply part of the everyday life of being an academic. But once a year many journals will publish a list of the names of their referees. This incremental addition to your CV just might, perhaps eventually, be part of the package that lands you a promotion or a job at a better university.

But serving as a referee under these conditions subjects you to a range of human temptations. Does the paper support or undermine one’s own work, for example, or does it appear to be written by a rival? Does it cite enough of the papers of the reviewer and his or her friends? The number of citations of your own work by other academics is, after all, an important metric by which you are judged. Here, at long last, is the chance to slap down, under the cloak of anonymity, the smartarse who slapped you down at that conference five years ago.

Then there is the question of who chooses the referee. Enter the editorial board, which is made up—once again—of academics typically paid little or nothing. Again, the human factor creeps in. Years ago, one of the present authors submitted a paper to a leading American economics journal, a critique of a published article that had gained a certain kudos. One of the authors of the criticised piece was an editor at that journal—and, as was discovered by chance a few years later, he gave it to his co-author to referee. Needless to say, the negative article wasn’t accepted.

Once a paper is published, the chances of it being subjected to further scrutiny are remote. A tiny handful of articles become famous, and are downloaded thousands of times. Many receive no attention, and most will be read by very few scholars. Yet the mere fact that a paper has gone through peer review confers on it an authority in debate, which the lay person cannot challenge. So, all too often, there is no post-publication challenge within the academy, and no licence for challenge from outside. Locked out by the experts, some laypeople may start to feel like they have had enough.

So how might we improve peer review, and build “the evidence” on a firmer foundation? Economics has rightly been subjected to many criticisms, especially since the financial crisis. But the discipline has one extremely powerful insight, perhaps the only general law in the whole of the social sciences: people react to incentives. They may not always do so with the complete rationality described in economics textbooks. But thinking through the rewards on offer in any given situation helps to understand why people behave as they do.

Ideally, the incentives around research should be structured so as to maximise constructive scrutiny of every claim that is made. Instead, the rising pressures on academics to publish have created a set of incentives that exacerbates the need to negotiate the peer review process and appear in academic journals. The rising demand to publish has been met by a large increase in the supply of academic journals. One recent estimate is that there were 35,000 peer-reviewed journals at the end of 2014, many of them of decidedly doubtful quality. Why? Because incentives are everywhere.

A paper in the 23rd March edition of Nature by a group of Polish academics mercilessly exposes the problem. The title neatly captures the content of the article: “Predatory journals recruit fake editor.” The authors begin in an uncompromising manner: “Thousands of academic journals do not aspire to quality. They exist primarily to extract fees from authors.” They go on: “These predatory journals follow lax or non-existent peer-review procedures… researchers eager to publish (lest they perish) may submit papers without verifying the journal’s reputability.”

They adopted the brilliant strategy of creating a profile for a fictitious academic, Anna O. Szust, and applied on her behalf to be an editor of 360 journals. “Oszust” is the Polish word for “a fraud.” Her profile was “dismally inadequate for a role as editor,” yet 48 of the journals offered to make her one, often conditional on her recruiting paid submissions to the journals.

This new study follows on from a 2013 piece by the journalist John Bohannon in which his purposefully flawed article was accepted for publication by 157 of the 304 open-access journals to which it was submitted, contingent on payment of author fees. That was a warning sign, and things have got worse since. The Nature authors state that “the number of predatory journals has increased at an alarming rate. By 2015, more than half a million papers had been published in them.”


“Once a paper is published,
the chances of it being subjected
to further scrutiny are remote”


None of this means that academic journals have moved into a post-truth world. There are clearly journals where high standards apply. The Polish academics approached 120 journals on the respected Journal Citation Reports directory as part of the 360 in their experiment. None of them accepted “Mrs Fraud” as an editor. And one can imagine specific reforms to get rid of those sorts of journals that are profiting through the equivalent of vanity publishing.

Even in serious journals, however, and even where referees do try their best, the scrutiny of just one or two people provides scant security. The mere fact that a paper has been peer reviewed is no guarantee of its quality or, indeed, its reliability.

The problem is nicely illustrated by a paper that appeared in Science at the end of 2015, in which a team of no fewer than 270 authors and co-authors attempted to replicate the results of 100 experiments that had been published in leading psychology journals. The involvement of the original authors should have made it easier to reproduce the results. Only 36 per cent of the attempted replications led to results that were sizeable enough that one could be confident they had not arisen by chance. In other words, almost two-thirds of the attempts to replicate published, peer-reviewed results of papers in the top psychology journals failed completely.

The veneration of peer review has simply gone too far. The connected concept of “evidence based” has permeated policy discourse, and is sometimes used to lock out non-experts. But in psychology at least, as we have seen, there are papers whose findings could not be replicated that could have been flourished as part of an evidence base in support of one policy stance or another. The evidence is not “based” on any firm foundations; it rests on sand.

So conventional review is flawed; but fortunately, there are alternatives—some of them already in use. One other test of academic papers is their ability to make successful predictions. This is not infallible. Someone may strike lucky and carry out the scientific equivalent of successfully calling heads 10 times in a row. But consider, say, coronary heart disease. A tiny handful of the thousands of papers on the topic published each year may eventually lead to the development of drugs that successfully pass all the stringent tests set out by the authorities and are licensed for use. To get that far, their insights about what makes the condition better or worse have to be borne out in clinical testing on the case histories of real patients. Such drugs do real good, and we can be confident the research behind them has some validity.


“Experts need to show some humility:
they can’t diagnose and prescribe
for all of society’s ills”


Another recent alternative is to open up the peer review process, so that it actively invites challenge, by letting scientific merit be determined by the esteem of the peer group as a whole, not just by two or three selected referees. One example is the physics e-print archive arXiv.org (pronounced “archive”). Authors can post their papers here prior to publication in a journal if they like, though some feel no need. The site has grown to embrace not just physics but maths and computer science, and, in a small way, quantitative finance.

To post a paper, an author must merely be endorsed by someone who has already published on arXiv. Moderators refuse papers which are obviously not science at all. But scientific importance emerges from the process of downloading and citation. So peer review really is carried out collectively by the relevant scientific community. The more a paper is downloaded and cited, the lower the chances of an error going undetected. The context is different, of course, but there is an echo here of the logic with which Google has conquered the world.

It is, however, only in the harder sciences that there has been a serious embrace of something approaching the marketplace—of the consumers, other academics in the field, deciding on the worth of a paper. In most disciplines the only model remains a monopoly supplier—the prestigious journal and its editorial board.

But it is in the social sciences that the suppression of challenge can have most political effect. A paper may be brandished purporting to show that all family structures are of equal merit, or that mass immigration does not reduce real wages, perhaps conflicting with religious convictions, personal experience or vernacular conceptions of how society functions. Whatever one’s views, the impression created that expert findings on such contentious political issues are immutable fact is bound to breed cynicism and “expert fatigue.”

An over-emphasis on expert opinion has already had insidious effects on democracy. One of these is a view among some in the intelligentsia, as described in Tom Clark’s Prospect piece (“Voting Out,” February), that the fundamental purpose of democracy is optimal, rational decision-making; if the electorate cannot manage this they—and by implication the democratic system—are at fault.

There are two obvious problems with this. Firstly, in order to rationally optimise society, someone would need to decide what the objectives are. And that is clearly a matter of political opinion. Secondly, it is flagrant mission creep. Democracy is, first and foremost, a mechanism for managing disagreement in society without bloodshed, chaos or repression. To boot, it allows the people to peacefully throw out those in power if they’re doing a bad job.

Expert analysis is of limited use in these tasks. Its recommendations cannot capture what the polymath Michael Polanyi called “tacit knowledge”—knowledge that is based on experience, which shapes people’s habits and beliefs without being codified. This doesn’t get a look-in.

Indeed, in modern social science it is very often only that which gets counted that is deemed to count. And who decides on that? It is, overwhelmingly, the “experts” who get to write the surveys that feed so much social science its raw material. If, for example, they are more interested in what someone’s ethnicity does to their views than they are in whether the respondent lives in the countryside rather than the city, then that is what scarce slots in the questionnaire will be used to find out. Through such means, priors and prejudices about what merits counting can colour the data, even before it has been crunched.

Democracy is a very crude system for giving decision-makers feedback about the quality of our lives. But this most basic process of consultation can never be replaced by data. For quantitative metrics are often very “lossy”—some things are not counted, and thus cease to count. Where experts imagine they can settle a fundamentally political argument through such empirical evidence, the consequences can fast become absurd.

In the UK, the Office for National Statistics has, encouraged by David Cameron in his early tieless phase, measured “well-being” and “happiness” to guide public policy. This sort of data conflates a very great number of causal factors, which dilutes its value in guiding policy decisions. And yet one commentator even suggested that, because well-being data showed high levels of contentment in Britain, the vote for Brexit need not have happened.

More generally, the result of putting empirical analysis on a pedestal can be intolerance towards others who start with different views. That was in evidence in some of last year’s sneering at “Leave” voters as dupes who couldn’t understand the arguments. Furthermore, if evidence is everything, but many don’t have the training to process it properly, then unscrupulous characters will spot a chance to make up the odd little, self-serving “fact” of their own—after all, only a minority will know the difference. The rise of empiricism in a world where we are bombarded with information might thus have actually contributed to the post-truth phenomenon.

Why did experts become so prominent in the decades before the crash? The narrowing of disagreement in politics after the end of the Cold War was surely important, as was the associated rise in managerialism. For example, central banks, which make hugely political decisions that shape the relative fortunes of borrowers and savers, were suddenly held to be above politics, and given independence. Huge faith was vested in their predictions, and those of associated technocrats at institutions like the International Monetary Fund, until the crash showed these could not always be depended on.

In the more austere times that have followed, the fundamental conflicts over resources and priorities—between natives and foreigners, between social classes—that probably never truly went away, are now back with a vengeance. The experts certainly have misgivings about all forms of populism and especially about Jeremy Corbyn’s Labour Party, with its cavalier assumptions about how much revenue it could easily raise.

But the backlash against “experts” is, nonetheless, still principally associated with the right. The more educated, liberal-leaning section of society needs to understand why this is. It is not because, as is commonly assumed, the right is simply the political wing of the dark side.

The right’s great insight is that the left can create a political apparatus with good intentions but the wrong incentives, and that this apparatus can become impervious to challenge. It argues that political choice is based on economic self-interest, and that this can apply, perhaps unconsciously, even to people apparently motivated by the public interest. These suspicions, articulated as “public choice theory” by the Nobel Prize winner James Buchanan, have most often been applied to bureaucracies with noble theoretical aims that go awry in practice, but the same analysis can be extended to universities and research institutes too—or indeed “the evidence base.” The Buchanan analysis can easily morph into an intransigent view that pursuing practically any collective goal will lead to empire-building bureaucracies, which also fall prey to “capture” by self-serving lobbyists. Taken to extremes, it promotes a profoundly destructive, atomistic worldview that leaves society paralysed in the face of the most serious moral questions. One only has to look across the Atlantic at the way the American right is responding to climate change and healthcare to see that.

Those who reasonably resist this worldview can counter it in two ways: either through bitter “with us or against us” polarisation, or by having the foresight to avoid the charges that public choice theory would lay at the academy’s door in the first place. That means at least examining the possibility that policies that come blessed with an expert stamp are serving the interests of those who put them forward, rather than dismissing it out of hand.

Truth and evidence must obviously be upheld. But there is a real danger in expert elites studying the electorate at arm’s length and seeking a kind of proxy influence without having to worry about gaining political support. We must not denigrate evidence-based thinking, as thuggish regimes do, but we must subject it to more “sense-checking,” and in communicating it we must pause and give thought to what the broader public will make of it. The alternative is a dialogue of the deaf between the know-all minority and a general populace which some may caricature as know-nothings. In such a stand-off, real evidence soon becomes devoid of all currency.

To avert it, the experts need to show some humility: we can’t diagnose and prescribe for all of society’s ills. We also need to recognise that to be persuasive we must actually persuade—and not simply hector. The great mass of voters are not, after all, under any obligation to accept expert authority. We need to reflect critically on the problems in academia that can block the testing of ideas on the inside, and dismiss all challenge from outside our walls. And we need to show self-awareness: deep intimacy with a subject can, on occasion, lapse into a tunnel vision that blanks out culturally-rooted perceptions and the lived experience of voters. Those things can’t be ignored. They are, after all, the lifeblood and raison d’être of politics, and can only be gauged by asking people, unschooled as well as schooled, for their opinions, and ultimately relying on their decisions.

As published in the August 2017 edition of Prospect Magazine
by Helen Jackson and Paul Ormerod

Image: Michael Gove by Policy Exchange is licensed under CC BY 2.0

Forward guidance is just another delusion foisted on us by mainstream macro

The governor of the Bank of England, Mark Carney, was on good form last week when he appeared before the Treasury Committee of the House of Commons.

Asked what “forward guidance” meant, he answered smoothly: “The thing about forward guidance is that it is guidance that is forward. Which is not to say it is meant to be in any way accurate. Indeed, it would be surprising if it were. The most important thing about forward guidance is that the underlying economic determinants should be correct, not that it should be helpful.” Cue collective bafflement of the assembled MPs!

But the statement actually tells us a great deal about how mainstream macroeconomists believe the economy operates.

“Forward guidance” has been the key element in policy-making by the Bank since Carney himself introduced it in the summer of 2013. It is meant to give guidance about the economic circumstances in which the Monetary Policy Committee (MPC) will start to raise interest rates.

The first attempt was certainly not in any way accurate. The governor stated that the MPC would not consider raising interest rates until unemployment fell to 7 per cent, which he predicted would take about three years. It took less than six months. By January 2014, the rate of unemployment had fallen to 6.9 per cent.

This just seems to have been a piece of poor analysis by the Bank. But it does not detract from the more fundamental reason economists think that forward guidance will not usually turn out to be accurate.

The forward guidance is deliberately based on the assumption that behaviour will not change. Yet the mere fact that the central bank makes a pronouncement about the future might induce people to alter their behaviour. And if behaviour changes, the forward guidance might very well prove to be inaccurate.

Forward guidance is actually a sensible addition to the Bank’s armoury of policy levers. Properly managed, it might enable the Bank to nudge behaviour in directions which it believes will give a better outcome than would otherwise be the case.

The final part of Carney’s statement appears the most gnomic: “The most important thing about forward guidance is that the underlying economic determinants should be correct, not that it should be helpful”.

The governor meant that forward guidance should be given on the basis of a model of the economy which is correct.

In each of the various macroeconomic models which exist, the assumption is made that consumers and firms form expectations about the future as if that particular model, and no one else’s, were correct. Yet despite many years of intensive research, macroeconomists still do not agree on what the correct model of how the economy works actually is.

There is a challenging academic literature on the theory of how people go about learning the correct model of the economy. But in practice economists are unable to apply it to themselves. We might reasonably conclude that it is the theory which is wrong. Forward guidance is just the latest technocratic delusion foisted on us by mainstream macroeconomics.

As published in City AM on Wednesday 23rd November

Image: Mark Carney by The Financial Stability Board is licensed under CC BY 2.0

Dump opinion polls for social media to understand people’s real preferences

So the pollsters got it wrong again.  After the general election last year and then Brexit, it is perhaps not surprising.  What is surprising is just how wrong they were.  The real problem is the enormous confidence with which they pronounced that Clinton would win.

The Princeton Election Consortium was probably top of the class, stating that Clinton had a 98 to 99 per cent chance of winning.  Even the top Bayesian statistician, Nate Silver, who shot to fame by calling all 50 states correctly in 2012, gave Hillary a 71.4 per cent probability of victory.

Economists have long been suspicious of opinions elicited by surveys. A fundamental concept in economic theory is that of “revealed preference”. The idea goes back much further than Adam Smith, the 18th century founder of modern economics. In the Bible, we find the phrase “by their deeds, ye shall know them”. In other words, it is not what people say that matters, it is what they do. If someone says repeatedly that he prefers Pepsi to Coke, but never buys Pepsi and always buys Coke, we can reasonably infer that, despite his words, he does in fact prefer Coke. His actions reveal his preference.

Readers above a certain age will recall the 1980s. Then, pollster after pollster reported that public opinion was firmly in favour of both more public spending, and higher taxes to pay for it.  Yet in election after election, voters just as firmly returned Mrs Thatcher and the Conservatives to power. They revealed a preference for lower spending and lower taxes.

A great deal of environmental policy is guided by hypothetical questions in surveys of what people would be willing to pay to, say, preserve a species of newt or prevent an oil spillage. This approach even has its own name: “contingent valuation”. Peter Diamond is an MIT economist who has won the Nobel Prize. Jerry Hausman, also of MIT, might very well get one. Referring to a paper they co-authored in the early 1990s, Hausman wrote in 2012: “At the time Peter’s view was that contingent valuation was hopeless. I was merely dubious. But 20 years later, after millions of dollars of government-funded research, I have concluded that Peter was correct”.

A fundamental problem is that people overstate how much they would be willing to pay in such surveys, compared to how much they will pay when they really have to – just like the British electorate in the 1980s.

A great deal of expertise has been built up over the years in how to put together carefully constructed surveys to find out what voters and consumers think.  But their useful life is at an end.  Instead, social media conversations have the potential to discover what people really do prefer.  For all their chaotic and often incoherent nature, these unstructured conversations can reveal what people really are thinking and doing.  Economists, with their concept of revealed preference, need to make common cause with computer scientists.

Paul Ormerod

As published in City AM on Wednesday 15th November

Image: Trump with supporters by Gage Skidmore licensed under CC BY 2.0

What climate warrior Twisleton-Wykeham-Fiennes teaches us about punishment

Natalie Twisleton-Wykeham-Fiennes: don’t you just love her? One of the Black Lives Matter campaigners, our Nat caused chaos by occupying the runway at London City Airport, on the grounds that climate change is racist.

She and eight others, including a former member of the Oxford University Croquet Club, were sentenced by the courts last week. For many, their punishments were derisory: token fines and suspended prison sentences.

Would harsher treatment deter future protests like this and the one which disrupted Heathrow last month? Anecdotal evidence suggests it would.

In the town where I grew up, nestling in the foothills of the Pennines, the police would often drive miscreant youths late at night to remote hamlets up on the moors and make them walk home. It helped if it was raining, which it usually was. The more recalcitrant were likely to discover that the damp made the steps of the local police station unusually slippery. Compared to today, crime was low.

But this is mere casual empiricism, and there is a vast academic literature on whether or not harsher punishments deter crime. As a broad approximation, criminologists themselves tend to be sceptical about the impact of punishment as a deterrent.

A few years ago, I was at a seminar on the topic in which a criminology professor at Middlesex University asserted, without a trace of irony, that crime was caused by capitalism. In contrast, economists, who believe that agents respond to incentives, often claim that deterrence works.

Economists base their conclusions not just on theory, but on statistical analysis of detailed databases. Even so, the results might not be straightforward to interpret. For example, if prison sentences are increased and we see a fall in crime, is this because potential criminals are deterred, or because prolific criminals are in jail and can’t commit crimes?

Francesco Drago and colleagues published an influential paper in the Journal of Political Economy in 2009. They exploited the natural experiment provided by the Collective Clemency Bill passed by the Italian Parliament in July 2006. This provided for an immediate reduction of three years in the sentences of existing inmates, and as a result 22,000 of them were released. But if they re-offended, they had to serve all the suspended time, plus whatever extra they were given.

The study showed decisively that an additional month in expected sentence reduced the propensity to re-offend by 1.24 per cent. Steven Levitt, in his bestseller Freakonomics, described similar results obtained by smart analysis of American data.
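To get a feel for what an effect of that size means, the Drago result can be plugged into a back-of-the-envelope calculation. The 30 per cent baseline re-offending rate below is an invented figure for illustration only, and the 1.24 per cent is treated, as an assumption, as a proportional reduction per extra month of expected sentence:

```python
# Back-of-the-envelope sketch of the Drago et al. (2009) effect size.
# ASSUMPTIONS: the 30% baseline re-offending rate is invented for
# illustration, and the 1.24% effect is applied as a proportional
# reduction per additional month of expected sentence.
EFFECT_PER_MONTH = 0.0124

def reoffending_rate(baseline: float, extra_months: int) -> float:
    """Re-offending rate after `extra_months` are added to the expected sentence."""
    return baseline * (1 - EFFECT_PER_MONTH) ** extra_months

baseline = 0.30
for months in (0, 6, 12):
    print(months, round(reoffending_rate(baseline, months), 3))
```

On these assumed numbers, an extra year of expected sentence takes the re-offending rate from 30 per cent down to around 26 per cent: a real but modest deterrent effect, which is broadly the flavour of the empirical literature.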

Perhaps the way forward is to experiment with another fundamental concept in economics, that of externalities. Twisleton-Wykeham-Fiennes believes that flying, while convenient for the individual, imposes costs on others through its negative impact on the climate. Other people bear these costs, which are external to the benefits to the person flying.
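The externality logic can be made concrete with a deliberately hypothetical calculation. Every figure below is invented, since the column gives none; the point is only to show how a fine proportional to external cost would be assembled:

```python
# Hypothetical Pigouvian fine for the airport protest: the fine equals the
# external cost imposed on others, split across the protesters.
# ALL figures here are invented for illustration.
passengers_delayed = 2_000        # assumed number of disrupted passengers
cost_per_passenger = 75.0         # assumed cost of delay per passenger, in pounds
protesters = 9                    # "she and eight others"

external_cost = passengers_delayed * cost_per_passenger
fine_per_protester = external_cost / protesters
print(external_cost, round(fine_per_protester, 2))
```

Even on these modest assumed numbers, the fine per protester runs into five figures rather than the token sums actually imposed.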

The airport protests inconvenienced many others. So the fines should be in proportion to the external costs created by the crime. The assets of the well-heeled protestors would vanish in a trice. Anyone for this natural experiment? Future Twisleton-Wykeham-Fienneses might prefer croquet instead.

Paul Ormerod

As published in City AM on Wednesday 21st September 2016

Image: Croquet by Aren’tYouAlex-Spencer? licensed under CC BY 2.0

Sorry, Prime Minister: Legislation won’t end excess in the boardroom

A key platform of our new Prime Minister is to curb what she perceives to be boardroom excesses.  “It is not anti-business to suggest that big business needs to change”, she said.

One of her proposals is to allow employee and worker representatives to sit on company boards, a suggestion which has not gone down well in the corporate world.  The debacle at the Co-op, with its legion of elected directors, has been cited many times as an argument against Mrs May’s idea.  First Group already has employees on the boards not just of its component companies, but of the group plc itself.  However, given that one of the companies is Great Western trains, one of the most notoriously unreliable of all the rail operators, this has not proved to be a panacea.

May is keen on making shareholder votes on executive remuneration legally binding.  True, in the spring, a clear majority, some 60 per cent, rejected BP boss Bob Dudley’s £14 million pay.  But only 33 per cent of shareholders failed to back Martin Sorrell’s package at the WPP AGM last month, even though both Standard Life and Hermes, two of Britain’s most influential fund managers, voted against the pay report.

Board members receive emoluments.  Shop floor workers get pay.  Yet however we describe them, the gap between the two has opened up in dramatic fashion in recent decades, with no obvious economic justification.  In the United States, for example, the average compensation of CEOs in the top 350 firms is some $15 million a year.  This enormous sum is some 300 times the amount the companies pay to the typical worker.  In the mid-1970s, the ratio was not 300:1 but only 30:1.  Even in the mid-1990s it was around 100:1.  This latter figure would still hand the average CEO some $5 million today, not a bad sum to have.  The American economy has done well, but it also did well in the decades immediately after the Second World War, when remuneration disparities were much narrower.
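The ratio arithmetic in those figures can be checked directly; the typical worker's pay is simply implied by the $15 million average and the 300:1 ratio:

```python
# Checking the pay-ratio arithmetic quoted in the text.
ceo_pay = 15_000_000                       # average CEO pay, top 350 US firms
worker_pay = ceo_pay / 300                 # implied typical worker pay at 300:1
pay_at_mid1990s_ratio = worker_pay * 100   # what a 100:1 ratio would imply today

print(worker_pay)               # implied typical worker pay
print(pay_at_mid1990s_ratio)    # roughly the $5 million in the text
```

The numbers hang together: a 300:1 ratio on $15 million implies worker pay of $50,000, and the mid-1990s ratio of 100:1 applied to that same worker pay gives the $5 million quoted above.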

Legislation is the brutal option, but there is no guarantee it would work.  The fundamental challenge is to alter the set of values which has become dominant.  The social norm at the top of industry has become one in which it is perfectly acceptable to be paid very large amounts, virtually regardless of performance.  This behaviour has influenced remuneration in non-market sectors of the economy, such as the pay of top management in universities, and in particular that of Vice-Chancellors.  Their average salary in 2014/15 was £274,000.  Incredibly, the Vice-Chancellor of Falmouth University – it does in fact exist – received £285,000.

There is a simple policy which could radically alter attitudes and behaviour.  The Prime Minister should let it be known that no one who behaves in an ostentatious way on this matter will be either knighted or elevated to the Lords.  Indeed, such honours might be reserved for captains of industry who volunteer for substantial pay reductions.  Miracles might then happen, and social norms be drastically changed.

Paul Ormerod

As published in City AM on Wednesday 20th July

Image: Home Secretary, Theresa May, speaking at the Girl Summit, by DFID – UK Department for International Development  licensed under CC BY 2.0

How Stalin’s right-hand man could help the UK in EU exit negotiations

The topic of behavioural economics is very fashionable, although many economists remain rather sniffy about it, arguing that it often does not really add to what the discipline already knows. But one of its most distinctive and strongest results from a policy perspective is its emphasis on what is called the “architecture of choice”.

Economists love jargon phrases. But this particular one is in essence very simple. In any given context, the rules which are drawn up for the process of choice can have an absolutely decisive impact on the actual decisions.

For example, before the referendum the government struggled against opposition in the Lords to get through a bill on trade union political funds. At present, the costs of any political fund operated by a union are automatically deducted from a member’s dues. A member has to positively opt out if he or she does not want to pay it. The proposal was to make all members “opt in” so they would only pay into the fund if they take action to do so. The government has partially backed down on the measure, and it will now only apply to new members.

The reason the opposition has been so bitter is that how the choice is put will have a dramatic effect on the outcome. The “architecture of choice” will determine whether most union members pay the political levy or whether most do not. From a purely rational perspective, the only additional cost under “opting in” is trivial: the few minutes it would take to fill the form in. But in practice, under “opt in”, most people would not bother.
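The default effect can be illustrated with a toy simulation. The parameters are pure assumptions, not data: suppose half of members genuinely want to pay the levy, and nine in ten never get round to filling in any form either way:

```python
import random

# Toy simulation of the default effect behind "opt in" vs "opt out".
# ASSUMPTIONS: the 50% share who actually want to pay the levy and the
# 90% inertia rate are invented for illustration.
random.seed(0)
WANT_TO_PAY = 0.5   # assumed share of members who want to pay
INERTIA = 0.9       # assumed probability a member never touches the form

def share_paying(default_is_paying: bool, n: int = 100_000) -> float:
    """Share of members paying the levy under the given default."""
    paying = 0
    for _ in range(n):
        wants = random.random() < WANT_TO_PAY
        acts = random.random() > INERTIA   # member bothers to fill in the form
        if default_is_paying:
            # opt-out regime: pays unless they dislike the levy AND act to leave
            pays = wants or not acts
        else:
            # opt-in regime: pays only if they like the levy AND act to join
            pays = wants and acts
        paying += pays
    return paying / n

print(share_paying(True), share_paying(False))
```

On these assumed numbers, around 95 per cent pay under opt-out but only around 5 per cent under opt-in, even though exactly the same half of the membership wants to pay in both cases. That is the entire political argument in four lines of logic.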

The UK faces a crucial architecture of choice problem with the now notorious Article 50 of the EU’s Lisbon Treaty. In order to leave the EU, a member state has to invoke the article. Once this is done, there is a period of two years within which the terms of exit are negotiated. When the two years are up, the deal is simply what it is at the time. In theory further changes can be made, but since these would require the unanimous consent of all EU member states, this is highly unlikely to happen.

So once we invoke Article 50, the EU has us over a barrel. The French, say, could simply sit there stalling for time and blocking all our proposals. Of course, they would never stoop so low. But if some other country did, we would just have to take what the EU gave us at the end of the two years. This is why we have to have extensive informal negotiations before Article 50 is triggered, which EU leaders say mustn’t happen. The Swiss drew up their first treaty with the EU in 1972 and are still negotiating.

The only alternative is to adopt the strategy of Molotov, Stalin’s right-hand man, at the United Nations in the middle of the last century. He simply said “no” to virtually everything. Until we get informal talks, we turn up at the Council of Ministers and veto every proposal on any subject whatsoever, regardless of its merit. A suitable job for Michael Gove, perhaps.

As published in City AM on Wednesday 6th July 2016

Image: LC-USZ62-135316 by National Museum of the U.S. Navy is licensed under CC 1.0
