The social media battle against fake news has begun – beware your own emotions
Did Donald Tusk, the former Prime Minister of Poland and now president of the European Council, conspire with Vladimir Putin to murder the President of Poland, Lech Kaczynski?
Many Poles believe this preposterous story, I learned last week at a fascinating conference on social influence at the University of Warsaw.
In 2010, a Polish Air Force plane carrying Kaczynski crashed in Russia. An investigation by both Polish and Russian experts concluded that it was a pure accident. But on social media, where people are influenced by others, Tusk is widely thought to be complicit.
Coincidentally, the largest ever study on fake news was published last week in “Science”, probably the world’s leading scientific journal. Over 100,000 stories tweeted by some three million users were analysed over a 10-year period by a team led by Soroush Vosoughi, a data scientist at MIT.
There are two key ways to measure the spread of a tweet. The first is, quite simply, the number of users who retweet it. The second is the length of the chain of retweets it passes through. Most tweets are never retweeted at all. But if a friend retweets your tweet, and someone else then retweets your friend's retweet, its "length" is two.
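These two measures can be sketched in a few lines of code. This is a minimal illustration, not the MIT team's actual methodology; the tweet IDs and the edge-list representation are hypothetical.

```python
# Sketch: measuring a tweet's spread by (1) retweet count and
# (2) cascade "length" (depth of the longest retweet chain).
# Tweet IDs and data format are illustrative assumptions.

from collections import defaultdict

def cascade_metrics(retweets):
    """retweets: list of (child_id, parent_id) edges;
    parent_id is None for the original tweet."""
    children = defaultdict(list)
    root = None
    for child, parent in retweets:
        if parent is None:
            root = child
        else:
            children[parent].append(child)

    def depth(node):
        # Length of the longest chain of retweets below this node.
        if not children[node]:
            return 0
        return 1 + max(depth(c) for c in children[node])

    size = sum(len(v) for v in children.values())  # total retweets
    return size, depth(root)

# Original tweet "t0"; a friend retweets it ("t1"), and someone
# then retweets the friend's retweet ("t2") -> length of two.
edges = [("t0", None), ("t1", "t0"), ("t2", "t1")]
print(cascade_metrics(edges))  # (2, 2)
```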
The conclusion of Vosoughi's research is rather depressing. On both measures, fake news and rumours spread much faster and reach more people than accurate stories.
The academics offer two explanations for their findings. Fake news seems to have more novelty for users than real news. And fake news tweets typically show a much higher level of emotion in their overall content.
The impact of emotion on influence is supported by the work of Serge Moscovici, a French social psychologist who was mentioned a lot at the social influence conference in Warsaw.
Moscovici, who died in 2014, is famous in psychology circles for his research on how minorities can exert influence on the opinions of the majority.
His best-known experiment was based on what we would now call fake news. Participants sat in a group and were shown a series of slides in different shades of blue, and were asked to name the colour aloud. When the game was played straight, everyone answered correctly.
But when Moscovici planted a few people who said, firmly and confidently, that a slide was green, they were able not only to change individual opinions; sometimes the majority of the group ignored the rational evidence and believed the false statement.
The MIT false news study may help lay the foundations for algorithms which could flag up fake news. For example, stories with a high emotion level which also spread rapidly and deeply could be highlighted as being potentially false.
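A flagging rule of the kind described could look something like this. To be clear, this is a hypothetical sketch of the idea, with made-up thresholds and field names, not an algorithm from the study itself.

```python
# Hypothetical heuristic: flag stories whose emotional intensity is high
# AND which are spreading unusually fast or deep. All thresholds and the
# 'emotion_score' field are illustrative assumptions, not the MIT method.

def flag_potential_fake(story,
                        emotion_threshold=0.7,
                        retweet_threshold=1000,
                        depth_threshold=10):
    """story: dict with 'emotion_score' (0-1), 'retweets', 'cascade_depth'."""
    return (story["emotion_score"] >= emotion_threshold
            and (story["retweets"] >= retweet_threshold
                 or story["cascade_depth"] >= depth_threshold))

story = {"emotion_score": 0.9, "retweets": 5000, "cascade_depth": 12}
print(flag_potential_fake(story))  # True: emotional and spreading fast
```

In practice such a rule would only shortlist candidates for human review or a downstream classifier, since many emotional, fast-spreading stories are perfectly true.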
In its early days, email faced a similar problem: identifying spam. The spammers and the "defenders" play a complex game with their competing algorithms, but it is one the spammers now usually lose.
So there is hope that, eventually, fake news can be overcome.