How fake news spreads like a real virus – Stanford University School of …

When it comes to real fake news, the kind of disinformation that Russia deployed during the 2016 elections, going viral isn't just a metaphor.

Using tools for modeling the spread of infectious disease, cyber-risk researchers at Stanford Engineering are analyzing the spread of fake news much as if it were a strain of Ebola. "We want to find the most effective way to cut the transmission chains, correct the information if possible and educate the most vulnerable targets," says Elisabeth Paté-Cornell, a professor of management science and engineering. She has long specialized in risk analysis and cybersecurity and is overseeing the research in collaboration with Travis I. Trammell, a doctoral candidate at Stanford. Here are some of the key findings:

The researchers have adapted a model for understanding diseases that can infect a person more than once. It tracks how many people are "susceptible" to the disease, or in this case, likely to believe a piece of fake news; how many have been "exposed" to it; how many are actually "infected" and believe the story; and how many are likely to spread it.
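The article does not give the model's equations, but a minimal sketch of this kind of compartmental model, assuming simple SEIS-style dynamics (susceptible, exposed, infected, back to susceptible) with purely illustrative parameter values, might look like this:

```python
# Minimal SEIS-style compartmental model of fake-news spread.
# S: susceptible (likely to believe), E: exposed (have seen the story),
# I: infected (believe the story and may spread it). Unlike classic SIR,
# corrected believers return to S, so they can be reinfected.
# All parameter values are illustrative assumptions.

def step(S, E, I, beta=0.3, sigma=0.5, gamma=0.1, dt=1.0):
    """Advance the compartments one time step (forward Euler)."""
    N = S + E + I
    new_exposed = beta * S * I / N   # susceptibles who encounter the story
    new_believers = sigma * E        # exposed people who come to believe it
    corrected = gamma * I            # believers who are corrected...
    S += dt * (corrected - new_exposed)   # ...and become susceptible again
    E += dt * (new_exposed - new_believers)
    I += dt * (new_believers - corrected)
    return S, E, I

# A population of 10,000 with a single initial believer.
S, E, I = 9999.0, 0.0, 1.0
for _ in range(100):
    S, E, I = step(S, E, I)
print(f"Believers after 100 steps: {I:.0f}")
```

Because corrected believers return to the susceptible pool rather than gaining permanent immunity, the same story, or a new strain of it, can reinfect them later.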

Much as with a virus, the researchers say that being exposed to multiple strains of fake news over time can wear down a person's resistance and make them increasingly susceptible. The more times a person is exposed to a piece of fake news, especially if it comes from an influential source, the more likely they are to be persuaded, or "infected."
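One simple way to express this wearing-down effect, as an illustrative formulation rather than the study's actual model, is to let each exposure independently erode a person's remaining resistance, weighted by the influence of the source:

```python
# Illustrative repeated-exposure model: each exposure has some chance of
# persuading the target, and exposures from influential sources count more.
# p_base and the influence weights are assumed values, not fitted ones.

def prob_persuaded(influences, p_base=0.05):
    """Probability of belief after a sequence of exposures.

    influences: one weight per exposure (1.0 = average source;
    higher = more influential). Resistance decays multiplicatively,
    so the probability of belief only grows with each exposure.
    """
    resistance = 1.0
    for w in influences:
        resistance *= 1.0 - min(1.0, p_base * w)
    return 1.0 - resistance

print(prob_persuaded([1.0]))            # one average exposure:        0.05
print(prob_persuaded([1.0] * 5))        # five average exposures:     ~0.23
print(prob_persuaded([1.0, 1.0, 5.0]))  # two average + one influencer: ~0.32
```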

The so-called "power law" of social media, a well-documented pattern in social networks, holds that messages replicate most rapidly if they are targeted at relatively small numbers of influential people with large followings.
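A quick simulation shows why that targeting pays off, assuming follower counts drawn from a heavy-tailed Pareto distribution (the exponent and population size here are arbitrary illustrative choices):

```python
# Why targeting a few influencers pays: with a heavy-tailed (Pareto)
# follower distribution, the top accounts dominate total reach.
import random

random.seed(0)
followers = [int(10 * random.paretovariate(1.2)) for _ in range(100_000)]

k = 100
top_k_reach = sum(sorted(followers, reverse=True)[:k])
random_k_reach = sum(random.sample(followers, k))
print(f"First-hop reach, top {k} accounts:    {top_k_reach:,}")
print(f"First-hop reach, {k} random accounts: {random_k_reach:,}")
```

Seeding the same message to a handful of large accounts reaches orders of magnitude more people in the first hop than seeding it to the same number of random accounts.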

Researchers are also looking at the relative effectiveness of trolls versus bots. Trammell says bots, which are automated programs that masquerade as people, tend to be particularly good for spreading massive numbers of highly emotional messages with little informational content. Think here of a message with the image of Hillary Clinton behind bars and the words "Lock Her Up!" That kind of message will spread rapidly within the echo chambers populated by those who already agree with the basic sentiment. Bots have considerable power to inflame people who are already like-minded, though they can be easier to detect and block than trolls.

By contrast, trolls are typically real people who spread provocative stories and memes. Trolls can be better at persuading people who are less convinced and want more information.

Paté-Cornell and Trammell say there is considerable evidence that the elderly, the young and the less educated are particularly susceptible to fake news. But in the broadest sense it is partisans at the political extremes, whether liberal or conservative, who are most likely to believe a false story, in part because of confirmation bias: the tendency in all of us to believe stories that reinforce our convictions. The stronger those convictions, the more powerfully a person feels the pull of confirmation bias.

Paté-Cornell and Trammell say that, much like ordinary crime, disinformation will never disappear. But by learning how it is propagated through social media, the researchers say it's possible to fight back. Social media platforms could become much quicker at spotting suspect content. They could then attach warnings, a form of inoculation, or quarantine more of it.
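In the compartmental sketch above, those two defenses map naturally onto different parameters. The following illustrative extension, with assumed rates, shows warnings lowering the belief rate (sigma) and quarantines lowering the transmission rate (beta):

```python
# Mapping the two defenses onto the earlier compartmental sketch
# (illustrative): a warning label makes exposed users less likely to
# become believers (lower sigma); quarantining content slows its
# transmission (lower beta).

def step(S, E, I, beta, sigma, gamma=0.1, dt=1.0):
    N = S + E + I
    new_exposed = beta * S * I / N
    new_believers = sigma * E
    corrected = gamma * I
    return (S + dt * (corrected - new_exposed),
            E + dt * (new_exposed - new_believers),
            I + dt * (new_believers - corrected))

def believers(beta, sigma, steps=100):
    S, E, I = 9999.0, 0.0, 1.0
    for _ in range(steps):
        S, E, I = step(S, E, I, beta, sigma)
    return I

print(f"No intervention (beta=0.3, sigma=0.5): {believers(0.3, 0.5):6.0f}")
print(f"Warning labels  (beta=0.3, sigma=0.1): {believers(0.3, 0.1):6.0f}")
print(f"Quarantine      (beta=0.1, sigma=0.5): {believers(0.1, 0.5):6.0f}")
```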

The challenge, they say, is that protection has costs: financial costs as well as reduced convenience and limitations on free expression. Paté-Cornell says the dangers of fake news should be analyzed as a strategic management risk, similar to how we have traditionally analyzed the risks posed by cyberattacks aimed at disabling critical infrastructure. "It's an issue of how we can best manage our resources in order to minimize the risk," she says. "How much are you willing to spend, and what level of risk are we willing to accept?"
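That question has a standard risk-analysis shape: choose the level of protection that minimizes spending plus expected residual loss. The numbers below are invented purely for illustration:

```python
# Illustrative risk-management comparison with invented numbers:
# total expected cost = protection spending + (residual risk x damage).

damage = 100.0  # expected loss, in $M, if a disinformation campaign succeeds

options = {                       # option: (spend in $M, residual risk)
    "do nothing":        (0.0, 0.50),
    "basic detection":   (5.0, 0.20),
    "aggressive review": (30.0, 0.02),
}

for name, (spend, risk) in options.items():
    total = spend + risk * damage
    print(f"{name:18s} spend={spend:5.1f}  risk={risk:.2f}  "
          f"expected cost={total:5.1f}")
```

Note that in this made-up example the middle option wins: past some point, extra protection costs more than the risk it removes.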

Fake news is already a national security issue. But Paté-Cornell and Trammell predict that artificial intelligence will turbocharge fake news in the years ahead. AI will make it much easier to target people with fake news or deepfake videos (videos that appear real but have been fabricated in whole or in part) that are finely tailored to what a susceptible viewer is likely to accept and perhaps spread. AI could also make it easy to create armies of more influential bots that appear to share a target's social background, hometown, personal interests or religious beliefs. Such hyper-targeting would make the messages much more persuasive. AI also shows great potential to counter this scourge by identifying fake content in all forms, but only time will tell who prevails in this new arms race.

Related | Elisabeth Paté-Cornell, the Burt and Deedee McMurtry Professor in the School of Engineering.

