
Category Archives: Fake News

What is Fake News | Center for Information Technology and …

Posted: June 2, 2021 at 5:42 am

Clickbait refers to a headline or the leading words of a social media post (the teaser message) written to attract attention and encourage visitors to click a target link to a longer story on a web page [4]. Clickbait offers odd, amazing, or suspenseful phrases that induce curiosity and entice people to want to know more. Like this:

[Example clickbait image; source: Medium.com]

They don't need pictures to be clickbait, either.

Clickbait is a common way that fake news (and any kind of content) is spread. Clickbait depends on creating a curiosity gap, an online cliffhanger of sorts that poses headlines that pique your curiosity and lead you to click the link and read on. The gap between what we know and what we want to know compels us to click. To an extent, the more outrageous a teaser message is, the more successful clickbait may be.

Besides curiosity and outrage, clickbait often uses a number of language characteristics that draw people in. Many clickbait headlines offer a list of some kind ("these 10 things that will blow your mind about...") and the titles have a number in them (and usually start with it) [6]. According to a review by Martin Potthast and colleagues [4], clickbait teasers contain strong nouns and adjectives, but use simple, easily readable language. They use "these" and "this" a lot.
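To make those cues concrete, here is a minimal, hypothetical scoring sketch. It is not the detector from Potthast and colleagues [4], which relies on hundreds of features and a trained model; it simply flags a few of the surface signals described above, with arbitrary weights chosen for illustration.

```python
import re

# Hypothetical cue list based on the surface signals described above;
# the weights are arbitrary and for illustration only.
CLICKBAIT_CUES = [
    (r"^\d+\b", 1.0),                  # title starts with a number ("10 things...")
    (r"\b(these|this)\b", 0.5),        # heavy use of demonstratives
    (r"\b(blow your mind|you won't believe|shocking)\b", 1.0),  # curiosity-gap phrases
]

def clickbait_score(headline: str) -> float:
    """Return a crude score; higher means more clickbait-like."""
    text = headline.lower()
    return sum(w for pattern, w in CLICKBAIT_CUES if re.search(pattern, text))

for h in ("10 Things That Will Blow Your Mind About Sleep",
          "City Council Approves 2022 Budget"):
    print(f"{clickbait_score(h):.1f}  {h}")
```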

You see these attention-getting strategies in conventional tabloids, too, like the National Enquirer. They're the kind of goofy leads that The Onion likes to parody.

Clickbait motivates further reading, instantly, and further reading generates advertising revenue for website publishers, so it's a widespread practice. Fake news headlines often look this way, just as they did in the fake news peddled by tabloids and in the era of yellow journalism.

References

[1] H. Allcott and M. Gentzkow, "Social Media and Fake News in the 2016 Election," Journal of Economic Perspectives, vol. 31, no. 2, pp. 211–236, May 2017.

[2] D. M. J. Lazer et al., "The Science of Fake News," Science, vol. 359, no. 6380, pp. 1094–1096, Mar. 2018.

[3] E. C. Tandoc, Z. W. Lim, and R. Ling, "Defining 'Fake News': A Typology of Scholarly Definitions," Digital Journalism, vol. 6, no. 2, pp. 137–153, Feb. 2018.

[4] M. Potthast, S. Köpsel, B. Stein, and M. Hagen, "Clickbait Detection," in Advances in Information Retrieval: 38th European Conference on IR Research, ECIR 2016, Switzerland: Springer, 2016, pp. 810–817.

[5] Y. Chen, N. J. Conroy, and V. L. Rubin, "Misleading Online Content: Recognizing Clickbait as 'False News'," 2015. [Online]. Available: http://dl.acm.org/citation.cfm?doid=2823465.2823467. [Accessed: 03-Aug-2018].

[6] B. Vijgen, "The Listicle: An Exploring Research on an Interesting Shareable New Media Phenomenon," Studia UBB Ephemerides, vol. 59, no. 1, pp. 103–122, Jun. 2014.


Fake News Quotes (152 quotes) – Goodreads

Posted: at 5:42 am

We know from subsequent leaks that the president was indeed presented with information about the seriousness of the virus and its pandemic potential beginning at least in early January 2020. And yet, as documented by the Washington Post, he repeatedly stated that it would go away. On February 10, when there were 12 known cases, he said that he thought the virus would go away by April, with the heat. On February 25, when there were 53 known cases, he said, "I think that's a problem that's going to go away." On February 27, when there were 60 cases, he said, famously, "We have done an incredible job. We're going to continue. It's going to disappear. One day, it's like a miracle, it will disappear." On March 6, when there were 278 cases and 14 deaths, again he said, "It'll go away." On March 10, when there were 959 cases and 28 deaths, he said, "We're prepared, and we're doing a great job with it. And it will go away. Just stay calm. It will go away." On March 12, with 1,663 cases and 40 deaths recorded, he said, "It's going to go away." On March 30, with 161,807 cases and 2,978 deaths, he was still saying, "It will go away. You know it, you know it is going away, and it will go away. And we're going to have a great victory." On April 3, with 275,586 cases and 7,087 deaths, he again said, "It is going to go away." He continued, repeating himself: "It is going away. I said it's going away, and it is going away." In remarks on June 23, when the United States had 126,060 deaths and roughly 2.5 million cases, he said, "We did so well before the plague, and we're doing so well after the plague. It's going away." Such statements continued as both the cases and the deaths kept rising. Neither the virus nor Trump's statements went away. – Nicholas A. Christakis, Apollo's Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live


The Five Types of Fake News | HuffPost

Posted: at 5:42 am

Not all fake news is the same. To paraphrase George Orwell, some news is more fake than others.

So how do you tell the difference? Here's my quick guide to the five types of fake news you may see in your everyday life:

1. 100% False. Pope Francis is dead. So is Paul McCartney. At least they were -- if you believed what you saw on social media. The Pope and Sir Paul are just a few of the celebrities whose deaths have been falsely reported online. But even with "RIP Paul McCartney" trending on Twitter, anyone standing next to him would have been able to see that he was clearly still alive.

2. Slanted and Biased. The Washington Times recently published their list of fake news stories -- including the "fake" story that "Climate change will produce more storms like Hurricane Katrina." While it's (thankfully) true that no storm since has matched Katrina's devastation, the Times seems to use this fact as leverage to discount the reality of climate change. Just because A. climate change can lead to major hurricanes and B. there haven't been major hurricanes doesn't mean that C. climate change isn't real.

3. Pure Propaganda. The Washington Post recently reported on a "sophisticated Russian propaganda campaign that created and spread misleading articles online" during the election campaign. While the accusations are still flying back and forth, it's fair to say that some fake news appears specifically designed to influence the reader's opinion in a certain direction.

4. Misusing the Data. "Have a Beer, It's Good for Your Brain," reported Inc. But you should wait a minute before you grab a pint (or two). The study was done on mice -- not people. And the amount of beer was the equivalent of 28 kegs in humans. This is a great example of how the media often misinterprets research, offering up eye-catching findings that don't really apply to you, and often aren't supported by the science.

5. Imprecise and Sloppy. "1 in 5 CEOs are Psychopaths, Study Finds." This headline from The Guardian caught my eye, since I am a CEO (and I'm not a psychopath). But the headline is wrong. The research was based on a survey of professionals in the supply chain industry, not CEOs. A headline about supply chain professionals may not be as sexy, but talking about CEOs gives people the wrong impression.

Unless you swear off social media and the Internet, you probably can't avoid fake news. No media organization is immune, although some are better than others. But it's a lot easier to spot it when you understand the different types of fake news you're likely to encounter.

This blog post is a collaboration with my Everydata co-author, Mike Gluck.



Here Are The Real Fake News Sites – Forbes

Posted: at 5:42 am

The internet is teeming with fake news sites. That's not a political statement, but the conclusion of a new study by DomainTools, a security analytics company.

The new study analyzed some of the top media outlets in the U.S. to determine their susceptibility to domain-squatting and spoofed domains. The bogus URLs may spread disinformation or malicious code, according to DomainTools.

"As distrust of traditional media continues to grow, protecting the public from disinformation campaigns has become pertinent to the democratic process," says Corin Imai, a senior security advisor at DomainTools.

So which news sites have the highest fake scores? And what does it mean for the average news consumer? You'll probably be surprised by the answers.

Why study fake news sites?

Authenticity and trust are the building blocks of a terrific customer service experience. So, as a consumer advocate and a heavy consumer of news, I follow studies like this closely.

DomainTools' research shows how malicious actors use tricks like typosquatting and domain spoofing to carry out their campaigns.

Typosquatting, also called URL hijacking, relies on mistakes made by Internet users when typing a website address into a web browser. Spoofing happens when a scammer pretends to be a premium publisher. These criminal activities can potentially extract personally identifiable information, download malware to a device, or spoof news sites to spread disinformation.
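To illustrate the mechanics, here is a minimal sketch of typosquat candidate generation. This is a hypothetical illustration, not DomainTools' methodology; real analyses use far richer permutation engines plus WHOIS and DNS registration data.

```python
def typosquat_candidates(label: str) -> set[str]:
    """Generate crude typo variants of a domain label:
    omissions, doubled letters, and adjacent-character swaps."""
    out = set()
    for i in range(len(label)):
        out.add(label[:i] + label[i + 1:])                 # omission: "forbes" -> "forbs"
        out.add(label[:i + 1] + label[i] + label[i + 1:])  # doubling: "forbes" -> "fforbes"
        if i < len(label) - 1:                             # swap: "forbes" -> "ofrbes"
            out.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])
    out.discard(label)
    return out

# Each candidate would then be checked against registration records.
for v in sorted(typosquat_candidates("forbes")):
    print(v + ".com")
```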

"Its no secret that disinformation campaigns have been on the rise," Imai told me. "With the uptick in fake news sites in recent years, we were curious about the possible connection between typosquatting campaigns and the dissemination of disinformation. What we found is that domain names of top news outlets have indeed been spoofed, and subject to typosquatting techniques."

These are the most-faked news sites

Among the news site rankings, there are a few surprises. The top news site, for example, is not a national newspaper or a computer-security site but has still managed to draw a record 52 "high risk" domains, according to DomainTools. The "safest" of the sites also fits the same description.

Here's the list of publishers with the most high-risk domains:

1. Newsday (52 historical high-risk domains)

2. The New York Times (49 historical high-risk domains)

3. The Washington Post (20 historical high-risk domains)

4. The New York Post (16 historical high-risk domains)

5. Los Angeles Times (13 historical high-risk domains)

6. New York Daily News (10 historical high-risk domains)

7. USA Today (9 historical high-risk domains)

8. The Boston Globe (6 historical high-risk domains)

9. CSO (5 historical high-risk domains)

10. Chicago Tribune (5 historical high-risk domains)

DomainTools chose an initial list of media organizations based on traffic to the legitimate site.

"We had a hunch that the media organizations with the highest readerships were likely to be more lucrative for scammers seeking to spoof domain names," says Imai. "Our team compiled a list of the top media organizations based on audience size. This methodology gave us not only a set of online properties to investigate, but also a sense of the potential pool of the criminals targets."

(Oh, and in case you're wondering -- Forbes didn't make the list. It's squeaky clean.)

Why fake news sites matter

For news consumers, the biggest threat is what's referred to as "typosquatting," according to DomainTools (registering Forbs.it, for example, and filling it with bogus posts). It's a particularly potent threat, considering how frequently users misspell words, and how easy it is to fool even vigilant internet users.

Typosquatted sites can look legitimate, with valid SSL certificates and professional designs that lull Internet users into a false sense of security.

The bad guys also repurpose old, once-legitimate domains. Squatting on this formerly valid Internet real estate buys them time to iron out any inconsistencies in their attack infrastructure, allowing them to escape detection, according to DomainTools.

How to avoid fake news sites

Sites that spread disinformation often take advantage of the pace at which users skim the internet and their preferred news sources for breaking news. These campaigns could potentially steal and harvest personally identifiable information, download malware to a device or spoof news sites to spread disinformation to the public, according to DomainTools.

How do you avoid a fake news site?

Think before you click. Hover your mouse over any suspicious domain names or links to find out if they're legit. "By hovering over a domain name, you'll be able to get a glimpse to find out if they are who they say they are," says Imai.

Consider bookmarking your favorite news sources. That allows you to avoid misspelling the domain name when typing it into the address bar.

Watch out for domains that have COM-[text] in them. "We're so accustomed to seeing .com that we can easily overlook the extra text appended to it with a dash," says Imai. (A short sketch after this list shows how such links can be flagged programmatically.)

Go directly to the news source website. Don't follow a link through a newsletter or email.

Stay security savvy. "Remain educated and up-to-date on the latest scams that circulate through the web," says Imai. "Flagging suspicious emails and sending them straight to spam is also another great method to consider when steering clear from unusual activity."

Use a reliable search tool. Type the name of the news site into Google search instead of into the address field. This will prevent any typos you may make from pulling up a fake site.
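As promised above, here is a minimal sketch of how the COM-[text] trick and an untrusted host might be flagged in code. The trusted list and the naive two-label domain check are assumptions for illustration; production code should consult the public-suffix list instead.

```python
from urllib.parse import urlparse

TRUSTED = {"nytimes.com", "washingtonpost.com", "forbes.com"}  # e.g., your bookmarks

def looks_suspicious(url: str) -> bool:
    """Flag links that use the 'com-' trick (e.g. nytimes.com-breaking.news)
    or whose registrable domain is not on a trusted list."""
    host = urlparse(url).hostname or ""
    if ".com-" in host:
        return True
    parts = host.split(".")
    registrable = ".".join(parts[-2:]) if len(parts) >= 2 else host  # naive check
    return registrable not in TRUSTED

print(looks_suspicious("https://www.nytimes.com/section/world"))   # False
print(looks_suspicious("http://nytimes.com-breaking.news/story"))  # True
```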

Will this change how people consume news?

As part of my research, I asked regular news consumers if the presence of fake news sites would affect their trust in the news media. Would it surprise you to hear that the answer was "no"?

Roughly one-third of my readers said they don't trust any mainstream media outlets, including all the ones for which I write. Ouch. Another third only trusts established mainstream media outlets like this one. And the balance reflected the sentiments of Patricia Seward, a retired health care executive from Kansas City.

"I dont trust any of the news outlets," she says.

In other words, the DomainTools research, while interesting, is unlikely to change the highly polarized view of the news media in the United States.


From Facts to Fake News: How Information Gets Distorted – Knowledge@Wharton – Knowledge@Wharton

Posted: at 5:42 am

Remember the old childhood game of telephone? One kid whispers a phrase in another kid's ear, and it gets passed along until the final child in the chain repeats it out loud. Inevitably, the words change along the way, subject to the cognitive interpretation of the listener.

Retelling stories may be harmless amusement on the playground, but new research from Wharton sounds the alarm on the grown-up version by revealing how news can become more biased as it is repeated from person to person. As information travels farther away from its original source, retellers tend to select facts, offer their own interpretations, and lean toward the negative, according to the study titled "The Dynamics of Distortion: How Successive Summarization Alters the Retelling of News."

"This paper started because I was interested initially in understanding how we end up with fake news. But quickly I realized that this project was going to be about something much broader, and I think more interesting, which is how do original news stories become distorted as they're retold sequentially across people," Wharton marketing professor Shiri Melumad said in an interview with Wharton Business Daily on SiriusXM.

Social Media Amplifies Distortion

Melumad co-authored the research along with Wharton marketing professor Robert Meyer and Wharton doctoral candidate Yoon Duk Kim. The scholars analyzed data from 11,000 participants across 10 experiments and concluded that news undergoes a stylistic transformation called "disagreeable personalization" as it is retold. Facts are replaced by opinions as the teller tries to convince the listener of a certain point of view, especially if the teller considers himself more knowledgeable on the topic than his audience.

The effect is amplified on social media. Followers don't always click on shared content to read the original work for themselves, yet they often accept the conclusion or opinion proffered by the person who posted it. Melumad said that finding is both consistent with previous research and pretty scary in its implications.

"Whether we like it or not, social media has been a platform that allows for this type of retelling at a really broad scale and at a really fast pace," she noted.

The fragmentation of traditional news media into outlets that have outright bias (think Fox News to the right or The New Yorker to the left), along with the echo chamber effect, has worsened the distortion. Many people neither consume information from outside their small circle nor seek out alternative sources.

"Unfortunately, what we're seeing is this increased polarization whereby anyone who's existing outside of my echo chamber, I'm probably not going to really trust [as a] source of information," Melumad said. "Again, I think social media is worsening this matter because it's so easy to just operate within our respective echo chambers."

Another disturbing finding was the trend toward negativity: even if the original story was positive, retellings tend to become more negative with each reiteration.

"The further removed a retelling is from the original source (again, think of the telephone game), the more negative and more opinionated it becomes," Melumad said. "It's really hard to turn this effect off, actually."


Nothing but the Truth

Clearing the distortion is difficult. Melumad said the responsibility for the unvarnished truth falls on both the teller to convey accurate information and the recipient to be a critical listener and seek out original content. Of course, she added, it would help if content creators would be more mindful of what they produce.

"If somehow you can incentivize writers or journalists to do their best to not sensationalize information as much, but rather relay facts in a more objective or dry manner, hopefully this would reduce this bias towards negativity," she said.

Melumad said the research left her reflecting on her own style of communication and making a few changes. Now, when she tells a friend about something she read in the news, for example, she encourages them to read the original article.

"I try to qualify my retelling by saying, 'You know, this is just my opinion on this. You should read this for yourself,'" she said.


We Cant Fight Fake News with More Fake News – OneZero – OneZero

Posted: at 5:42 am


Every piece, no matter how short, offers the writer an opportunity to cross the line: to exaggerate, fabricate, or cherry-pick facts in a way that ever-so-slightly misrepresents reality for what feels like the greater good. Whether writing an extended essay about the conflict in the Middle East, or a single tweet about Covid policies, there's always a moment where we can choose to press on the truth just a little too hard. It scores an easy hit, generates more reaction, and maybe even gets us to the next rung of social media celebrity.

But at what cost?

I've watched over the past few months as several of my colleagues have succumbed to the temptation to fight what they see as fake news with what could only be called more fake news. They are transforming from journalists into propagandists, and ultimately undermining not just their own reputations but the entire landscape of public discourse. I mean, if we so-called professionals can't do this with civility and integrity, then who can?

It tends to start on Twitter, where the absurdly low signal-to-noise ratio makes fidelity to truth seem less important than capacity for wit. For example, one respected public intellectual has gotten it into their head that the media is now surreptitiously editing its previously published stories about Covid's origins. Now that there's renewed interest in the possibility of an accident at the Wuhan facility, they believe that certain periodicals are trying to make it look like they didn't ban and censor this information last year. So they posted side-by-side versions of the piece before and after editing, saying these are the differences between the piece in March 2020 and now. It turns out the changes were made between March and April, 2020. So while technically that's between then and now, it's really between then and then.

Another writer, who has been otherwise rigorous in their reporting on the ways America has mishandled the Covid crisis, nonetheless felt compelled to cross the line. They posted pictures of Americans being subjected to the most severe mask and shield standards, and people from another country enjoying total social freedom. The headline suggested these photos were representative of the two nations' contrasting approaches to life under Covid. Dozens of people retweeted the photo, enraged at America's draconian measures. In reality, the other country's restrictions were more severe than our own, and the photo was cherry-picked from one district there with different standards. Photos from a state like Florida could just as easily have made the opposite point. Our policies may suck, but this was a reach.

Yet another journalist posted on their blog about how a hospital had started letting whites die in its exercise of an overly woke anti-racist agenda. In reality, the hospital had issued guidance to its doctors to pay extra attention to symptom complaints from Black patients. Studies had shown that Black people's symptoms were being ignored or perceived as less severe, leading to fewer necessary procedures and poorer outcomes. So it wasn't about denying necessary treatment to white people, but learning to hear better. There may be a debate to be had about whether paying special attention to people of color could lead to unintended consequences, but this post was about stoking rage, and beneath the journalist's otherwise high standards.


The list goes on. Factually true but intentionally misleading posts from people who should know better, but nonetheless got caught up in the issues they're championing. I've been there. I get it. All three of these writers and thinkers have been correct about so much, for so long, yet garnered mostly criticism for their efforts. New York Times, Washington Post, CNN, and other mainstream coverage of these writers' beats has been worse than tilted, and in some cases suspiciously resistant to viewpoints that either challenge the interests of their owners and advertisers or could be construed to offer legitimacy to anything that may have been uttered by Donald Trump. I understand how these beleaguered writers might occasionally think up clever, potentially viral comparisons and feel compelled to post them in the heat of the moment, like irresistibly good jokes.

So I did not call them out. (I've even attempted to shroud the details here.) Instead, I wrote or called them, in private, asking them to consider the inaccurate impressions they were creating. They agreed with my assessments of their truthiness, but instead of removing or adjusting the tweets or blog posts, they all doubled down on them.

One of them explained to me that it's just Twitter, not the New York Times, so it doesn't have to rise to the same standards of accuracy. Another said their posts weren't intended to inform so much as to inflame, to get people activated and angry. Like Larry Kramer's ACT UP, these provocations would stoke some necessary rage. But is that really the problem? There's not enough rage on Twitter already? (Besides, Larry Kramer used facts and performance art, not fake news.)

I admit I may have grown too intolerant. These are just tweets, after all, and I should know as well as anybody that the Twitterverse is not the place to conduct legitimate debate. And all of this fake-ish news is particularly triggering for me because of the friends I've lost over the past couple of years to QAnon and worse. They start with a few over-the-top tweets like these, and then get into a positive feedback loop of likes and follows from people just as angry as they are, while also receiving pushback from magazines and editors who don't want to publish their vitriol. Then they cry censorship and end up retreating to self-publishing platforms under the premise that their ideas are just too dangerous for the mainstream media. Once siloed, these writers become practically unreachable, trapped in filter bubbles of their own making. Their output becomes more strident and less useful.


Yet we, the egregiously uninformed public, are still depending on people who have chosen to proffer not-quite-factual, ends-justifies-the-means arguments that express whatever axe they have to grind. They draw us into the yes/no, all-or-nothing, by-any-means-necessary culture to which they have succumbed, and distance us even further from any hope of rapprochement or even just honest debate. It ultimately undermines not only their own arguments, but the whole social fabric and our collective quest as human beings to figure out what the heck is really going on here.

The lesson for me, and part of why I'm here on Medium, is to learn to be more careful about this myself, and maybe help engender a more productive form of engagement in the process. I'll be writing weekly pieces through the summer, and more after that (when I'm done with my next book). But I'll be doing so as part of a cohort of writers and a community of readers who I really hope will have each other's backs. I don't mean that we're here to defend each other's contentions, but to challenge one another to improve the rigor and honesty with which we make them.

Having each other's backs means being attentive to one another's well-being, and checking each other when one of us goes off the rails. Not with angry insults, but under the assumption of good faith. We don't usually err because we're being intentionally false; it's because we've been overwhelmed by our own passion, disgust, or righteous indignation. That's getting increasingly difficult to avoid as our society itself appears to be disengaging from both reason and understanding. (People can't even agree on who is President.) We need each other's help.

Most of all, my work here is going to be about developing better comportment: the bearing with which we engage one another and the world. For it's this moment-to-moment approach to people and their ideas that may end up more important to our collective welfare than any of the particular ideas we mean to share.

Douglas Rushkoff writes a weekly column for Medium. He's the author of twenty books on media, technology, and society, including Media Virus, Present Shock, and Throwing Rocks at the Google Bus. His latest book, Team Human, is being serialized on Medium in weekly installments. Rushkoff is host of the Team Human podcast, a professor of Media Studies at CUNY/Queens, and a graphic novelist.


Twitter is testing new warning labels to prevent the spread of fake news – iMore

Posted: at 5:41 am

Twitter is testing three new warning labels designed to help prevent the spread of misinformation. Depending on their content, tweets will carry labels reading "Get the latest," "Stay informed," or "Misleading."

The new labels were discovered by researcher Jane Manchun Wong, who shared screenshots of all three. Wong had to tweet something that would trigger all three warnings, hence the rather odd content.

Since that tweet went live, Twitter Head of Site Integrity Yoel Roth has confirmed the labels' existence, saying that they're "early experiments" while inviting feedback on their current setup.

Twitter, like other social networks, has come under fire for the ease with which misinformation can spread. These labels are one way that such a problem could be dealt with, at least in part. There's no indication if or when these labels will be made available to all or whether they'll be limited to the official Twitter apps. Whether you're using Twitter or Tweetbot, knowing what information is real and what isn't is vital.



Interview: Sumitra Badrinathan on tackling fake news and the effects of BJPs supply advantage – Scroll.in

Posted: at 5:41 am

Twitter is in the news this week in India for putting a "manipulated media" tag on posts by leaders from the Bharatiya Janata Party, containing propaganda that fact-checkers found to include misinformation. While the Indian government has turned this into a fight for narrative against the social media network, the development is also a powerful reminder that misinformation and efforts to address it will be closely watched.

Sumitra Badrinathan is a postdoctoral research fellow at the University of Oxford's Reuters Institute, who received a PhD in political science this year from the University of Pennsylvania. Badrinathan's work focuses on misinformation and comparative politics, with a focus on India.

In a recent paper based on an experiment in Bihar during the 2019 elections, for example, Badrinathan found that even an hour-long module aimed at improving people's ability to identify fake news did not necessarily make them any better at it. Even more significantly, the results found that those who identified as supporters of the Bharatiya Janata Party seemed to become worse at identifying fake news after the training module, potentially because of a "backfire effect" in which people tend to hold firmer to their beliefs after being corrected.

I spoke to Badrinathan about the Bihar experiment, what it might tell us about political identities in India, and what further research she would like to see on misinformation.

Tell me a little bit about your academic background.

I just finished a PhD in political science at the University of Pennsylvania. And I'm about to start a postdoc research position at Oxford. Before my PhD, I was born and brought up in Bombay; I grew up there before moving to the US.

I'd always been interested in politics, but it was when I moved here to study that it became clear to me that politics could also be about research and good-grounded science. So that's what I have focused on.

How did you come to work on disinformation?

First, let me say that, when the 2014 elections were going on, I was in my final year of college, and elections were happening around me for the first time in a way that I was actually able to appreciate them.

As part of that, I worked on a campaign and we went door to door to talk to people trying to get them to go out to vote. It struck me at the time that we knew very little about why a particular person casts their vote in a certain way, at least in terms of systematic data.

So I went back to the folks I was working with and said, is this tabulated? Are we knocking on doors randomly? Or do we have an idea of why we're doing this? Because it seems like people vote for candidates not only because they like them, but because of a whole host of other reasons that might have little to do with a candidate's personality or policy ideas.

It became clear to me that that sort of systematic data about these things in India was not easy to come by. Now, it is a lot easier than it was back then. But that's what got me into data and politics.

When I started my PhD, I was still interested in data science and how it could apply to politics. I was taking classes on doing experiments on big data, on advanced statistics, and so on. But I didn't exactly know what I was going to focus on at that time.

This is about 2016-2017. Misinformation was a big deal in the US because [former US President Donald] Trump had just gotten elected. At the time, more and more people in India were getting access to the internet, and I was on all of these WhatsApp groups with friends, the extended family and so forth.

I started to see similar patterns in India. In that, when there was a big election or event in the country, there would be a deluge of fake news on my phone. In the US, academic researchers were trying to see whether they could talk to people about this as an issue, and whether that would turn them around. Tech companies like Facebook and Twitter got involved. They started piloting initiatives like putting a "disputed" tag on a message to see if it had an impact.

There was nothing like that in India. For me, a light struck in my head. It became clear that I had the tools to conduct something like this, and it matters to me, because I've seen people around me succumb to false information and propaganda.

Putting two and two together, that's where I started, and I've stuck on that path.

Do you think, in the Western context, we have a good handle on this research area now?

Yes, and I can give you examples.

For one, we know that one of the largest vulnerabilities to misinformation in the West is whom you voted for. The way that affects how you come across this information is through a mechanism called motivated reasoning, which, in simple words, is to say that humans are motivated to reason in certain ways. And that reasoning, more often than not, coincides with your partisan identity.

You voted for somebody. You will feel cognitive dissonance in your mind if you shy away from trying to support the position that you already took. So we are prone to biases like confirmation bias and disconfirmation bias. And we don't want to do anything that goes against these pre-existing views, because it causes dissonance in our heads.

That concept has been shown time and again, in a variety of contexts, to affect misinformation consumption in such a strong way that not only has it reduced the effect of corrections (if somebody is correcting information that is beneficial to my party's cause, I'm more likely to not have that correction have an impact on me), but in some cases, it has also led to a backfire effect.

Which is that previously I might have believed or not believed a piece of information beneficial to my party. But once you correct it, and if you're somebody I don't like, then I double down in such a way that it doesn't matter what I thought before. I am going to say this is definitely true and not going to listen to your correction.

This is just one example. I don't think we have a clear understanding of the drivers of misinformation, and the mechanisms behind belief in it, in the Indian context. That is what a bunch of colleagues and I are working towards understanding.

How much does this veer out of political science into interdisciplinary work?

A lot of the literature that I cite comes from psychology and cognitive sciences, because we are ultimately talking about how the human mind confirms and believes things. And in general, more than political science, it's political communication. That's my minor in my PhD also.

You've said there isn't much work on India, but is there research on other non-Western spaces?

Very little. In general, it's limited to contexts we would characterise as developed, and where they use more public platforms like Facebook and Twitter. So naturally, the solutions that we come up with will be tailored to those platforms, which is why I keep talking about how it's hard to imagine those solutions applying to not just India, but a large majority of the world, that is using WhatsApp or other private applications like Signal or Telegram.

Tell me about the misinformation experiment in Bihar.

The 2019 elections were coming up, and I wanted to do something around that, because we know misinformation would start to rise. So it seemed to be a good opportunity to go into the field. But I wasn't sure what I would actually do.

One of the things that had been tried [elsewhere] was telling people beforehand that misinformation was out there and reminding them that they should try to analyse information with the goal of accuracy. And that has led people towards better information processing in the past.

I liked that idea. Before running a study, I talked to a bunch of people, knocking on doors and holding focus groups, and found out that a lot of people, especially older folks, were getting on the internet for the first time. People whose families had saved up to buy mobile phones, with one per household.

And this led to a series of observations that weren't in my mind before I went to the field, which is that people, because they're new to the internet, weren't aware of the concept of misinformation to begin with.

That might seem like a bad thing, but for the study, it was like working with a blank slate. This is an opportunity to teach people that there is news out there that is not entirely true. And maybe we can teach people to become more careful news consumers.

So that was the premise of the study. We selected a set of households. For each household, we had an enumerator go and talk to them, sometimes for close to an hour, about misinformation. For some, the idea itself was a surprise because they said things like, "it's on my phone, it must be true," because the phone was to them an elite authority source.

So we talked to them about sources, saying you can trust some and distrust others. We talked to people about some fake stories that had gone viral at the time. We printed out four of these stories: the original image, and then a small bubble next to it explaining what was wrong.

And the enumerators explained to people that these are just four examples, but we want to show you how the things you come across on the phone may or may not be true. We talked to people about ways they can go about countering these stories, like reverse image searches, or going to fact checking websites. And we left behind a flyer with tips to spot misinformation.

And then people voted in the general election. After that, we went back to the same households to measure whether what we did worked or not.

I don't want to get too technical, but the experiment part was that only some households, randomly chosen, were talked to about misinformation; some were not. The key thing we're interested in finding out is the difference between the houses that were given the treatment and the ones that were not.

Now obviously, we don't know how people voted, but the premise was if misinformation can affect your opinion, that affects your voting behaviour. So we went back after the elections, after voting but before results had been announced, because we didn't want results to affect the way people answered our final questions.

So we went back and measured through a series of questions whether people got better at identifying fake news.
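For readers who want the mechanics, here is a minimal sketch of the comparison such a randomized design implies: the difference in mean identification accuracy between treatment and control groups. The numbers are invented for illustration; this is not Badrinathan's data or code.

```python
import math
import statistics

# Hypothetical fake-news identification accuracy per household (invented data).
treatment = [0.62, 0.55, 0.70, 0.48, 0.66, 0.59]
control   = [0.60, 0.58, 0.65, 0.50, 0.64, 0.61]

def diff_in_means(t, c):
    """Average treatment effect with a large-sample standard error."""
    ate = statistics.mean(t) - statistics.mean(c)
    se = math.sqrt(statistics.variance(t) / len(t) + statistics.variance(c) / len(c))
    return ate, se

ate, se = diff_in_means(treatment, control)
print(f"ATE = {ate:+.3f} (SE = {se:.3f})")  # near zero here, echoing the null result described below
```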

And the results were somewhat surprising to you?

I don't know if they were surprising as much as I would be lying if I said they weren't disappointing. Obviously you want something to work.

In the literature which I'm talking about, people haven't done this thing where someone goes and talks to respondents about misinformation, with an up-to-one-hour-long module that combined a bunch of different things that I would call more pedagogical or learning-focused. It hasn't been done.

All of the solutions have involved one-line nudges or push notifications, that sort of thing. This was a much more evolved intervention. Just on that basis, I expected it to work.

But second, there are normative implications. If misinformation is such a big problem for people's opinions, and they're casting votes on the basis of it, for the health of democracy, you want something like this to work.

Which is why it was disappointing to find that in general, the whole intervention did not work. The difference between the treatment group and the control group was zero. The group that did not get any of the training was no worse at identifying misinformation than the group that did.

There was also a more surprising part. I broke up the sample of respondents into people based on their party or whom they said they liked, which in practice meant people who liked or preferred the BJP or BJP allies at the Centre, and those who said anything else.

Remember the backfire effect, which is when people's affinities towards their party are so strong that they double down on something that you're telling them is false. That happened here.

Respondents who said they supported the BJP became worse at identifying misinformation when they got the training. They were better before; the training significantly decreased their ability to identify misinformation.

For people who said they did not support the BJP, they were not very good beforehand (meaning in the control group), but after the training they were able to improve their information processing.

Essentially, the treatment worked in opposite ways for both of the subgroups, which I had not expected at all. When we talk about parties in India, nothing in the literature says that we should expect party identities to be so strong and consolidated to the point where they affect people's attitudes and behaviours. That's not to say that people aren't sure who they are voting for. That's to say that voting may or may not happen on the basis of ideology and identity. People vote for a host of different reasons.

This is what the literature on India in comparative politics has shown. So to find that your identity in terms of who you support politically, as opposed to other identities like religion, caste, and so on, can be so strong that it can condition your responses on a survey, that too only for one set of partisans, that's something that hadn't been found before.

My understanding of the backfire effect is that the research in the US has been muddled: it exists in some contexts, but not in others.

That's right. The backfire effect is one of those things we've gone a bit back and forth about. I'm using that term in the Indian context for lack of a better word. And by this we shouldn't conclude that such an effect definitely exists.

This is the first, and to my knowledge, only field study that has been conducted on this, and we need so many more to understand if this sort of effect is replicated. One of the things that may push us towards thinking that it won't be replicated is that this was conducted during a very contentious election. And we know from previous research, not in India but in other contexts, that people's identities are stronger during elections and other contentious periods because they are salient.

Everyone around you is talking about BJP, not BJP, people are knocking on your doors asking for votes. It's likely that the salience of that identity pushed people to behave a certain way, and that if you take away the context of a contentious election, it wouldn't have happened.

We don't know whether this is limited to just this particular sample for this particular time. And it is very possible that it is. Whoever is going to read your newsletter, if people are interested in misinformation in India, we need several more people working on this to be able to say that what we know is true for sure and not limited to the context of one study.

One of the other interesting things about the paper is that, before the intervention, it seemed that those who said they supported the BJP were better than others at discerning fake news?

Yes, and that's a puzzle. There are a couple of different explanations, both anecdotal. One is that the BJP has a supply-side advantage. When it comes to misinformation, most of the political misinformation out there almost always has the BJP name on it. Either the misinformation is favouring the BJP, or countering it.

But in my experience, the BJP is always referenced. And this is plausible because they have a supply-side advantage. We have heard about them having a war room of people to create stories.

It's possible that respondents who support the BJP are aware that they have a supply-side advantage, and in the absence of treatment, this makes them better off in a survey setting at identifying true or false stories. That's an anecdotal explanation. That non-BJP participants may or may not be aware of misinformation to the extent that BJP participants are, just because it doesn't favour them.

The second explanation is, if you look at where this better information processing for BJP respondents is coming from (this is a smaller sample, since it's just the control group), you see that the overall better rate of identification comes from their ability to identify pro-BJP stories as true.

Even in the absence of treatment, they're doing what we would expect any strong partisan to do. For non-BJP supporters, this alignment is not there in this sample. I don't know if that's super convincing, it's not to me, but it's the extent to which I can go with this data.

For the lay reader, how would you summarise the results of the non-BJP respondents?

They were worse off beforehand, but they were able to improve their information processing skills from the treatment.

But one thing I want to say is that the two sides are also very different. One side supports a party. One side is made of people who support a bunch of different parties, but the only thing they have in common is that they don't support that one party. Even ex ante, the sides aren't equal. And that's not easy to solve, because of the nature of misinformation in India, which is either pro-BJP or not.

In Bihar, at the time, if you thought of trying to find misinformation that was pro-RJD or pro-JDU, and I scoured the internet for stories like this, there weren't any. So by design it had to be like this. And that has created a little bit of an imbalance between the two groups.

We shouldn't expect them to behave the same way because one group is not bound by a common shared cause, the way that the BJP sample is, and I guess that's saying something about Indian politics in general these days.

You also find that those who are more digitally literate did not necessarily discern fake news better.

Yes, and that's a tricky one to answer. I created a measure from scratch, because everything that exists to measure digital literacy is focused on the Western context. Mine measured familiarity with WhatsApp. You can think of digital literacy in a bunch of different ways. You can think of it in terms of how someone navigates their phone, which is very difficult to measure because you have to observe people doing it. Maybe if I had gone down that road, answers would be different.

I measured by a series of questions that indicated how familiar someone was with doing different things on WhatsApp: how to create a list of people to broadcast a message to, how to mute groups and so on. And the responses were self-reported.

What we find in the Western context is those who are less digitally literate tend to be older people and they are worse at identifying misinformation. In this Bihar context, those who are better at digital literacy are not necessarily better at identifying misinformation.

One of the reasons for that is, in order to pass along misinformation, you have to have a certain amount of digital literacy to be able to do that. It is plausible that what is being measured in this context is a measure of digital familiarity that correlates with your ability to push messages forward, which may correlate with your ability to push misinformation forward, if you're so inclined.

I don't know that for sure, but that's what might be going on in this context.

So the results seem to suggest that partisan identities, or at least the pro-BJP identity, are stronger than we think. Let me bring in your other paper with Simon Chauchard, titled "I don't think that's true, bro," which seemed to suggest something slightly different.

The result of that is pretty much the opposite of this. So [the Bihar paper] was a field experiment, or a training experiment. You could think of it as a fact-checking or correction treatment.

This paper was very different. It was purely a correction experiment. The result was also very different.

In the field study, I found that on average, there was no difference between the treatment and the control groups. In this other study, which is an online one, we find that a very subtle treatment is able to move beliefs, or that people can get very easily corrected.

But there were a lot of differences in the studies, so its hard to imagine that we should expect the results should be the same.

For one, the second study was entirely online. That meant they were not just regular internet users, but those so experienced with the internet that they are signing up for online panels to take surveys. So a very different sample.

We gave people these hypothetical WhatsApp screenshots, in which two people are having a conversation with each other on a group chat. They're talking to each other about something and somebody drops a piece of misinformation, and a second user counters them.

Now they can either choose to counter them or not counter them. And if they do counter them, they can choose to counter them with some evidence or without evidence. In essence, the treatment is that one-line counter message, which acts as the correction. And we tried to play with a bunch of different messages to do this. In some cases it involved a user just simply refuting the message with no proof.

The user would say something like, "I don't think that's true, bro," which is where the title of the paper came from. And in some cases, they would refute the message with a ton of information and references.

It's an open question: Does this sort of correction work? Because, as we said before, WhatsApp can't correct messages because of their encrypted nature. So users have to correct each other. And not all of India is a setting where people are new to the internet.

We tried to see whether peer or social corrections can have an effect. And then there was the question of what kinds of corrections work.

In short, we found that any correction works to reduce people's beliefs in misinformation, and to have them process the correction. Anything. So the correction that says, "I don't think that's true, bro" works. The correction that says "I don't think this is true, but here is a paragraph on why it's not true" works equally well.

I think that was surprising to us. Similar correction experiments have been shown to work in the American context. But what was surprising to us was that the type of correction didn't seem to matter. Even the short messages without any source worked just as well, relative to the longer messages backed by some evidence.

Now this seemed to suggest that there wasn't such a strong partisan identity or motivated reasoning.

Yes. It's not to say they didn't have partisan identities. Everyone has identities. It's to say that the context you're in can bring those identities to the forefront, can make them salient.

In this online experiment, it's not a time when people are coming to your door to campaign. Elections themselves make partisanship and political identities salient. In this case, you're going online to make some extra money. You're not thinking about party politics.

The context is very different. There's some evidence of this in the American context. There's a recent paper that shows that it's the context that makes identity salient. So in the context of an election, where you're already pitting one party against another, you are naturally motivated to think in such a way that will help or hurt your party's cause.

When you think of the online experiment, which happened after the elections, this competition or win-loss framework was not in people's heads. That's not to say they didn't have partisan identities, just that the context of what was happening in the world at the time didn't activate these identities.

What other research have you been doing on this front?

I'm working on a bunch of different things. But one that's interesting me at the moment is a paper my co-author Simon Chauchard and I are working on, which is trying to understand the mechanisms of belief in WhatsApp groups. Why do people believe certain misinformation over others? And what motivates them to correct this misinformation.

One of the things we're testing is that WhatsApp groups are commonly built around a common cause: society groups, parent-teacher associations, sometimes political groups. More often than not, they're built with a certain cause and come to assume a certain identity.

Our working theory is that because they come to assume this identity, the members of the group are motivated, more often than not, to agree with each other. There's this consensus towards a shared group identity that pushes people towards agreeing, which is why a lot of misinformation may just get lost or go uncorrected.

But that also means, when somebody does correct something, it can very easily change something because the seed has been sown. That gives other people the opportunity to say, "oh yeah, you're right, I don't think this is actually true."

I have a lot of anecdotal evidence to show that this might be one of the mechanisms at play. I talked to a woman in Mumbai who, during Covid, had this piece of information that said vegetarians are immune to the Coronavirus, so eat more vegetarian food.

She forwarded that message to all of her groups. I asked her whether she thought it was true. She said, "I'm not really sure, but at that point it was 9 am, and I had to send a good morning message. So I sent this."

Which goes to show that in some contexts in India, just because of the nature of our WhatsApp groups and the pressure on people to wake up in the morning and forward something, what gets forwarded can end up being misinformation, just because of the shared identity or norms of a group.

We're testing whether breaking those norms in some way is the mechanism that leads other members to fall in line. And we're testing which is the better mechanism to explain what is going on: a shared group identity and the need to be accepted by the group, or actual belief in the message. We're doing this in the context of Covid misinformation, so look out for that working paper.

Are there others doing interesting work on this front?

We have talked about corrections. But there's a second strand of research, not to do with correction, but with quantifying the amount that's out there and maybe providing technical or AI-based solutions.

One lab doing really good work is that of Kiran Garimella at MIT. He and his lab are doing some fantastic work on trying to quantify how much misinformation is out there on WhatsApp in India and trying to see what we can do about it.

WhatsApp started public groups recently, where you can go to a link online and join, which takes away some of the privacy. Kiran and his co-authors have been scraping WhatsApp messages in these groups to give us an idea of how much is misinformation, how much comes from one party source versus another, how much is hateful speech, how much encourages Hindu-Muslim polarisation.

Some of his work is really excellent, so that's one person I definitely want to flag in this field who's doing great work.

What's one misconception you find yourself having to correct all the time, whether from fellow scholars, journalists, lay people?

It's funny, there's this meme template floating around on Twitter, called "types of academic papers," where people are coming up with common tropes in the field.

One misconception is that people, non-academics, have strong opinions on fact-checking: either fact-checking is awesome, or it doesn't work at all. But the truth is we don't know. We need to run systematic scientific studies to see if that sort of thing works, because we're interested in understanding whether the treatment works.

You can't push a fact-check out there, watch one or two people change their beliefs, and conclude that it works. Whether fact-checking works is a function of who's doing it, in what context it's being done, what kinds of fact-checks are being done, and how intense those fact-checks are. There are so many sub-questions.

That's not to say that fact-checking is not good. We need all of the normative tools we have to fight this problem. But apart from the journalists and NGOs working on it, we need more academics doing systematic studies to show under what conditions these kinds of interventions can be most effective.

We need more researchers working on this, so we can do more work and then write about it in public outlets such as yours. We know the only way to effectively measure an intervention, just like a vaccine trial, is to see the difference between those who got the dose and those who didn't.
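To make that "dose versus no dose" logic concrete, here is a minimal sketch in Python of how such a two-arm comparison might be analysed, using simulated data. Everything in it is hypothetical for illustration: the 1-to-5 belief scale, the assumed effect size, and the sample sizes are not taken from the interview or from any real study.

import random
import statistics

random.seed(42)

# Hypothetical two-arm experiment: participants are randomly assigned to
# receive a fact-check ("treatment") or not ("control"), then rate their
# belief in a false claim on a 1-5 scale (higher = stronger belief).
def simulated_belief(treated):
    # Assumed effect: the fact-check lowers mean belief from 3.5 to 3.1.
    base = 3.1 if treated else 3.5
    return min(5.0, max(1.0, random.gauss(base, 1.0)))

treatment = [simulated_belief(True) for _ in range(500)]
control = [simulated_belief(False) for _ in range(500)]

# The estimated effect of the intervention is the difference in mean
# belief between the two arms: those who got the dose versus those who didn't.
effect = statistics.mean(treatment) - statistics.mean(control)
print(f"Estimated average treatment effect: {effect:+.2f}")

A negative estimate here would indicate that the fact-check reduced belief in the false claim; repeating this comparison across contexts, fact-check formats and intensities is exactly what the systematic studies described above would do.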

That knowledge is not there, because there aren't enough of us working on it. And there is such a vast difference between the deluge of misinformation and what we're doing to counter it that sometimes it seems that whatever we do won't be enough.

But that's just to say that if we had 100 people working on it, as opposed to just 10 or 20, that would help.

Here is the original post:

Interview: Sumitra Badrinathan on tackling fake news and the effects of BJPs supply advantage - Scroll.in

Facebook and fake news: U.S. tops list of targets of foreign influence operations – Global News

Posted: at 5:41 am

The United States topped a list of the countries most frequently targeted by deceptive foreign influence operations using Facebook between 2017 and 2020, the social media company said in a new report released on Wednesday.

It also came second on a list of countries targeted by domestic influence operations in that same time period. Facebook Inc said the top sources of coordinated inauthentic behavior networks targeting the United States in the year leading up to the 2020 presidential election included domestic campaigns originating in the United States itself, as well as foreign operations from Russia and Iran.

The tallies were based on the number of "coordinated inauthentic behavior" networks removed by Facebook, a term it uses for a type of influence operation that relies on fake accounts to mislead users and manipulate public debate for strategic ends.

Facebook began cracking down on these influence operations after 2016, when U.S. intelligence concluded that Russia used the platform as part of a cyber-influence campaign that aimed to help former President Donald Trump win the White House, a claim Moscow has denied.

The company said Russia, followed by Iran, topped the list for sources of coordinated inauthentic behavior and that this was mostly rooted in foreign interference. Top targets of foreign operations included Ukraine, the United Kingdom, Libya and Sudan.

But the company also said that about half of the influence operations it has removed since 2017 around the world were conducted by domestic, not foreign, networks.

"IO [influence operations] really started out as an elite sport. We had a small group of nation states in particular that were using these techniques. But more and more we're seeing more people getting into the game," Nathaniel Gleicher, Facebook's head of security policy, told reporters on a conference call.

Facebook said the domestic influence operations that targeted the United States were operated by conspiratorial or fringe political actors, PR or consulting firms and media websites.

Myanmar was the country targeted by the most domestic inauthentic networks, according to Facebooks count, though these networks were relatively small in size.

Gleicher said threat actors had pivoted from large, high-volume campaigns to smaller and more targeted ones, and that the platform was also seeing a rise in commercial influence operations.

"I actually think the majority of what we're seeing here, these aren't actors that are motivated by politics. In terms of volume, a lot of this is actors that are motivated by money," he said. "They're scammers, they're fraudsters, they're PR or marketing firms that are looking to make a business around deception."

Facebook investigators also said they expected it would get harder to discern what was part of a deceptive influence campaign as threat actors increasingly use witting and unwitting people to blur the lines between authentic domestic discourse and manipulation.

The report included more than 150 coordinated inauthentic networks identified and removed by Facebook since 2017.

Read the original here:

Facebook and fake news: U.S. tops list of targets of foreign influence operations - Global News

Jeff Bezos exposed as the king of fake news – New York Post

Posted: May 14, 2021 at 6:50 am

Wow: It now looks like Jeff Bezos and his damage-control team just made up not one but two whole stories to deflect coverage of his affair with a then-married woman: one, a claim that the Saudis had hacked his phone to get telling texts and revealing photos; two, charges that the National Enquirer tried to blackmail him into halting his investigation into how the shots had leaked.

Eventually, the world learned that the guy who sold the info to the Enquirer was Bezos' girlfriend's brother, a Hollywood press agent: no hacking required, and nothing to make the Enquirer fear any investigation.

Brad Stone's new book, "Amazon Unbound," excerpted for Bloomberg News, details how a consulting firm helped the Amazon CEO assemble his false counterstory, which relied on the suggestion that he'd been targeted because his Washington Post was so critical of both the Saudi regime and then-President Donald Trump, and which allowed him to reveal the affair himself while pretending he was being heroic by refusing to be blackmailed.

Pretty masterful while it lasted . . . except that the owner of The Washington Post ("Democracy dies in darkness" is its self-righteous Bezos-era motto) now stands exposed as a cynical purveyor of fake news who even tried to frame a media outlet to protect his own image.

The rest is here:

Jeff Bezos exposed as the king of fake news - New York Post
