But these concerns are largely tertiary to the disinformation question. Americans have become intensely focused on how tech companies handle voices we find distasteful, repugnant or dangerous. My sense is that this has happened because social media largely serves up a world of entertainment, news and sports. It also allows its users to believe that they are participating in activism by posting about everything from police brutality to the Oscars, especially when their sentiments are part of a groundswell of opinion. As a result, online outrage will almost always be about things that are consumed online, like Rogan's podcast, actors and comedians who say something offensive and some supposedly salacious books that are being dubiously canceled by the online right.
The ecosystem is closed and, at this point, almost entirely self-referential. News media, entertainment and sports go in; outrage over news media, entertainment and sports comes out.
Within these parameters, does the fight over disinformation simply mark the limit of what we are willing to do in the name of change? Do we care deeply because we really believe that people are being led astray? Or are we just responding to what's in front of us and admitting that while our political imaginations might be limited, we at least can clean up our timelines? Disinformation is certainly a real concern, but it also allows us to pretend that all the country's problems can be solved by better algorithms and terms of service.
The effects of this aphasia have bled out into other parts of our daily interactions. Big Disinfo now shapes how we think about our fellow citizens, especially those we think are in thrall to a magical Facebook post. After the 2020 election, the news was filled with stories about how minority communities, particularly Asian American and Latino ones, had been bombarded with foreign-language disinformation campaigns.
This particular disinformation panic coincided with a shift in both of those demographics toward the Republican Party, one that has mostly continued over the past two years. The implication was that these voters had somehow been tricked by right-wing messaging into abandoning the Democratic Party or, at the very least, its ideals. A 2018 paper from the UCLA Civil Rights Project, for example, argued that Asian American voters who opposed affirmative action had fallen for misinformation. That idea implies that if we just shut down the sources of misinformation, everyone will suddenly line up to vote for progressive candidates. It is also a broken way to think about our neighbors and fellow citizens.
There may very well be some misinformation about race-based preferences in college admissions floating around somewhere on the internet, but it's far more likely that Asian Americans, many of whom believe that elite colleges are discriminating against them, simply oppose racial preferences out of pure self-interest. In these instances, the charge of misinformation obscures more than it illuminates.
At the same time, it's true that too many people believe dubious sources of information on the internet. In 2019, researchers at Stanford published a study about how well American high schoolers could discern online disinformation. Of the more than 3,000 students who were shown a grainy video claiming to show ballot stuffing in the 2016 Democratic primaries, 52 percent believed it showed strong evidence of voter fraud. (The video was shot in Russia.) The report also found that 96 percent of students did not consider why ties between a climate change website and the fossil fuel industry might lessen that website's credibility. When asked how they evaluated the credibility of a site, they focused on superficial markers of credibility: the site's aesthetics, its top-level domain or how it portrayed itself on the About page.