What Are Shadowbans And Why Do They Happen?

Less than 24 hours after a violent mob of Trump supporters stormed the United States Capitol this year, Facebook took the once-unthinkable step of banning the world's most powerful elected leader from its namesake platform and its subsidiary, Instagram. In the following days, platforms like Twitter, YouTube, Reddit, Twitch, TikTok, Snapchat and Discord followed suit, banning a constellation of accounts and groups affiliated with the sitting president, or involved in the effort to spread misinformation about the 2020 presidential election.

To many, this was a welcome, if overdue, response to then-President Donald Trump's efforts to undermine confidence in elections: 58 percent of U.S. adults supported the bans, according to Pew Research. But a number of high-profile Republicans spoke up in response, framing it as the latest ploy in an alleged conspiracy among tech companies to silence conservative voices.

Prior to this year's influx of public bans, this censorship narrative was assembled around the concept of shadow bans: a moderation technique used to secretly block users from posting to a social platform. In conservatives' telling, it is used in a targeted way to suppress political content Silicon Valley types disagree with.

To be clear, there's no reason to believe this claim of political targeting. A 2021 report from the New York University Stern Center for Business and Human Rights calls the idea that social media companies unfairly target conservatives a falsehood with no reliable evidence to support it.

The ongoing controversy surrounding shadow bans points to a tension inherent to any attempt at building a global community: Most of us want some kind of moderation, but opinions differ widely as to where lines should be drawn. And in moderation systems where everyone has something to be unhappy about, secrecy around their animating policies provides fertile ground for conspiracy theories to spread.

A moderation technique first popularized in bulletin boards and early web forums, shadow bans block users or individual pieces of content without letting the offending user know they've been blocked. To a shadow-banned user, the site continues to function as normal (they can still make posts and engage with other people's posts), but to others, the user appears to have gone silent.

"It's as if they were Bruce Willis in The Sixth Sense, and they didn't know they were dead," said Duane Roelands, who moderated a bulletin board hosted on a Rutgers University server in the late 1980s and early 1990s.

Predating the modern internet, that message board, Quartz BBS, was essentially a collection of chat rooms dedicated to specific topics, ranging from jokes to television shows and political debates, not unlike a Slack workspace or a Discord server. But the bulletin board, which Roelands accessed from his Commodore 64, only supported 10 concurrent users. The text-only rooms displayed posts in pure chronological order, and they maxed out at 200 posts, at which point old messages would get automatically deleted.

"So if a room suddenly erupted with conversation about a hot topic, an election or an important news event, what would happen is what we called scrolling the room, where messages were leaving that 200-message window very quickly," Roelands said. "Often in the case of a single day, or, with a very hotly contested topic, that scrolling could happen within an hour or even less."
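That rolling window is easy to model. Here is a minimal sketch using Python's collections.deque; the 200-post cap comes from Roelands' description of Quartz BBS, and everything else is purely illustrative:

```python
from collections import deque

# The Quartz BBS rooms described above held at most 200 posts, with the
# oldest messages dropping off as new ones arrived. A bounded deque models
# that "scrolling" behavior: once the room is full, every new post silently
# evicts the oldest one.

room = deque(maxlen=200)

def post_message(room, author, text):
    room.append({"author": author, "text": text})

# During a hotly contested debate, a burst of new posts can push the entire
# previous conversation out of the window in minutes.
for i in range(250):
    post_message(room, f"user_{i % 10}", f"hot take #{i}")

assert len(room) == 200                     # capped at the room limit
assert room[0]["text"] == "hot take #50"    # the first 50 posts have scrolled away
```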

To keep those debates from getting needlessly ugly, Roelands and his fellow moderators would shadow-ban users, either temporarily or permanently, depending on the offense.

"Offensive behavior could be anything from simply being constantly abrasive and obnoxious, to being disruptive in a room devoted to serious topics like sexual orientation, gender identity or politics," he said. "Our behavioral guidelines basically boiled down to, 'Don't be a jerk.'"

On Quartz BBS, shadow bans tended to happen in waves, as groups of ill-behaved newcomers flooded the message boards every few months. Roelands and his fellow moderators would refer to the waves as "cicada season."

While an explicitly banned user is likely to create a new account and keep posting, a shadow-banned user might conclude that other people just don't care what they have to say. Over time, the thinking goes, they will lose interest and go away.

In that sense, shadow bans are just a technical implementation of a strategy long employed by forum users ("Don't feed the troll!"), with the added benefit of not relying on users to exercise restraint.

On traditional message boards, shadow bans are a clear-cut proposition: either your posts are blocked or they aren't. And that approach makes sense when posts are served up chronologically. But modern social networks, which usually serve up content through algorithmically curated feeds, can achieve similar results through subtler means that limit certain users' reach without blocking their content entirely.
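For reference, the traditional, all-or-nothing version is simple to express in code. The sketch below assumes a hypothetical set of shadow-banned user IDs; real forums store the flag however their data model dictates:

```python
# Illustrative sketch of a forum-style shadow ban. `shadow_banned` is a
# hypothetical set of user IDs; nothing here is taken from a real platform.

shadow_banned = {"user_42"}

def visible_posts(posts, viewer_id):
    """Return the posts a given viewer should see.

    A shadow-banned author's posts are shown only to that author, so the
    site looks normal to them while everyone else sees silence.
    """
    return [
        post for post in posts
        if post["author_id"] not in shadow_banned or post["author_id"] == viewer_id
    ]

posts = [
    {"author_id": "user_7", "text": "Welcome to the forum!"},
    {"author_id": "user_42", "text": "A hot take nobody else will ever read."},
]

print(len(visible_posts(posts, "user_42")))  # 2: the banned user sees everything
print(len(visible_posts(posts, "user_7")))   # 1: everyone else sees silence
```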

One approach is to exclude a user's posts from discoverability features. In 2017, a number of photographers, bloggers and influencers noticed substantial drops in engagement with their Instagram posts. At the time, more than a dozen Instagram users told tech reporter Taylor Lorenz that shadow-banned users' posts weren't showing up in hashtag searches or on the Instagram Search & Explore tab.

Instagram didn't tell these users what they'd done wrong, but Lorenz pointed to spammy hashtag usage and unauthorized automation tools as behaviors that likely triggered the changes.

The same year, in a similar effort to make feeds less spammy, Instagram's parent company, Facebook, deployed a machine learning model to identify and reduce the reach of people and pages who rely on engagement bait. Common examples of the genre include calls to "share with a friend who's addicted to coffee," "like if you support local coffee shops" and "tag someone you want to hang out with on our patio."

In addition to limiting the reach of individual posts that employ the strategy, Facebook announced that it would demote repeat offenders at the page level.
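Facebook has not published that model, but the general shape of the intervention, classify the post and then quietly reduce its reach rather than remove it, can be sketched. In the toy example below, a keyword heuristic stands in for the machine learning classifier, and the multipliers and thresholds are invented for illustration:

```python
# Deliberately simplified sketch of downranking, not Facebook's actual system:
# the real classifier is a machine learning model, but a keyword heuristic
# stands in for it here. All names and thresholds are illustrative.

ENGAGEMENT_BAIT_PHRASES = ("tag someone", "share with a friend", "like if you")

def engagement_bait_score(text: str) -> float:
    """Crude stand-in for a trained classifier: 1.0 if bait phrasing is found."""
    lowered = text.lower()
    return 1.0 if any(phrase in lowered for phrase in ENGAGEMENT_BAIT_PHRASES) else 0.0

def adjusted_rank(base_rank: float, post_text: str, page_strikes: int) -> float:
    """Demote (rather than remove) posts that look like engagement bait.

    Pages that keep doing it get an extra penalty, mirroring the page-level
    demotion of repeat offenders described above.
    """
    penalty = 1.0
    if engagement_bait_score(post_text) > 0.5:
        penalty *= 0.5            # individual post demotion
    if page_strikes >= 3:
        penalty *= 0.5            # page-level demotion for repeat offenders
    return base_rank * penalty

print(adjusted_rank(10.0, "Tag someone you want to hang out with!", page_strikes=4))  # 2.5
print(adjusted_rank(10.0, "Our patio opens at noon today.", page_strikes=0))          # 10.0
```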

This machine learning model has since been expanded to include comments under posts. And in 2019, Facebook started flagging engagement bait in the audio content of videos as well. (R.I.P., "Don't forget to like and subscribe!" It's been real.)

According to Facebook's own documentation, the platform does not tell publishers if their pages have been demoted, or why, citing concerns that users could rely on specific details to find workarounds, a concern that hearkens back to the early days of shadow bans.

"We figured out early on that if you clearly defined what was acceptable behavior and what was not, people would always find a way to follow the letter of the law while violating the spirit of the law," Roelands told me. "This is a behavior that has persisted online to this day, and there's never really been a good solution for it."

The Instagram bans reported by Lorenz in 2017 look quite different from traditional shadow bans from a technical perspective, but they also share important similarities. The strategy targeted accounts according to undisclosed criteria, and aside from a rapid drop in reach, users had no way to find out why, or even whether, they'd been affected by the platform's decisions.

That secrecy seems to be the only real throughline among the techniques, real and imagined, that users refer to when they talk about shadow bans.

"I don't think, when we hear the term [shadow ban], it always means the same thing," said Stephen Barnard, an associate professor at St. Lawrence University whose research focuses on the role of media and technology in fostering social change. "Often, the term is being used to describe more subtle [strategies] described by social media companies as downranking."

Facebook's effort to limit the spread of engagement bait is a typical example of downranking. In a Medium post published in March 2021, Facebook VP of Global Affairs Nick Clegg acknowledged that the News Feed also downranks content with exaggerated headlines (clickbait), as well as content from pages run by websites that generate an extremely disproportionate amount of their traffic from Facebook relative to the rest of the internet.

In cases where a platform publicly announces changes to its feed's ranking algorithm, referring to the outcome as a shadow ban feels like a bit of a stretch, though everyday users who don't follow Facebook's product blog may beg to differ.

Demoting pages for being disproportionately successful at drawing Facebook traffic seems more shadowy at first glance, but the internal logic makes sense when you consider how news sites compete for eyeballs on social networks. If a story is true and its headline is accurate, a number of credible news outlets will quickly corroborate it. As a result, users will share multiple versions of the same story from different, competing outlets. Conversely, if a story relies on sketchy sourcing, or if its headline makes claims unsupported by the reporting, the article can become a permanent exclusive: the only link available for spreading the word.

In the aggregate, therefore, a disproportionate reliance on viral Facebook hits over other sources of traffic may be a pretty good indicator that a site is willing to stretch the truth, although it's certainly possible that legitimate publications may get caught up in the dragnet.

And while the platform may not offer clear guidance on where the line is, exactly, most sites that veer into "extremely disproportionate" Facebook traffic territory probably know that they are, in fact, actively juicing the algorithm.
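Facebook has not said where "extremely disproportionate" begins, but the underlying signal boils down to a referral-traffic ratio. A sketch with a made-up threshold:

```python
# Illustrative sketch of the "disproportionate Facebook traffic" signal.
# Facebook has not published a threshold; the 0.8 figure below is made up
# purely to show the shape of the calculation.

def facebook_traffic_share(referrals_from_facebook: int, total_visits: int) -> float:
    return referrals_from_facebook / total_visits if total_visits else 0.0

def looks_disproportionate(referrals_from_facebook: int, total_visits: int,
                           threshold: float = 0.8) -> bool:
    """Flag sites whose traffic comes overwhelmingly from viral Facebook hits."""
    return facebook_traffic_share(referrals_from_facebook, total_visits) > threshold

# A site with 90,000 of its 100,000 monthly visits arriving via Facebook would
# be flagged; one with a broad mix of search, direct and social would not.
print(looks_disproportionate(90_000, 100_000))   # True
print(looks_disproportionate(20_000, 100_000))   # False
```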

But social media companies also employ strategies that look even more like traditional shadow bans, with some important adjustments to account for how users engage with their platforms.

One key difference between a traditional message board and social networks like Instagram, Facebook or LinkedIn is the overlap between the user's digital and real-life social circles. If a message board user simply stops posting one day, you might not think too much of it. But if a close friend stops showing up in your social feeds, you might ask them why they've disappeared the next time you see them.

To help moderators slow the spread of offensive content and reduce the chance of backlash, Facebook's moderation approach allows for blocked posts to remain visible to a user's first-degree connections. So instead of tilting at windmills all by yourself, you can do so in an echo chamber of people just like you.

This strategy is laid out in a 2019 patent for a system that automatically identifies and hides offensive content in Facebook groups or on pages: "In one embodiment, the blocked comments are not displayed to the forum users. However, the blocked comment may be displayed to the commenting user and his or her friends within the social networking system. As such, the offending user may not be aware that his or her comment is not displayed to other users of the forum."
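Stripped of patent language, the visibility rule is a three-way check: Is the comment blocked? Is the viewer its author? Is the viewer one of the author's friends? A minimal sketch, with hypothetical data structures standing in for Facebook's actual social graph:

```python
# Minimal sketch of the visibility rule described in the patent excerpt above:
# a blocked comment stays visible to its author and the author's friends, but
# is hidden from everyone else in the group. The data structures here are
# hypothetical, not taken from any real Facebook API.

friends = {
    "alice": {"bob", "carol"},
}
blocked_comment_ids = {"c-123"}

def can_see_comment(comment_id: str, author: str, viewer: str) -> bool:
    if comment_id not in blocked_comment_ids:
        return True                    # ordinary comments are visible to everyone
    if viewer == author:
        return True                    # the offending user still sees their own comment
    return viewer in friends.get(author, set())   # ...and so do their friends

print(can_see_comment("c-123", "alice", "alice"))  # True
print(can_see_comment("c-123", "alice", "bob"))    # True (friend)
print(can_see_comment("c-123", "alice", "dave"))   # False (rest of the group)
```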

Reasonable people may disagree on whether this can be accurately described as a shadow ban, and some have. In a discussion thread about Facebook's patent, for example, Hacker News user NoodleIncident argued that the patented system is nicer to the banned user than a shadowban, because the user's friends can still see the comments.

This might be a necessary concession to effectively keep the user from finding out about their status, the core idea of shadow banning. And ultimately, it's probably more effective because, according to Roelands, users are smarter than shadow-ban proponents tend to give them credit for.

"When users are shadow-banned, they don't know what's happened immediately," he said. "But they always find out."

The fact that common moderation techniques depend on keeping users in the dark has no doubt played a big role in turning the term shadow ban into a catch-all explanation for unexplained changes in social media performance, and a rallying cry for those who think tech companies are censoring them.

In surveying users who experienced content moderation in the run-up to the 2016 election, Sarah Myers West, now a postdoctoral researcher at New York University's AI Now Institute, found that many users saw correlations between what they referred to as online censorship and algorithmic intervention.

"My definition of content moderation might be: Was a post, photo or video you posted taken down, or was your account suspended?" West said. "But a number of folks would interpret content moderation as something like: 'I lost a lot of followers and I don't really have a good explanation for that, but I think this is an overt effort by someone at this platform to shut me down.' Or: 'My posts normally get a certain amount of engagement, but I posted about this topic, and all of a sudden my number of likes is negligible.'"

Social engagement can fluctuate for reasons that have nothing to do with censorship, of course. But West, who is also one of the conveners of the Santa Clara Principles on Transparency and Accountability in Content Moderation, said the opacity surrounding moderation systems and algorithmic feeds leaves users speculating about how it all works.

"Users would develop their own folk theories to make sense of it," West said. And those folk theories tended toward the political.

One common theory among users was that social platforms deliberately aimed to suppress their points of view. Others believed their posts were actively sabotaged by other users making concerted efforts to flag posts for violating platform policies, in turn triggering mechanisms that limit a post's reach.

"And then some people were just genuinely perplexed," West said. "They just really did not know or understand what was going on, and they just wanted an explanation of how they could modify their behavior in the future so they wouldn't encounter this kind of issue."

The lack of transparency is especially frustrating for those who depend on social platforms in their day-to-day lives. In her research, West spoke with people living with disabilities whose support networks existed primarily online. Some told her they feared losing access to these networks, not just because they'd lose a way to socialize with other people, but because they relied on these platforms to check in on each other and to reach out if they needed help.

These users stories help to illustrate an important consideration when talking about content moderation: It has real-life implications.

Chris Stedman, author of IRL: Finding Realness, Meaning, and Belonging in Our Digital Lives, sees the debate over shadow bans as a symptom of a broader anxiety about the power social media platforms have over our means of self-expression.

That's been especially true over the past year, as we've all come to rely on online platforms for much-needed social interaction. But for many, the distinction between social media and real life was fading away long before the pandemic.

"At one point, the internet was a discrete space that we could step into and out of. I have vivid memories of biking to the library as a kid and writing my name on a clipboard to use a shared computer. It was really set apart from the rest of my life," Stedman said. "Now, a bigger and bigger part of what it means to be me, how I find a sense of connection and community and express myself, has moved into digital spaces."

According to Google Trends, which measures the popularity of search terms over time, interest in shadow bans remained more or less flat from 2004 (the start of the data set) until April 2017 (when the Instagram shadow ban controversy reported by Taylor Lorenz began picking up steam). But interest really started ramping up in 2018, following a Vice story that used the term to describe a bug in Twitter's interface that prevented some conservative leaders from showing up as suggestions within its search feature. A subsequent tweet by President Trump called the mishap a discriminatory and illegal practice.

And the ring of the term itself probably added fuel to the fire.

"Shadow banning sounds quite nefarious, and I think that is part of its success in the public discourse," Barnard said. "It has this sense of a faceless entity, and it's conspiratorial. It contains a more or less explicit assertion that these liberal tech executives from California are censoring us and conspiring to force their progressive agenda throughout American politics."

In his view, the lack of insight into the inner workings of social networks plays a role too, especially among conservatives, who tend to have a lower level of trust in media institutions. Together, these factors form a perfect storm where each social post that fails to gain traction becomes another piece of evidence of a broad-based effort at suppression, as opposed to just a failed attempt at going viral.

"It becomes a seemingly plausible explanation, of course ignoring all the ways these platforms are helping them spread their messages, which of course is the deep irony of all of this," Barnard said.

And underneath it all is a kernel of truth: Social networks aren't censoring conservatives on ideological grounds, but they are trying to limit the spread of misinformation, most notably about elections and about COVID-19. And among those most vocal in accusing social platforms of liberal bias are noted purveyors of misinformation on exactly those topics: Senator Josh Hawley, who raised his fist in solidarity with rioters outside the Capitol in January; Breitbart News, which was investigated by the FBI for its role as a vector for Russian propaganda in the 2016 election; and Ben Shapiro, whose "censored" site The Daily Wire saw its social media engagement skyrocket last year.

In short, social platforms are stuck between a rock and a hard place. Downranking is essential to preventing the spread of misinformation, and offering too much transparency about automated moderation systems will make it easier for bad actors to circumvent them. At the same time, any secretive, large-scale moderation system is bound to cause frustrations like those expressed by users in West's study who said they didn't know what they'd done wrong.

The term shadow banning has taken on a life of its own, evolving from a signifier of a specific moderation technique to shorthand for anything from actual downranking to unfounded conspiracy theories. And because most people learned about shadow bans as the centerpiece of a bad-faith argument, it's hard to see how the term could return to its original meaning. According to Google Trends, interest in shadow banning as a topic reached an all-time high this January, and it continued to rise in February as well.

And on some level, maybe shadow bans were never all that great to begin with. Adrian Speyer, who is head of community at the forum software provider Vanilla Forums, urges users of his company's platform to treat shadow bans as a last resort.

"If someone is not welcome in your community, you should escort them from the premises," Speyer said.

In his view, shadow bans provide an easy way out of having difficult but important conversations about community standards. These conversations can help foster a greater sense of ownership, and empower users to help moderators as they seek to uphold those standards.

Looking back on his time as a bulletin board moderator, Roelands has also come to see shadow bans differently. For one, because there's no feedback loop directly related to a specific action, users are never given an opportunity to learn where they went wrong. But perhaps more importantly, because people could usually tell when a trouble-making user suddenly disappeared, it created an environment where otherwise-upstanding community members sought to publicly humiliate users who had been shadow banned.

"It made our community meaner," Roelands said. "It's like the difference between restorative and retributive justice. Shadow bans won't turn people into better members of the community."

At any rate, social platforms are starting to recognize that opacity is a real problem. In his March Medium post, Clegg announced that Facebook's roadmap for this year includes providing more transparency about how the distribution of problematic content is reduced.

Twitter is also rethinking its moderation policies with an eye toward increasing transparency. In late January, the company rolled out a new pilot program called Birdwatch that lets users annotate tweets they believe to be misleading. Users then vote on each others submissions, in a system that will be familiar to users of web forums like Reddit or Hacker News.

Twitter will make all data contributed to Birdwatch available to the public, along with the algorithm the Birdwatch feature uses to determine which posts rise to the top.
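Twitter publishes Birdwatch's actual scoring logic separately; the sketch below is only meant to show the basic shape of the idea, crowd-sourced notes ranked by how helpful other contributors rate them. The minimum-ratings cutoff and the helpfulness ratio are invented for illustration:

```python
# Not Twitter's actual Birdwatch scoring (which it publishes along with the
# contributed data); just a simple sketch of the idea that contributors rate
# each other's notes, and notes with the strongest "helpful" consensus rise.

from dataclasses import dataclass

@dataclass
class Note:
    text: str
    helpful_votes: int = 0
    not_helpful_votes: int = 0

    def score(self) -> float:
        total = self.helpful_votes + self.not_helpful_votes
        # Require a minimum number of ratings before a note can rank highly.
        if total < 5:
            return 0.0
        return self.helpful_votes / total

def top_notes(notes: list[Note], limit: int = 3) -> list[Note]:
    return sorted(notes, key=lambda n: n.score(), reverse=True)[:limit]

notes = [
    Note("Exercise does not protect against COVID-19.", helpful_votes=40, not_helpful_votes=5),
    Note("Seems fine to me.", helpful_votes=2, not_helpful_votes=1),
]
print(top_notes(notes)[0].text)   # the well-rated note rises to the top
```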

At the time of this writing, the No. 1 post on Birdwatch was an exercise video posted by Republican Congresswoman Marjorie Taylor Greene with the caption: "This is my Covid protection." The current top note labels the post as potentially misleading, citing CDC guidance on COVID-19 prevention: "Exercise does not offer protection against COVID-19. Wearing a mask, staying socially distant, washing your hands, and getting vaccinated are the best ways to protect yourself and others."

As far as warnings go, this label is more specific and actionable than the generic "This claim is disputed" tag rolled out during the vote count following the presidential election. But perhaps most importantly, the system gives end users an opportunity to set their own standards and give each other clear feedback on what's in bounds and what isn't.

These features are unlikely to solve the companies' moderation problems for good. Facebook's user base includes more than a third of the world's population, which makes creating any agreed-upon set of community standards impossible. And an upvote-driven moderation system like Birdwatch could devolve into something resembling mob rule.

But both social media giants seem committed to giving their users more insight into why posts are downranked or flagged.

And that's a start, at least.
