What can users do about shadowbanning?

Tech platforms use recommender algorithms to control society's key resource: attention. With these algorithms, they can quietly demote or hide certain content instead of just blocking or deleting it. This opaque practice is called shadowbanning.

While platforms will often deny they engage in shadowbanning, there's plenty of evidence it's well and truly present. And it's a problematic form of content moderation that desperately needs oversight.

Simply put, shadowbanning is when a platform reduces the visibility of content without alerting the user. The content may still be accessible, but under conditions that limit how it circulates.

It may no longer appear as a recommendation, in a search result, in a news feed, or in other users' content queues. One example would be burying a comment underneath many others.

The term "shadowbanning" first appeared in 2001, when it referred to making posts invisible to everyone except the poster in an online forum. Today's version (where content is demoted through algorithms) is much more nuanced.

Shadowbans are distinct from other moderation approaches in a number of ways, not least because they are never disclosed to the affected user.

Platforms such as Instagram, Facebook and Twitter generally deny performing shadowbans, but typically do so by referring to the term's original 2001 meaning.

When shadowbanning has been reported, platforms have explained it away by citing technical glitches, users' failure to create engaging content, or chance outcomes of black-box algorithms.

That said, most platforms will admit to "visibility reduction" or "demotion" of content. And that's still shadowbanning as the term is now used.

In 2018, Facebook and Instagram became the first major platforms to admit they algorithmically reduced user engagement with "borderline content", which in Meta CEO Mark Zuckerberg's words included "sensationalist and provocative" content.

YouTube, Twitter, LinkedIn and TikTok have since announced similar strategies to deal with sensitive content.

In one survey of 1,006 social media users, 9.2% reported they had been shadowbanned. Of these, 8.1% were on Facebook, 4.1% on Twitter, 3.8% on Instagram, 3.2% on TikTok, 1.3% on Discord, 1% on Tumblr and less than 1% on YouTube, Twitch, Reddit, NextDoor, Pinterest, Snapchat and LinkedIn.

Further evidence for shadowbanning comes from surveys, interviews, internal whistle-blowers, information leaks, investigative journalism and empirical analyses by researchers.

Experts think shadowbanning by platforms likely increased in response to criticism of big tech's inadequate handling of misinformation. Over time, moderation has become an increasingly politicised issue, and shadowbanning offers an easy way out.

The goal is to mitigate content that's "lawful but awful". This content trades under different names across platforms, whether it's dubbed "borderline", "sensitive", "harmful", "undesirable" or "objectionable".

Through shadowbanning, platforms can dodge accountability and avoid outcries over censorship. At the same time, they still benefit financially from shadowbanned content that's perpetually sought out.

Recent studies have found between 3% and 6.2% of sampled Twitter accounts had been shadowbanned at least once.

The research identified specific characteristics that increased the likelihood of posts or accounts being shadowbanned.

On Twitter, having a verified account (a blue checkmark) reduced the chances of being shadowbanned.

Of particular concern is evidence that shadowbanning disproportionately targets people from marginalised groups. In 2020, TikTok had to apologise for marginalising the Black community through its Black Lives Matter filter. In 2021, TikTok users reported that using the word "Black" in their bios led to their content being flagged as inappropriate. And in February 2022, keywords related to the LGBTQ+ movement were found to be shadowbanned.

Overall, Black, LGBTQ+ and Republican users report more frequent and harsher content moderation across Facebook, Twitter, Instagram and TikTok.

Detecting shadowbanning is difficult. However, there are some ways you can try to figure out if it has happened to you:

rank the performance of the content in question against your normal engagement levels; if a certain post has greatly under-performed for no obvious reason, it may have been shadowbanned (a rough way to automate this check is sketched after this list)

ask others to use their accounts to search for your content, but keep in mind that if they're a friend or follower, they may still be able to see your shadowbanned content, whereas other users may not

benchmark your content's reach against content from others with comparable engagement; for instance, a Black content creator can compare their TikTok views to those of a white creator with a similar following

refer to shadowban detection tools available for different platforms such as Reddit (r/CommentRemovalChecker) or Twitter (hisubway).
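The first and third checks boil down to comparing a post's engagement against a baseline. Here is a minimal Python sketch of how that comparison might look, assuming you can export per-post engagement counts from your platform's analytics. The function name and the 30% threshold are illustrative choices, not a standard, and under-performance is only a hint, not proof of a shadowban.

```python
from statistics import median

def underperformers(engagements, threshold=0.3):
    """Return indices of posts earning less than `threshold` of the
    account's median engagement: candidates for a closer look."""
    baseline = median(engagements)
    if baseline == 0:
        return []  # no meaningful baseline to compare against
    return [i for i, e in enumerate(engagements) if e < threshold * baseline]

# Example: the fourth post earned a fraction of the usual engagement.
views = [420, 510, 380, 45, 460]
print(underperformers(views))  # [3]
```

The same idea extends to the benchmarking check, with the baseline computed from a comparable creator's posts instead of your own history.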

Read more: Deplatforming online extremists reduces their followers but there's a price

Shadowbans last for varying amounts of time, depending on the demoted content and the platform. On TikTok, they're said to last about two weeks. If your account or content is shadowbanned, there aren't many options to immediately reverse it.

But researchers have found some strategies that can reduce the chance of it happening. One is to self-censor: for instance, users may avoid ethnic identification labels such as "AsianWomen".

Users can also experiment with external tools that estimate the likelihood of content being flagged, and then manipulate the content so it's less likely to be picked up by the algorithms. If certain terms are likely to be flagged, they'll use phonetically similar alternatives, like "S-E-G-G-S" instead of "sex".
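To make the tactic concrete, here is a minimal Python sketch of this kind of substitution. The word list is hypothetical (real users assemble theirs by trial and error), and the whole-word matching is deliberately naive.

```python
# Hypothetical substitutions of the kind described above.
SUBSTITUTIONS = {
    "sex": "seggs",     # the phonetic stand-in mentioned in the text
    "dead": "unalive",  # another commonly reported example
}

def algospeak(text: str) -> str:
    """Swap each flagged word for its stand-in (naive whole-word match)."""
    return " ".join(SUBSTITUTIONS.get(word.lower(), word) for word in text.split())

print(algospeak("my new post about sex education"))
# -> "my new post about seggs education"
```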

Shadowbanning impairs the free exchange of ideas and excludes minorities. It can be exploited by trolls falsely flagging content. It can cause financial harm to users trying to monetise content. It can even trigger emotional distress through isolation.

As a first step, we need to demand transparency from platforms on their shadowbanning policies and enforcement. This practice has potentially severe ramifications for individuals and society. To fix it, we'll need to scrutinise it with the thoroughness it deserves.
