What can users do about shadowbanning? – The Conversation

Tech platforms use recommender algorithms to control society's key resource: attention. With these algorithms, they can quietly demote or hide certain content instead of just blocking or deleting it. This opaque practice is called shadowbanning.

While platforms often deny they engage in shadowbanning, there's plenty of evidence it's well and truly present. And it's a problematic form of content moderation that desperately needs oversight.

Simply put, shadowbanning is when a platform reduces the visibility of content without alerting the user. The content may still be accessible, but with conditions on how it circulates.

It may no longer appear as a recommendation, in a search result, in a news feed, or in other users' content queues. One example would be burying a comment underneath many others.

The term "shadowbanning" first appeared in 2001, when it referred to making posts invisible to everyone except the poster in an online forum. Today's version (where content is demoted through algorithms) is much more nuanced.

Shadowbans are distinct from other moderation approaches in a number of ways.

Platforms such as Instagram, Facebook and Twitter generally deny performing shadowbans, but typically do so by referring to the original 2001 understanding of it.

When shadowbanning has been reported, platforms have explained it away by citing technical glitches, users' failure to create engaging content, or chance outcomes of black-box algorithms.

That said, most platforms will admit to "visibility reduction" or "demotion" of content. And that's still shadowbanning as the term is now used.

In 2018, Facebook and Instagram became the first major platforms to admit they algorithmically reduced user engagement with "borderline" content, which in Meta CEO Mark Zuckerberg's words included "sensationalist and provocative" content.

YouTube, Twitter, LinkedIn and TikTok have since announced similar strategies to deal with sensitive content.

In one survey of 1,006 social media users, 9.2% reported they had been shadowbanned. Of these, 8.1% were on Facebook, 4.1% on Twitter, 3.8% on Instagram, 3.2% on TikTok, 1.3% on Discord, 1% on Tumblr and less than 1% on YouTube, Twitch, Reddit, NextDoor, Pinterest, Snapchat and LinkedIn.

Further evidence for shadowbanning comes from surveys, interviews, internal whistle-blowers, information leaks, investigative journalism and empirical analyses by researchers.

Experts think shadowbanning by platforms likely increased in response to criticism of big tech's inadequate handling of misinformation. Over time, moderation has become an increasingly politicised issue, and shadowbanning offers an easy way out.

The goal is to mitigate content that's "lawful but awful". This content trades under different names across platforms, whether it's dubbed "borderline", "sensitive", "harmful", "undesirable" or "objectionable".

Through shadowbanning, platforms can dodge accountability and avoid outcries over censorship. At the same time, they still benefit financially from shadowbanned content that's perpetually sought out.

Recent studies have found between 3% and 6.2% of sampled Twitter accounts had been shadowbanned at least once.

The research identified specific characteristics that increased the likelihood of posts or accounts being shadowbanned.

On Twitter, having a verified account (a blue checkmark) reduced the chances of being shadowbanned.

Of particular concern is evidence that shadowbanning disproportionately targets people in marginalised groups. In 2020, TikTok had to apologise for marginalising the Black community through its Black Lives Matter filter. In 2021, TikTok users reported that using the word "Black" in their bio page would lead to their content being flagged as inappropriate. And in February 2022, keywords related to the LGBTQ+ movement were found to be shadowbanned.

Overall, Black, LGBTQ+ and Republican users report more frequent and harsher content moderation across Facebook, Twitter, Instagram and TikTok.

Detecting shadowbanning is difficult. However, there are some ways you can try to figure out if it has happened to you:

- Rank the performance of the content in question against your normal engagement levels; if a certain post has greatly under-performed for no obvious reason, it may have been shadowbanned.

- Ask others to use their accounts to search for your content. Keep in mind that if they're a friend or follower, they may still be able to see your shadowbanned content, whereas other users may not.

- Benchmark your content's reach against content from others who have comparable engagement. For instance, a Black content creator can compare their TikTok views to those of a white creator with a similar following.

- Refer to shadowban detection tools available for different platforms, such as Reddit (r/CommentRemovalChecker) or Twitter (hisubway).
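The first of these checks, comparing a post's performance against your normal engagement levels, can be sketched in code. This is a minimal illustration, assuming you have collected engagement counts (likes, views, etc.) for your own past posts; the function name, the z-score approach and the threshold of two standard deviations are all illustrative choices, not anything a platform publishes.

```python
from statistics import mean, stdev

def flag_underperformers(history, recent, threshold=2.0):
    """Flag posts whose engagement falls far below the account's baseline.

    history: engagement counts for a sample of past posts
    recent: {post_id: engagement} for the posts to check
    threshold: standard deviations below the mean treated as anomalous
    """
    baseline = mean(history)
    spread = stdev(history)
    flagged = []
    for post_id, engagement in recent.items():
        z_score = (engagement - baseline) / spread
        if z_score < -threshold:
            flagged.append(post_id)
    return flagged

# Example: an account that normally gets ~1,000 interactions per post.
past = [980, 1100, 950, 1020, 1050, 990, 1010, 970]
new_posts = {"a": 1005, "b": 120}  # post "b" collapsed for no obvious reason
print(flag_underperformers(past, new_posts))  # ["b"]
```

A sharp, unexplained drop like post "b" above is only a signal worth investigating, not proof; engagement naturally varies, which is why the other checks (searching from another account, benchmarking against comparable creators) matter too.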


Shadowbans last for varying amounts of time, depending on the demoted content and platform. On TikTok, they're said to last about two weeks. If your account or content is shadowbanned, there aren't many options to immediately reverse this.

But some strategies can help reduce the chance of it happening, as researchers have found. One is to self-censor. For instance, users may avoid ethnic identification labels such as "AsianWomen".

Users can also experiment with external tools that estimate the likelihood of content being flagged, and then manipulate the content so it's less likely to be picked up by algorithms. If certain terms are likely to be flagged, they'll use phonetically similar alternatives, like "S-E-G-G-S" instead of "sex".
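The substitution tactic described above (sometimes called "algospeak") amounts to a simple find-and-replace over a list of terms a user believes are being demoted. The sketch below illustrates the mechanics only; the substitution map is a made-up example, since no platform publishes its flagged-term list.

```python
import re

# Illustrative map of terms users believe trigger demotion to
# phonetic stand-ins; these entries are examples, not a real list.
ALGOSPEAK = {
    "sex": "seggs",
    "dead": "unalive",
}

def rewrite(text):
    """Replace each mapped term with its stand-in, case-insensitively,
    matching whole words only so that substrings are left alone."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, ALGOSPEAK)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: ALGOSPEAK[m.group(0).lower()], text)

print(rewrite("Sex education saves lives"))  # "seggs education saves lives"
```

Note the whole-word matching: without the `\b` boundaries, a word like "sextant" would be mangled too, which is exactly the kind of over-blocking users accuse the platforms' own filters of.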

Shadowbanning impairs the free exchange of ideas and excludes minorities. It can be exploited by trolls falsely flagging content. It can cause financial harm to users trying to monetise content. It can even trigger emotional distress through isolation.

As a first step, we need to demand transparency from platforms on their shadowbanning policies and enforcement. This practice has potentially severe ramifications for individuals and society. To fix it, we'll need to scrutinise it with the thoroughness it deserves.



Those Schools Banning Access To Generative AI ChatGPT Are Not Going To Move The Needle And Are Missing The Boat, Says AI Ethics And AI Law – Forbes


Michael Jordan was 3-inches short of wearing pants: His Airness Nearly Aced Signature Par-3 at Shadow Creek But His Baggy Pants Stole the Show – The…


What is ‘shadow banning’, and why did Trump tweet about it?

Why are conservatives talking about shadow bans?

"Twitter SHADOW BANNING prominent Republicans," Donald Trump tweeted Thursday morning. "Not good. We will look into this discriminatory and illegal practice at once! Many complaints."

On Wednesday, a Vice News story reported that some senior Republican officials were not visible in automatic search results. Vice framed this as "shadow banning" without providing any evidence that it was deliberate.

Conservative outlets such as Infowars and Breitbart soon picked up the story, which they saw as validation of their longstanding suspicions.

Then, on Thursday morning, Project Veritas (the rightwing muckraker James O'Keefe's entrapment-based media enterprise) released a video claiming to show a Twitter engineer admitting to the practice. By early Thursday, conservative media outlets had published dozens of articles on the controversy.

From there, the issue made a familiar journey through Fox News into Trump's brain, and then onto his Twitter account.

The idea that conservatives are being shadow banned is the latest iteration of an idea, bubbling away since the last election, that conservatives are being silenced by social media companies. Recently, conservatives have seized on changes that Twitter, in particular, has made to the way it filters users and tweets as evidence of subtle censorship.

Twitter did in fact make changes to the way it algorithmically ranks users, based on their behaviour. Among other effects, this will de-prioritise abusive users in shared spaces like hashtags, search and conversations. This means that badly behaved users will be less visible on the site. In launching the changes, Twitter explained that they were content-neutral.

But rightwing users have folded this into their contention that Twitter is shadow banning them. That term is internet lingo for a situation in which a social media user believes they have full access to the platform, but other users are prevented from seeing their accounts or messages.

Social media companies (and before them, forum moderators) have been frequently accused of using this technique to shut down users they see as problematic without risking the blowback that a fully-fledged ban might bring.

So is Twitter actually shadow banning conservatives?

No, at least not based on what Vice purported to show.

Twitter's recent changes are, according to the company, an effort to crack down on bots and bad behaviour, and to encourage what the company's product lead Kayvon Beykpour calls "healthy public conversation". Twitter says this process is mostly automated, employing behavioural signals and machine learning, and the company also says it is based on users' actions, not ideologies.

"We do not shadow ban," a Twitter spokesperson flatly told the Guardian. "Our behavioural ranking doesn't make judgments based on political views or the substance of tweets."

Beykpour said the problems that Vice wrote about were the result of a glitch in predictive search results that has since been corrected. He reaffirmed his intention to create a healthier Twitter.

O'Keefe's video, meanwhile, offers no context for a former Twitter engineer's quite general discussion of the concept of shadow banning. O'Keefe is well known for misleading stunt journalism, and this morning Twitter told Fox News that O'Keefe's video was "deceptive and underhanded".

Conservatives have often complained about the alleged liberal bias of tech companies, but it's not clear whether, or how, social media users of other ideological stripes have been affected by Twitter's changes. Conservatives' claims of anti-conservative bias may simply be a case of a false positive. In addition, that stance doesn't account for the possibility that some conservative accounts may have been legitimately downranked for engaging in abusive, uncivil or trolling behaviour.

The shadow banning controversy is just the latest in a long line of accusations of bias conservatives have levelled at tech companies. Some on the right have gone so far as to launch legal action against companies for allegedly unfair treatment, and Republican members of Congress have grilled social media executives over their supposed efforts to shut down rightwing social media stars such as Diamond and Silk and Gateway Pundit.

The conspiracy theorist Alex Jones, in particular, has made his accusations that YouTube and Facebook are censoring Infowars a staple of his broadcasts. (Somehow, his predictions of a shutdown have never come to pass.)

Progressives, as well as many journalists, make the opposite case: that social media companies are overly permissive in allowing abusive and extremist voices to remain on their platforms.

A recent undercover investigation by Channel 4 in the UK revealed that Facebook not only allows extremist content to stay on its site, but appears to value that content for the traffic it brings. Twitter has faced persistent criticism for allowing far-right accounts to persist on its site, and effectively facilitating the campaigns of rightwing activists.
