{"id":55912,"date":"2023-12-20T02:36:51","date_gmt":"2023-12-20T07:36:51","guid":{"rendered":"https:\/\/euvolution.com\/open-source-convergence\/uncategorized\/what-can-users-do-about-shadowbanning-the-conversation.php"},"modified":"2023-12-20T02:36:51","modified_gmt":"2023-12-20T07:36:51","slug":"what-can-users-do-about-shadowbanning-the-conversation","status":"publish","type":"post","link":"https:\/\/euvolution.com\/open-source-convergence\/shadow-banning\/what-can-users-do-about-shadowbanning-the-conversation.php","title":{"rendered":"What can users do about shadowbanning? &#8211; The Conversation"},"content":{"rendered":"<p><p>Tech platforms use recommender algorithms to control society's key resource: attention. With these algorithms they can quietly demote or hide certain content instead of just blocking or deleting it. This opaque practice is called shadowbanning.<\/p>\n<p>While platforms will often deny they engage in shadowbanning, there's plenty of evidence it's well and truly present. And it's a problematic form of content moderation that desperately needs oversight.<\/p>\n<p>Simply put, shadowbanning is when a platform reduces the visibility of content without alerting the user. The content may still be accessible, but with conditions on how it circulates.<\/p>\n<p>It may no longer appear as a recommendation, in a search result, in a news feed, or in other users' content queues. One example would be burying a comment underneath many others.<\/p>\n<p>The term shadowbanning first appeared in 2001, when it referred to making posts invisible to everyone except the poster in an online forum. Today's version of it (where content is demoted through algorithms) is much more nuanced.<\/p>\n<p>Shadowbans are distinct from other moderation approaches in a number of ways.
<\/p>\n<p>Platforms such as Instagram, Facebook and Twitter generally deny performing shadowbans, but typically do so by referring to the original 2001 understanding of it.<\/p>\n<p>When shadowbanning has been reported, platforms have explained this away by citing technical glitches, users' failure to create engaging content, or as a matter of chance through black-box algorithms.<\/p>\n<p>That said, most platforms will admit to \"visibility reduction\" or \"demotion\" of content. And that's still shadowbanning as the term is now used.<\/p>\n<p>In 2018, Facebook and Instagram became the first major platforms to admit they algorithmically reduced user engagement with \"borderline\" content, which in Meta CEO Mark Zuckerberg's words included \"sensationalist and provocative\" content.<\/p>\n<p>YouTube, Twitter, LinkedIn and TikTok have since announced similar strategies to deal with \"sensitive\" content.<\/p>\n<p>In one survey of 1,006 social media users, 9.2% reported they had been shadowbanned. Of these, 8.1% were on Facebook, 4.1% on Twitter, 3.8% on Instagram, 3.2% on TikTok, 1.3% on Discord, 1% on Tumblr and less than 1% on YouTube, Twitch, Reddit, NextDoor, Pinterest, Snapchat and LinkedIn.<\/p>\n<p>Further evidence for shadowbanning comes from surveys, interviews, internal whistle-blowers, information leaks, investigative journalism and empirical analyses by researchers.<\/p>\n<p>Experts think shadowbanning by platforms likely increased in response to criticism of big tech's inadequate handling of misinformation. Over time moderation has become an increasingly politicised issue, and shadowbanning offers an easy way out.<\/p>\n<p>The goal is to mitigate content that's \"lawful but awful\".
This content trades under different names across platforms, whether it's dubbed \"borderline\", \"sensitive\", \"harmful\", \"undesirable\" or \"objectionable\".<\/p>\n<p>Through shadowbanning, platforms can dodge accountability and avoid outcries over censorship. At the same time, they still benefit financially from shadowbanned content that's perpetually sought out.<\/p>\n<p>Recent studies have found between 3% and 6.2% of sampled Twitter accounts had been shadowbanned at least once.<\/p>\n<p>The research identified specific characteristics that increased the likelihood of posts or accounts being shadowbanned.<\/p>\n<p>On Twitter, having a verified account (a blue checkmark) reduced the chances of being shadowbanned.<\/p>\n<p>Of particular concern is evidence that shadowbanning disproportionately targets people in marginalised groups. In 2020, TikTok had to apologise for marginalising the Black community through its Black Lives Matter filter. In 2021, TikTok users reported that using the word \"Black\" in their bio page would lead to their content being flagged as inappropriate. And in February 2022, keywords related to the LGBTQ+ movement were found to be shadowbanned.<\/p>\n<p>Overall, Black, LGBTQ+ and Republican users report more frequent and harsher content moderation across Facebook, Twitter, Instagram and TikTok.<\/p>\n<p>Detecting shadowbanning is difficult.
However, there are some ways you can try to figure out if it has happened to you:<\/p>\n<ul>\n<li>rank the performance of the content in question against your normal engagement levels: if a certain post has greatly under-performed for no obvious reason, it may have been shadowbanned<\/li>\n<li>ask others to use their accounts to search for your content, but keep in mind that if they're a friend or follower they may still be able to see your shadowbanned content, whereas other users may not<\/li>\n<li>benchmark your content's reach against content from others who have comparable engagement: for instance, a Black content creator can compare their TikTok views to those of a white creator with a similar following<\/li>\n<li>refer to shadowban detection tools available for different platforms, such as Reddit (r\/CommentRemovalChecker) or Twitter (hisubway).<\/li>\n<\/ul>\n<p>Shadowbans last for varying amounts of time depending on the demoted content and platform. On TikTok, they're said to last about two weeks. If your account or content is shadowbanned, there aren't many options to immediately reverse this.<\/p>\n<p>But some strategies can help reduce the chance of it happening, as researchers have found. One is to self-censor. For instance, users may avoid ethnic identification labels such as \"AsianWomen\".<\/p>\n<p>Users can also experiment with external tools that estimate the likelihood of content being flagged, and then manipulate the content so it's less likely to be picked up by algorithms. If certain terms are likely to be flagged, they'll use phonetically similar alternatives, like \"S-E-G-G-S\" instead of \"sex\".
<\/p>\n<p>Shadowbanning impairs the free exchange of ideas and excludes minorities. It can be exploited by trolls falsely flagging content. It can cause financial harm to users trying to monetise content. It can even trigger emotional distress through isolation.<\/p>\n<p>As a first step, we need to demand transparency from platforms on their shadowbanning policies and enforcement. This practice has potentially severe ramifications for individuals and society. To fix it, we'll need to scrutinise it with the thoroughness it deserves.<\/p>\n<\/p>\n<p>Read the original here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/theconversation.com\/what-is-shadowbanning-how-do-i-know-if-it-has-happened-to-me-and-what-can-i-do-about-it-192735\" title=\"What can users do about shadowbanning? - The Conversation\">What can users do about shadowbanning? - The Conversation<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Tech platforms use recommender algorithms to control society's key resource: attention. With these algorithms they can quietly demote or hide certain content instead of just blocking or deleting it.
This opaque practice is called shadowbanning.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[60424],"tags":[],"class_list":["post-55912","post","type-post","status-publish","format-standard","hentry","category-shadow-banning"],"_links":{"self":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts\/55912"}],"collection":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/comments?post=55912"}],"version-history":[{"count":0,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/posts\/55912\/revisions"}],"wp:attachment":[{"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/media?parent=55912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/categories?post=55912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/euvolution.com\/open-source-convergence\/wp-json\/wp\/v2\/tags?post=55912"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}