Shadow banning: What it is -- and what it isn't

Twitter said it doesn't shadow ban its users.

There's a shadow of a doubt.

On Thursday morning, President Donald Trump called out Twitter, accusing the social network of shadow banning prominent Republicans. The reaction came after Vice News reported that Twitter wasn't autopopulating Republicans in its drop-down search box.

But that's not shadow banning -- it's a bug, according to Twitter.

"We do not shadow ban," Twitter said in a blog post Thursday. "You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile)."

"We are aware that some accounts are not automatically populating in our search box, and [we're] shipping a change to address this," a Twitter spokesperson said earlier in the day. "The profiles, tweets and discussions about these accounts do appear when you search for them. To be clear, our behavioral ranking doesn't make judgments based on political views or the substance of tweets."

Thursday's presidential backlash against Twitter is the latest in a series of accusations lawmakers have made regarding social networks and censorship. The House Judiciary Committee has had two hearings on the subject, in July and April, with Republican lawmakers asking representatives from Twitter, Google and Facebook if the platforms were purposely silencing conservative voices.

The subject has come up before. In January, during a Senate hearing, Sen. Ted Cruz, a Republican from Texas, asked Twitter's policy director, Carlos Monje, whether the social network practices shadow banning. Monje said no, and Twitter has said at multiple hearings on Capitol Hill that it doesn't shadow ban.

Most recently, during a hearing on July 18, Twitter's global lead for public policy strategy, Nick Pickles, told lawmakers, "Some critics have described the sum of all of this work as a banning of conservative voices. Let me make clear to the committee today that these claims are unfounded and false."


Shadow banning isn't a new concept; it's frequently used in forums and on other social networks as an alternative to banning someone outright.

Instead of kicking someone off, shadow bans make a person's post visible only to the user who created it. The idea is to protect others from harmful content while eventually prompting the shadow-banned user to voluntarily leave the forum due to a lack of engagement.

If you outright ban a user, the thinking goes, the person is aware of it and will likely just set up another account and continue the offending behavior.
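To make the idea concrete, here is a minimal sketch of that visibility rule in Python. Everything in it, including the Forum class, the shadow_banned set and the feed_for method, is a hypothetical illustration of the concept, not the implementation any particular site uses.

# Minimal sketch of a shadow ban visibility filter. All names here are
# hypothetical; this illustrates the concept, not any site's real code.

from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str


@dataclass
class Forum:
    shadow_banned: set = field(default_factory=set)
    posts: list = field(default_factory=list)

    def submit(self, post: Post) -> None:
        # A shadow-banned user's post is accepted normally, so the author
        # sees no error and no sign that anything is wrong.
        self.posts.append(post)

    def feed_for(self, viewer: str) -> list:
        # Everyone else's feed silently drops posts from shadow-banned
        # authors; only the author still sees their own posts.
        return [
            p for p in self.posts
            if p.author not in self.shadow_banned or p.author == viewer
        ]


if __name__ == "__main__":
    forum = Forum(shadow_banned={"troll42"})
    forum.submit(Post("alice", "Hello, world"))
    forum.submit(Post("troll42", "Spam spam spam"))

    print([p.text for p in forum.feed_for("troll42")])  # the troll sees both posts
    print([p.text for p in forum.feed_for("alice")])    # alice sees only her own

The point of the sketch is that, from the banned user's side, nothing looks different; from everyone else's side, the posts simply never appear.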

Shadow banning was Reddit's only form of banning for years; the site relied on it until November 2015.

The practice is similar to what Facebook does with misinformation. The social network told reporters on July 11 that instead of completely banning pages behind hoaxes and misinformation, it would rather demote their posts so fewer people see them.

Shadow banning is typically used to stop bots and trolls, said Zack Allen, director of threat operations at ZeroFox, a company that focuses on social media security.

"This can be effective in combating bots where 'bot herders' who maintain these accounts don't necessarily know whether or not their bots are actually being seen by other people," he said.

So did Twitter actually shadow ban these Republicans? No.

You can still see posts from the Republicans named in the Vice News article, including Republican Party Chairwoman Ronna McDaniel and Rep. Matt Gaetz of Florida.

The White House, McDaniel and Gaetz didn't respond to requests for comment.

Your Twitter account may not autopopulate in searches, but that doesn't mean you've been shadow banned.

Kevin Lee, a trust and safety architect at Sift Science, an online fraud and abuse detection company, said Thursday's misunderstanding highlights how lawmakers need to do a better job of understanding technology.

"Our leaders need to identify how technology works to make informed decisions (or public-facing commentary), especially when their work can have such an impact on how such technologies are used and regulated," Lee said.

Twitter says its moderators aren't taking action against accounts to make their tweets visible only to the account holders themselves.

The search results bug involves an error with Twitter's algorithm, the social network's head of product, Kayvon Beykpour, said in a series of tweets Wednesday.

Twitter's behavior signals caused the mistakes with autosuggestions, Beykpour explained.

"Our usage of the behavior signals within search was causing this to happen & making search results seem inaccurate," he said in a tweet Wednesday. "We're making a change today that will improve this."

Twitter's product manager for health, David Gasca, talked to CNET about these signals earlier in July. They could include how often an account is muted, blocked, reported, retweeted, liked and replied to. Twitter's algorithm takes interactions into consideration, and its artificial intelligence classifies them as either positive or negative experiences.

As part of Twitter's push to create healthy conversations, its AI will favor accounts that have had more positive experiences.
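As a rough illustration of how such signals could feed into a ranking, here is a short Python sketch. The signal names, the weights and the health_score function are all assumptions made up for this example; Twitter hasn't published the details of its model.

# Hypothetical sketch of ranking accounts on behavior signals, as described
# above. The weights and threshold are invented for illustration only.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    mutes: int
    blocks: int
    reports: int
    retweets: int
    likes: int
    replies: int


# Hypothetical weights: interactions classified as "negative" pull the score
# down, while "positive" ones push it up.
NEGATIVE_WEIGHTS = {"mutes": -1.0, "blocks": -2.0, "reports": -3.0}
POSITIVE_WEIGHTS = {"retweets": 1.0, "likes": 0.5, "replies": 0.5}


def health_score(signals: AccountSignals) -> float:
    # Combine the interaction counts into a single ranking score.
    score = 0.0
    for name, weight in {**NEGATIVE_WEIGHTS, **POSITIVE_WEIGHTS}.items():
        score += weight * getattr(signals, name)
    return score


def rank_accounts(accounts: dict) -> list:
    # Accounts with more positive interactions surface first.
    return sorted(accounts, key=lambda name: health_score(accounts[name]), reverse=True)


if __name__ == "__main__":
    accounts = {
        "friendly": AccountSignals(mutes=1, blocks=0, reports=0, retweets=40, likes=200, replies=60),
        "abrasive": AccountSignals(mutes=50, blocks=30, reports=20, retweets=5, likes=10, replies=80),
    }
    print(rank_accounts(accounts))  # ['friendly', 'abrasive']

In this toy version, an account that is frequently muted, blocked or reported scores lower and surfaces later than one that mostly attracts retweets, likes and replies, which is the general behavior Gasca described.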

First published July 26, 9:07 a.m. PT. Update, 9:42 a.m.: Adds remarks from a security specialist. Update, 12:57 p.m.: Adds remark from social media expert. Update, 8:20 p.m. PT: Adds information from company blog post.

