Does ‘deplatforming’ work to curb hate speech and calls for violence on social media?

In the wake of the assault on the U.S. Capitol on Jan. 6, Twitter permanently suspended Donald Trump's personal account, and Google, Apple and Amazon shunned Parler, at least temporarily shutting down the social media platform favored by the far right.

Dubbed "deplatforming," these actions restrict the ability of individuals and communities to communicate with each other and the public. Deplatforming raises ethical and legal questions, but foremost is the question of whether it's an effective strategy to reduce hate speech and calls for violence on social media.

The Conversation U.S. asked three experts in online communications whether deplatforming works and what happens when technology companies attempt it.

Jeremy Blackburn, assistant professor of computer science, Binghamton University

The question of how effective deplatforming is can be looked at from two different angles: Does it work from a technical standpoint, and does it have an effect on worrisome communities themselves?

Does deplatforming work from a technical perspective?

Gab was the first major platform subject to deplatforming efforts, first with removal from app stores and, after the Tree of Life shooting, the withdrawal of cloud infrastructure providers, domain name providers and other Web-related services. Before the shooting, my colleagues and I showed in a study that Gab was an alt-right echo chamber with worrisome trends of hateful content. Although Gab was deplatformed, it managed to survive by shifting to decentralized technologies and has shown a degree of innovation, for example, developing the moderation-circumventing Dissenter browser.

From a technical perspective, deplatforming just makes things a bit harder. Amazon's cloud services make it easy to manage computing infrastructure but are ultimately built on open source technologies available to anyone, so a deplatformed company, or people sympathetic to it, could build its own hosting infrastructure. The research community has also built censorship-resistant tools that, if all else fails, harmful online communities can use to persist.

Does deplatforming have an effect on worrisome communities themselves?

Whether deplatforming has a social effect is a nuanced question just now beginning to be addressed by the research community. There is evidence that a platform banning communities and content, for example, QAnon or certain politicians, can have a positive effect: growth in new users slows over time, and less content is produced overall. On the other hand, migrations do happen, often in response to real-world events; a deplatformed personality who moves to a new platform can trigger an influx of new users.

Another consequence of deplatforming is that users in the migrated community can show signs of becoming more radicalized over time. While Reddit or Twitter might improve with the loss of problematic users, deplatforming can have unintended consequences that accelerate the very behavior that led to deplatforming in the first place.

Ultimately, it's unlikely that deplatforming, while certainly easy to implement and effective to some extent, will be a long-term solution in and of itself. Moving forward, effective approaches will need to take into account the complicated technological and social consequences of addressing the root problem of extremist and violent Web communities.

Ugochukwu Etudo, assistant professor of operations and information management, University of Connecticut

Does the deplatforming of prominent figures and movement leaders who command large followings online work? That depends on the criteria for the success of the policy intervention. If success means punishing the target of the deplatforming so that they pay some price, then without a doubt it works. For example, right-wing provocateur Milo Yiannopoulos was banned from Twitter in 2016 and Facebook in 2019, and subsequently complained of financial hardship.

If success means dampening the odds of undesirable social outcomes and unrest, then yes in the short term, but the long term is not at all certain. Deplatforming serves as a shock, a disorienting perturbation, to the network of people being influenced by the target of the deplatforming, and this disorientation can weaken the movement, at least initially.

However, there is a risk that deplatforming can delegitimize authoritative sources of information in the eyes of a movement's followers, and remaining adherents can become even more ardent. Movement leaders can reframe deplatforming as censorship and further proof of a mainstream bias.

There is reason to be concerned that driving people who engage in harmful online behavior into the shadows further entrenches them in online environments that affirm their biases. Far-right groups and personalities have established a considerable presence on privacy-focused online platforms, including the messaging platform Telegram. This migration is concerning because researchers have known for some time that complete online anonymity is associated with increased harmful behavior online.

In deplatforming policymaking, among other considerations, there should be an emphasis on justice, harm reduction and rehabilitation. Policy objectives should be defined transparently and with reasonable expectations in order to avoid some of these negative unintended consequences.

Robert Gehl, associate professor of communication and media studies, Louisiana Tech University

Deplatforming not only works; I believe it needs to be built into the system. Social media should have mechanisms by which racist, fascist, misogynist or transphobic speakers are removed, misinformation is removed, and there is no way to pay to have your messages amplified. And the decision to deplatform someone should be made as democratically as possible, rather than in some closed boardroom or opaque content moderation committee like Facebook's "Supreme Court."

In other words, the answer is alternative social media like Mastodon. As a federated system, Mastodon is specifically designed to give users and administrators the ability to mute, block or even remove not just misbehaving users but entire parts of the network.
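
To make the federation mechanics concrete, here is a minimal conceptual sketch in Python of how domain-level blocking works on a federated network. It illustrates the idea only and is not Mastodon's actual code or API; the Post class, blocklist and handler names are hypothetical.

```python
# Conceptual sketch of domain-level moderation in a federated network.
# Illustrative only: not Mastodon's actual implementation; the names
# below (Post, BLOCKED_DOMAINS, accept_incoming) are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author: str  # e.g. "alice@scholar.social"
    body: str

# Each node's administrator maintains a local blocklist of remote domains.
BLOCKED_DOMAINS = {"gab.example", "spam.example"}

def accept_incoming(post: Post) -> bool:
    """Reject federated posts whose author's home server is blocked."""
    domain = post.author.split("@")[-1]
    return domain not in BLOCKED_DOMAINS

# A post from a blocked server never reaches local users' timelines,
# while the rest of the federation is unaffected.
assert not accept_incoming(Post("troll@gab.example", "..."))
assert accept_incoming(Post("alice@scholar.social", "hello"))
```

The key design point is that the blocklist is per-node: each community decides for itself which parts of the network to cut off, with no central authority required.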

For example, despite fears that the alt-right network Gab would somehow take over the Mastodon federation, Mastodon administrators quickly marginalized Gab. The same thing is happening as I write with new racist and misogynistic networks forming to fill the potential void left by Parler. And Mastodon nodes have also prevented spam and advertising from spreading across the network.

Moreover, decisions to block parts of the network aren't made in secret. They're made by local administrators, who announce their decisions publicly and are answerable to the members of their node in the network. I'm on scholar.social, an academic-oriented Mastodon node, and if I don't like a decision the local administrator makes, I can contact the administrator directly and discuss it. There are other distributed social media systems as well, including Diaspora and Twister.

The danger of mainstream, corporate social media is that it was built to do exactly the opposite of what alternatives like Mastodon do: grow at all costs, including the cost of harming democratic deliberation. It's not just cute cats that draw attention but conspiracy theories, misinformation and the stoking of bigotry. Corporate social media tolerates these things as long as they're profitable, and, it turns out, that tolerance has lasted far too long.


Weighing the Value and Risks of Deplatforming – GNET

Last month, the video platform TikTok banned far-right extremists Britain First and Tommy Robinson, the latest action taken by a tech platform to address hateful and extreme content by sanctioning abusers. Platforms' embrace of deplatforming as the default tool for repeated or severe violations of terms of service shows progress in prioritising the issue of online extremism, but as a tool it is a blunt instrument that may not be equally valuable in all circumstances. Not all platforms can or will address all content equally efficiently, and whether they should requires an assessment of unintended consequences. Whether platforms correctly balance those factors, or simply reach for deplatforming as the most straightforward tool at their disposal, remains to be seen.

Addressing harmful content that could lead to hate, extremism, and terrorism is critical for tech platforms, sometimes for legal compliance and other times simply because it is imperative to protect their users and our communities. For a sense of scale, recent transparency reports show that between January and June of 2019, Twitter took action against almost 600,000 accounts for violating policies related to hate, and Facebook took action against 17.8 million pieces of content based on terrorist propaganda concerns and 15.5 million related to hate speech between January and September of 2019. The Global Internet Forum to Counter Terrorism asserts that its joint hashing database, the shared mechanism for large tech companies such as Facebook, Microsoft, Twitter, YouTube, and others to post or find terrorism-related content, has over 200,000 pieces of unique content. When these actions manifest as banning a user, the result can be severe: an oft-cited example of the success of deplatforming is that of far-right provocateur Milo Yiannopoulos, who may be as much as $2 million in debt following bans that removed his ability to benefit financially from his notoriety. Alex Jones' media outlet InfoWars had about 1.4 million daily views of its site and users before being banned from YouTube and Facebook, and 715,000 afterward, according to a New York Times analysis.
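
To illustrate the mechanism behind such a joint hashing database, here is a minimal sketch in Python. It is an assumption-laden illustration, not GIFCT's actual system: real deployments typically use perceptual hashes (such as PDQ) so that re-encoded copies of media still match, whereas this sketch uses SHA-256 for simplicity, and all names are hypothetical.

```python
# Conceptual sketch of a shared hash database for flagged content.
# Illustrative only: real systems such as GIFCT's use perceptual
# hashing so near-duplicates match; SHA-256 here is a simplification.

import hashlib

# Hashes of known terrorist content, shared across platforms.
# Only fingerprints are exchanged, never the content itself.
shared_hash_db: set[str] = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register(content: bytes) -> None:
    """A platform that removes content contributes its hash to the pool."""
    shared_hash_db.add(fingerprint(content))

def is_known_bad(content: bytes) -> bool:
    """Any participating platform can check new uploads against the pool."""
    return fingerprint(content) in shared_hash_db

register(b"<bytes of a banned propaganda video>")
assert is_known_bad(b"<bytes of a banned propaganda video>")
```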

On the other hand, the Yiannopoulos and Jones results raise questions about whether platforms carry out bans efficiently. Jones, for example, launched "Infowars is Back" on Facebook an hour after it banned Infowars. Proxy channels emerged on YouTube, sharing Jones' videos with over 1.6 million viewers, including 550,000 views in a thirty-day period, and 10,000 subscribers. Lesser-known antisemitic and white supremacist channels have managed to circumvent attempted bans. If the strategy to address online extremism must be whack-a-mole, there is considerable room to improve efficiency in finding users and content to ban, implementing bans, and finding and removing proxies.

Beyond efficiency is effectiveness: banning an individual or group may feel cathartic, but whether it achieves the desired result of degrading and helping defeat extremists and their movements is a far more central question. The verdict on that is, unfortunately, unclear.

Researchers at Georgia Institute of Technology looked at bans on Reddit, concluding that users who experienced sanctions from Reddit for hate speech left Reddit entirely or reduced their hate speech on Reddit by 80-90 percent, while many also migrated to new Reddit threads. Audrey Alexander's study for the George Washington University Program on Extremism shows that mass bans of Islamic State (IS) followers on Twitter "deteriorates [IS] followers' ability to gain traction on the platform, likely hindering their reach to potential recruits" and acknowledges that the decay on Twitter corresponded with IS's strategic shift to Telegram as its platform of choice.

Strategic success for mass bans has often been interpreted (1) as digital decay on the individual platform in question, rather than across the integrated online ecosystem, and (2) in terms of the volume of users and their hateful content rather than the escalation or de-escalation of extremism.

Telegram, for example, became the platform of choice for jihadists as mainstream platforms began to use bans, removing IS sympathizers' ability to recruit followers from a mainstream audience but driving their online communications underground to a less-visible and less-regulated platform. Now it is also becoming a destination for the global white supremacist movement.

Similar platform migration has led to extremist use of VK, the Russian Facebook equivalent; Gab, far-right extremists' Twitter equivalent; and lesser-known sites that users could move to if those platforms began regulating, which, as ADL analysis suggests, could be WrongThink, minds.com, toko.tech, MeWe, or freezoxee. The evolution of the chans is illustrative: bringing attention to 4chan or 8chan may have led to particular actions to limit extremist content on them, but it also led 8chan to go dark and return several times, and gave rise to Endchan, 7chan, and myriad other copycat sites that aim to circumvent attempts to regulate them.

According to an analysis by ADL and the Network Contagion Research Institute, months in which a Twitter mass ban took place saw more than double the percentage of new members on Gab compared with a typical month. The frequency with which those users referenced the ban, and the corresponding spiteful references to censorship (e.g., "fascistbook" and "goolag"), suggests that the new users joined Gab because of mass bans on another platform, and that being banned fueled their anger: not self-reflective anger over the behavior that got them banned, but anger toward the authorities that banned them. Another study, looking at Facebook and VK, reached similar conclusions. This analysis suggests that the grievances that fuel far-right extremism may be heightened in users who are banned from mainstream platforms, and that those grievances are then expressed in fora with less oversight and a higher proportion of like-minded members. In other words, there is a distinct possibility that deplatforming trades high exposure to a broad population for more extreme exposure to other extremists. And no amount of whack-a-mole will prevent extremists from finding the next forum on which they can post their hate and recruit new followers, with authorities potentially unaware of the platform migration.

Removing users and content also hinders investigation and research into the threat. Imagine an individual who poses a security concern and whose primary means of being discovered by law enforcement is online behavior, for example, Conor Climo, whose online conversations and support for the Feuerkrieg Division led law enforcement to search his home, where they found bomb-making materials and evidence of violent plots. If such a suspect were removed from all platforms that could be accessed by law enforcement and informants, plots might continue, but out of sight. Further, researchers looking into such behavior to inform policymakers and the public lose visibility into concerning behavior once it is removed, which could distort public opinion and decision-making by painting an inaccurate picture of threats.

Deplatforming may limit the breadth of hate and extremism on mainstream platforms but increase extremists' motivation to plot in secret. On the other hand, allowing hate unfettered access to the world's most powerful megaphones to recruit more to the cause is similarly risky. Neither, of course, is an acceptable outcome, which is why comprehensive approaches, and comprehensive research into what works, are needed. Whether that means providing law enforcement more opportunities to track extremism, giving tech platforms better ways to enforce their terms of service, or promoting good speech to overwhelm hate and extremism online, comprehensive, integrated approaches are necessary.


Twitter Found Trump Didn’t Violate Policy, But Banned Him Anyway

A new batch of the Twitter Files released by independent journalist Bari Weiss reveals that Twitter employees acknowledged on internal channels that former President Donald Trump's tweets shortly after the Jan. 6, 2021 Capitol riot did not violate Twitter policies, but the Big Tech company cooked up a reason to ban him anyway, with the corporate press running cover.

Weiss revealed in a Twitter thread on Monday that Twitter employees pushed for the removal of Trump long before they created a reason to ban him from their platform on Jan. 8, 2021. So did the media: after Trump's loss in the 2020 presidential election, propaganda press outlets began dredging up polls and other content designed to cast doubt on Trump's presence on Twitter, Facebook, and YouTube.

When media figures weren't personally calling for the suppression of Trump online, outlets such as Politico amplified Democrats who wanted nothing more than to see their top enemy banned from social media for life.

One of the most egregious and effective examples of this came when The Washington Post chose to publish a letter from hundreds of Twitter employees early in the afternoon on Jan. 8. In the letter, Twitter staff demanded that then-CEO Jack Dorsey, legal and policy executive Vijaya Gadde, and other executives permanently suspend Trump. They also called on the Big Tech company to review the role it played as Trump's megaphone in helping fuel the deadly events of January 6th.

"We appreciate stronger measures, like the interstitials recently used on his account and his Jan. 6 timeout. We do not believe these actions are sufficient," the "Tweeps" wrote.

At the time of the letter's amplification in the digital pages of WaPo, however, Twitter staff doubted that Trump's tweets on Jan. 8 would justify his permanent removal from the platform.

Even after Twitter's safety team concluded that Trump's tweets did not violate any policies, Twitter banned the sitting president by the evening of Jan. 8.

Instead of heeding the assessment that Trump's tweets did not violate Twitter's policies, the Big Tech company took a page out of its Hunter Biden laptop censorship playbook and used a vague excuse to nuke the president's personal page.

At the time, Twitter claimed that giving Trump access to the online public square posed a "risk of further incitement of violence."

"After close review of recent Tweets from the @realDonaldTrump account and the context around them, specifically how they are being received and interpreted on and off Twitter, we have permanently suspended the account due to the risk of further incitement of violence," Twitter's official statement read.

Censors at Twitter, who had pined for Trump's suspension for months, celebrated it in various internal chat rooms. Corrupt corporate media outlets returned the favor by publicly glorifying the decision.

These employees were so emboldened by their sweeping act of speech suppression that they began to brainstorm what other types of content and users they could get away with deplatforming.

To this day, corrupt corporate media continue to advocate for the downfall of anyone who threatens their allies in Big Tech. That includes publishing hit pieces about new Twitter CEO Elon Musk, who initiated the release of the Twitter Files in the hope of exposing Big Tech collusion and censorship tactics.

Jordan Boyd is a staff writer at The Federalist and co-producer of The Federalist Radio Hour. Her work has also been featured in The Daily Wire and Fox News. Jordan graduated from Baylor University where she majored in political science and minored in journalism. Follow her on Twitter @jordanboydtx.


Social media misogyny: The new way Andrew Tate brought us the same old hate

If you don't recognize the name Andrew Tate, you have (luckily) avoided one of the most significant waves of misogyny on mainstream social media in recent memory.

Tate, a pseudo right-wing influencer who espouses deep misogyny, rocketed to fame (and infamy) thanks to clever manipulation of social media algorithms, especially TikTok's. Many articles have been written about Tate's rise to fame and subsequent banning from social media.

However, most articles fail to acknowledge that the only thing new or innovative about Tate and his rhetoric was his ability to leverage platform algorithms. Tate's example highlights how proponents of misogyny are using new technology to amplify their messaging. And while his content is not new, his tactics present an important site of innovation and inquiry.

Andrew Tate got famous fast. In July 2022, Google searches for Tate exceeded those for Donald Trump and Kim Kardashian combined. Over a short period, he was transported from marginal fame to social media stardom.

Banned from Twitter in 2017 (for anti-#MeToo tweets, among other things), Tate surged into mainstream consciousness on TikTok in 2022, where videos of him have been viewed more than 12 billion times.

These videos, which ranged in content from cryptocurrency tips to overt calls for violence against women, were promoted and shared heavily by the platform's users. Arriving on the TikTok scene at the right time with the right approach worked.

Tate has said that his popularity has more to do with the appeal of his message and the desire for real masculinity than with algorithmic manipulation. But an examination of his rhetoric shows that nothing Tate says is new: not the violence, not the domination, not the body politics, not the cigars, not even the shaved head to hide the changing hairline. None of it.

Tate is the newest addition to the lineup of masculinist grifters who have existed since the rise of the men's movement. His approach works because it is modelled on previous successful masculinist grifts that sought to attract young men with promises of power. A famous example is Daryush Valizadeh (aka Roosh V), who 20 years ago pivoted from a pickup-artist persona to anti-feminist and pro-rape notoriety.

This fame also resulted in organized resistance, including one instance in which he and his supporters were challenged to a boxing match by the Newsgirls women's boxing club in Toronto.

Valizadeh was also not the first to leverage these anti-women and pro-sexual-violence sentiments. What each of these men taps into is a desire for power over others that is socialized into young men, often subconsciously, by framing boys as protectors, leaders and budding stoics through ideas like "boys don't cry."

Tate and those who came before him adopt these personas to meet a single, personal desire: power. In Tate's case, his endeavors in the 2020s have been about accessing power through money and (potential) social media influence. Tate's anti-woman and anti-feminist rhetoric taps into the latent violent misogyny that underpins most traditional patriarchal social structures.

This is a component of what philosophy scholar Kate Manne frames as male entitlement. The desire for power over others is one way that men find self-worth in neoliberal capitalist cultures; and for some, the easiest way is through power over women.

We see similar processes occurring in white supremacist and white nationalist spaces, where violence against women and the racialized other often occur in tandem. Each version of right-wing masculinity like Tate's is a re-hashing of the same appeals to supremacist power structures. They are tired and boring, but unfortunately no less engaging to a certain segment of the population.

TikTok and other social media had no interest in banning Tate until public outrage over his rhetoric became impossible to ignore. They are, after all, in the business of getting the most users to spend the most time on their platforms. Having someone like Tate get 12 billion views is good for the bottom line, even if it is bad for society.

However, deplatforming can address the actions of an individual, or at least take that person out of the spotlight. It is impossible to scrub all their content from the internet, but social media platforms can cut off individuals' revenue streams, with dramatic effects on how these people finance their lives and lies.

Does deplatforming eliminate misogynist or gender-violent rhetoric? No, it doesn't. But it does pull the rug out from under some of its loudest proponents. Tate has been banned from most mainstream platforms, but this isn't because his message has stopped resonating. It is because platforms faced enough pressure and outrage to do the right thing.


WordPress discussed deplatforming The New York Post after Hunter Biden laptop story

The content of Hunter Biden's laptop was certainly scandalous enough, but the handling, i.e., the censoring, of the story by tech platforms influenced by political preference certainly gave it a run for its money.

Even by the standards of 2020, the year of censorship, Twitter and Facebook's decision to suppress the New York Post article just before the US presidential election that resulted in the victory of Hunter's father was highly unusual.

Facebook used potential misinformation policies to artificially limit the story's spread, while Twitter went for outright censorship by banning links to the Post page, saying it was enforcing "hacked materials" rules (which did not apply).

In early December, Twitter owner Elon Musk put this controversy back in the spotlight when he announced that, to restore public trust in the social platform he recently acquired, internal documents detailing how the censorship decision was made would be released.

And now that people are once again talking about it, so is WordPress founder Matt Mullenweg, who this week revealed that WordPress owner Automattic also considered censoring the New York Post story, since the media outlet used WordPress VIP as its content management system.

Although Mullenweg framed all this in an interview with The Verge as an example of the hardships of moderation, what the revelation really shows is the infrastructural depth of online speech restrictions, which are not limited to social media but extend to deeper layers, such as the CMS platform.

In the end, Automattic managed to resist the urge to join Twitter and Facebook in, to all intents and purposes, trying to hide a legitimate and highly relevant news story from the public.

According to Mullenweg, the "to censor or not to censor" debate revolved around probing Automattic's terms of service to find out whether any of them could be interpreted to mean that America's oldest news outlet had violated the rules.

"We made a decision there to not touch it," he said, adding, "The interpretation of the policies is really where I think the art and science of it is."

In other words, major online platforms like to have rules that are not really rules, but rather statements that can be interpreted whichever way best suits those platforms in any given case.



FACT CHECK: Did Elon Musk Call Twitter Users Upset By Trump's …

An image shared on Facebook purports to show Twitter CEO Elon Musk calling users "a bunch of babies addicted to the drama" over former President Donald Trump being reinstated.

This tweet is digitally fabricated. Musk did not make this comment.

Musk has restored several Twitter accounts that were previously suspended, including those of psychologist Jordan Peterson and Trump, CBS News reported. Musk recently announced he will not be restoring radio host and conspiracy theorist Alex Jones' account on the platform, CNN reported.

The Facebook image claims to show a screenshot from Musk's Twitter account addressing reactions to Trump's account being reinstated. The alleged tweet is dated the morning of Nov. 20 and shows over 6,000 retweets.

"If you don't like Trump's reinstatement or the way I'm running Twitter or getting several daily push notifications from my account, then delete the app," the alleged tweet reads. "For as much as you're all stomping and crying, I know you won't because you're a bunch of babies addicted to the drama." (RELATED: Is PayPal Fining Users $2,500 For Misinformation?)

The image is digitally fabricated. This post appears nowhere on Musk's verified Twitter account. There is also no record of it on the deleted-tweet tracker website Politwoops by ProPublica, and no credible news reports suggest Musk lambasted Twitter users with these comments.

Musk has been quite outspoken about his disapproval of Twitter users who disagree with the way he is running the app. Several tweets show Musk calling them "judgy hall monitors," boasting about the success of the platform since his takeover and even posting memes about the criticism and misinformation.

The Twitter CEO did respond to criticism of the decision to reverse the suspension in a reply tweet to comedian Tim Young. The language featured in the tweet, however, was not comparable to the Facebook post.

"The important thing is that Twitter correct a grave mistake in banning his account, despite no violation of the law or terms of service," the tweet read. "Deplatforming a sitting President undermined public trust in Twitter for half of America."

This is not the first time false information about Musk has been shared on social media. Check Your Fact recently debunked a post claiming Musk destroyed a Russian aircraft carrier.


App Store ‘Gatekeepers’ Urged To Deplatform "Dangerous" Twitter Itself

Authored by Paul Joseph Watson via Summit News,

App store gatekeepers Apple, Google Play and Amazon could all decide to deplatform Twitter itself following new owner Elon Musk's commitment to free speech, according to a report.

Speculation over app stores potentially targeting Twitter intensified after Apple executive Phil Schiller deactivated his Twitter account for no apparent reason days after Donald Trump was restored to the platform.

Twitter users also noticed that the official Apple account itself had apparently removed all of its tweets, although it was later revealed that this was nothing new.

According to a report by Fast Company, Musk is playing a dangerous game that could spell game over for the platform he just bought because of his supposed failure to moderate, that is, to impose stringent moderation.

Musk's platforming of hateful content could get Twitter itself deplatformed, writes Clint Rainey, adding that the company could be on a collision course with app store gatekeepers.

Rainey and his ilk are mad that Musk has culled the number of moderators employed to track harmful content and enforce Twitter's rules against it.

Showcasing again how journalists now weaponize their platforms to try to chill free speech, Fast Company contacted Apple, Google and Amazon to ask them if they planned to deplatform the Twitter app itself, but none responded.

The article notes how both Parler and Truth Social were at one stage banned from app stores before they were forced to agree to more draconian moderation policies.

While it's incredibly unlikely that the Twitter app itself would ever be deplatformed, the gatekeepers could follow the example of advertisers by putting pressure on Musk to impose tighter censorship, thereby derailing the billionaire's stated goal to "free the bird" and restore true freedom of speech.

Given that Apple is also likely to take a large cut of Twitter's new $8 subscription service, a financial motive is also in place to maintain a good relationship with Musk.

Much of the hysteria seems to center on Musk allowing accounts that had been unfairly banned or suspended on the platform, such as those operated by the Babylon Bee and Jordan Peterson, to return.

Trump himself returning to the platform, despite insisting he has no plans to actually use his account again, has also instilled terror in censorious leftists who fear they are losing the power to shut down adversarial voices.
