Is There A World Beyond YouTube for Crypto? – Cryptonews


The relationship between the communications giant Google and the cryptoasset industry can be characterized as somewhat antagonistic.

Now, its video-sharing platform, YouTube, seems to be on the offensive, de-platforming some crypto-related content creators and leaving them looking for viable alternatives.

As early as 2019, reports began to emerge from content creators within the cryptosphere who claimed that they were receiving warnings from YouTube about the content in their videos. In many cases, videos were removed, forcing the content creators to appeal the decision. In more extreme cases, some content creators found their YouTube channels de-platformed.

Towards the end of 2019, these claims reached a fever pitch as the crypto community noticed a purge of crypto-related material on YouTube. Channels like BTC Sessions, ChrisDunnTV, and Crypto Tips received warnings and had numerous videos deleted.

On December 23, 2019, Chris Dunn questioned YouTube's choices, saying, "YouTube just removed most of my crypto videos citing 'harmful or dangerous content' and 'sale of regulated goods'... it's been 10 years of making videos, 200k+ subs, and 7M+ views. WTF are you guys doing?!"

Given the widespread nature of the deletions, speculation rose about YouTube's motives. Considering Google's existing restrictions on crypto and blockchain ads, some members of the crypto community believed the video-sharing platform was purposefully participating in censorship.

YouTube eventually released a statement, saying that the deletions were accidental and urging content creators to appeal any decisions they felt were made in error. For content creators like Chris Dunn, however, going through the appeal process in order to have the videos reinstated did not always work. Dunn said, "Today, YouTube not only took down the videos that they reinstated yesterday, but they took down at least one other video that they'd never taken down before."

In March 2020, YouTube's parent company Google announced a number of measures it was taking to ensure the safety of its employees in light of the unprecedented COVID-19 pandemic. Google and its subsidiaries would begin to limit the number of employees coming into the office in order to reduce the spread of the disease. On March 15, YouTube published a statement detailing how its move to limit employee activity within its centralized working spaces would affect content creators.

The video-sharing platform said that automated systems "will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place."

Unfortunately for creators on YouTube, the measures would mean an uptick in video removals. While AI is helpful in many ways, the technology lacks nuance and is likely to flag content as violative when it is in fact not. Knowing this, YouTube stated it would not issue a punitive warning against creators whose videos were flagged unless it was determined beyond doubt that the content was violative.

The platform also reiterated its open-door appeal policy, which it believes gives content creators the power to overturn a decision they feel is unfair. YouTube further noted that the process would take longer because of the aforementioned employee measures.

Since YouTube began enforcing its workforce policies, crypto and blockchain content has seen an uptick in deletions. Complaints that began in March have grown into a loud buzz as the year has progressed. For instance, Lark Davis, who publishes content under The Crypto Lark channel, had 11 videos deleted, all in April. The videos were mostly crypto-related news, the most recent being an interview with blockchain evangelist Andreas Antonopoulos. YouTube eventually reinstated some of his videos.

Other content creators were not as lucky and had their channels banned outright. Blockchain education channel BTC Sessions and popular crypto YouTuber Tone Vays both lost access to their channels for around 24 hours.

Creators like Crypto Crow, Ivan on Tech and The Moon, and even the Roger Ver-led Bitcoin.com channel were banned, albeit temporarily, from the video-sharing platform.

While YouTube is not working out for crypto and blockchain content creators, many feel beholden to the platform due to a combination of factors. Even so, they are now starting to look for other hosting options for their content, especially decentralized platforms. After their channel was reinstated, BTC Sessions, for example, announced that they would begin to publish content on other platforms, such as Twitter, Twitch, and blockchain-based DLive.

The Blockchain Education Network (BEN), a YouTube channel that has also been flagged, started to publish content on DTube and LBRY - both blockchain-based platforms. While DTube has a sizeable viewership, it struggles with bandwidth and uploads can fail.

Other decentralized platforms that crypto content creators have been leveraging are Hive and 3speak.

While there are options for content creators to migrate to, YouTube's superior bandwidth and market reach are still hard to beat.

PewDiePie, presumably the world's biggest independent video content creator, whose real name is Felix Kjellberg, is a good example of this. After a year of livestreaming exclusively on DLive, he went back to YouTube, where he signed an exclusive streaming deal.


Social Media Deplatforming: Big Tech's Gag Order – Dateway

Ever since the likes of Alex Jones and Milo Yiannopoulos were kicked off their dominant platforms, debate has grown over whether tech companies should be armed with the power of deplatforming.

Deplatforming isn't a concept exclusive to technology. Some of the earliest forms of deplatforming were seen on college campuses when controversial speakers were invited. Afraid of backlash from students, parents and even the general public, college administrations would preemptively ban certain speakers.

The crux of that argument still holds, perhaps even more so today. The purpose of deplatforming is fundamentally to restrain the speech of certain individuals by removing the platform they use to express their opinions, thereby withdrawing the medium through which they spread their message.

Typically, these opinions tend to be polarizing, which is why deplatforming is considered a form of political activism.

With social media, the worry is greater. Opinions posted tend to be unfiltered. That, coupled with social media's massive reach, provides a compelling platform. You don't need to be an esteemed philosopher or scientist to air your view. On such a platform, qualifications don't attract views; personalities do. Any convincing individual can thrive there.

That is why the likelihood of large masses of people getting brainwashed is higher and, sadly, ever-increasing. Big Tech has recognized this liability and has become an increasingly vigilant watchdog of what is posted on its platforms.

Deplatforming isn't limited to social media sites. Any platform that allows such individuals to benefit from it has become notoriously vigilant. Recently, many GoFundMe pages and PayPal links have been taken down on such grounds.

While the area hasn't seen much research funding, the few studies that exist claim that deplatforming, at least in the long run, significantly reduces a person's audience. While this is debatable, one research lead had this to say:

"Generally the falloff is pretty significant and they don't gain the same amplification power they had prior to the moment they were taken off these bigger platforms."

Essentially, the audience that these individuals draw from Facebook, Twitter or YouTube doesn't carry over to other platforms. Cutting that out usually means removing a vast set of eyes.

Another study, by Georgia Tech, examined the effects of deplatforming subreddits filled with hate speech. The crucial finding was that reducing hate in one place reduces hate elsewhere, throughout the platform.

The study reported other findings about these hate-speech-spewing redditors as well.

The greatest power of social media is its options. The lack of a monopoly allows individuals to switch as they wish and carry on with their lives, which undermines the power of deplatforming. However, deplatforming lands its greatest blow when it is coordinated. Take Alex Jones, for example. In just a day, Facebook, Spotify, Apple and YouTube banned him, cutting away millions of his listeners and viewers. Whether it was an ethical or democratic move is questionable, but it undeniably dealt a crippling blow to his business.

In a free market with countless players, there's bound to be one that will accept you no matter how radical you are. While deplatformed individuals were pushed out of the mainstream spotlight, they managed to survive elsewhere on platforms like Gab, Voat and BitChute.

There is an entire suite of extremist versions of the same sites that we scroll through daily. While these platforms don't have nearly the same numbers that mainstream ones do, their user bases are not insignificant. If these people are deplatformed by the mainstream platforms and still prosper without having to change what they put out, was the initial deplatforming really effective?

Does this make deplatforming a tool of the past?

Additionally, pushing these extremists to the corners of the internet doesn't mean that they have vanished; they are just harder to find. These people garner loyal fanbases, and relegating them to the depths of the internet doesn't necessarily mean that their ardent fans won't follow.

These alternative platforms may not be as popular, but filling them with extremists is bound to create a concentration of hate speech. They become echo chambers in which radical thoughts go unchecked and amplified. This alienation from mainstream media is dangerous for two reasons.

Firstly, this alienation leads to a limited social circle, usually one bound by the same political views. In an environment of similar lines of thinking, the same radical thoughts get appreciated, and participants become unaware of the diametrically opposing views out there. This leads to hyper-radicalization on both sides.

This usually isn't conducive to any debate. Secondly, such a system removes the space for constructive debate, even if that was a possibility in the first place. If people are separated onto different platforms based on their views, everyone just begins to live in their own bubble.

Such a system curbs debate. Pushing people away fragments political discussions. While the media doesn't often portray it as such, people do listen to reason during discourse. Deplatforming removes even the opportunity to do so.

This segregation has already shown its teeth. Many people noted that if the shooter, Robert Bowers, had posted this message on a mainstream media site instead:

"HIAS [Hebrew Immigrant Aid Society] likes to bring invaders in that kill our people. I can't sit by and watch my people get slaughtered. Screw your optics, I'm going in."

It would have been flagged sooner, and the casualties could probably have been minimized. However, since it was posted on Gab (the alt-right's Twitter), his followers didn't seem to have any objections to his statements. Had there been no separation, this might have been preventable.

Deplatforming has always been sold as preventing violence and curbing the spread of socially destructive misinformation, but in truth it has always been a form of virtue signalling. In today's age, where everything is fueled by profit, this is yet another aspect of a product or service that can be monetized. If you can show that you don't even want to be associated with these radical views, your brand becomes more attractive. Whether or not this is the ulterior aim, it gives undue power to these tech companies.

Silicon Valley has been handed a very potent tool of censorship, and if history is any indication, it will misuse it if it has not already (allegedly, only alt-right figures have been targeted).

The issue of censorship is age-old. The line between free speech and hate speech is blurred, but we need a better way to moderate hate speech. Deplatforming already seems like a solution of the past, a battle of increasingly short-lived victories. While it has served its purpose of limiting the spread of hate speech, we need a solution that identifies and prevents that spread altogether. Such an approach was a luxury we couldn't afford in the days when we banned college speakers, but times have changed. We need a lasting solution, one based on societal consensus.


On the ethics of deplatforming : stupidpol

I've just watched this panel discussion and found it to be pretty interesting, not only on the basis of the topics covered but also the environment in which it took place and how the various parties responded to each other. We've all likely seen this a number of times by now: a college hosts a speaker who is persona non grata to the activist left, and the event is disrupted in various ways because of this. (I recommend watching the whole thing if you have the time, but the meatiest parts are the walkout at the 20-minute mark and the rageful questioner around 1:02:30.) The question I wanted to raise, though, is this:

We can all agree that this type of response to this type of event is out of line. However, as an extreme counterexample, I think we can also generally agree that disrupting, say, the meeting of an armed militia who are actively conspiring against our own community is morally right and even demanded of us. Somewhere in between these two near extremes (I only say near because the activists could very well have chosen to machine gun down everyone in the room, etc.) exists a boundary that separates just disruption from unjust disruption. The question is, where does that boundary lie, and can it be rendered clearly? Certainly acts of sabotage have their place in political conflict, and while the saboteurs in this video are clearly in the wrong, I can conceive of a similar instance in which it's not cutting off a productive discussion, but rather disrupting the stoking of dangerous attitudes and intentions.

As a related and more immediate example (because I'm arguing with a friend about it right now), I think it was morally wrong of Stephen Colbert to have Donald Rumsfeld on his pleasant late night talk show because it humanizes him. My friend argues that it gave him an opportunity to ask him uncomfortable questions on a national stage. And I'd even say that there's merit to that in principle, but that Colbert failed by not being sufficiently antagonistic (i.e. Rumsfeld didn't leave the interview a weaker man than when he came on).

So what do you think? I have no clear answers of my own, which is why I'm throwing it to the crowd in hope of insight. Where and how do we draw the line, and how do we communicate that line to others so that we might form some consensus?

e: I sent this to three of my friends and they all understood the question perfectly and had something interesting to say about it. C'est la vie, I guess.


Milo Yiannopoulos is proof that deplatforming works

Wait, Milo Yiannopoulos is still on Facebook?

Late Friday night, former Breitbart golden boy and rallying figure for the alt-right, Milo Yiannopoulos, complained in the comments section of a Facebook post about how hard his life had become.

"I have lost everything standing up for the truth in America, spent all my savings, destroyed all my friendships, and ruined my whole life," Yiannopoulos wrote. "At some point, you realize its occasionally better to spend the money on crabs and cocktails."

Yiannopoulos later characterized the comment as "casually snapping" at someone, but his words highlight a greater point: that de-platforming hate-mongering internet celebrities actually works. It reduces the influence pernicious trolls like Yiannopoulos can have on national discourse. And it makes their speech, though still hateful and still free, do less harm.

For those of you who have forgotten about this once-relevant person, Milo Yiannopoulos is the former tech editor of Breitbart. The marriage of Yiannopoulos and his devoted alt-right social media following with Breitbart helped catapult the site into the influential outlet it became in the lead-up to the 2016 election.

For a few years there, Yiannopoulos was a reigning troll of the alt-right. He championed the ability to demean anyone anywhere, and called it free speech. Notoriously, he dumped approving gasoline on the Gamergate controversy, in which trolls doxxed and harassed women who were calling for more diversity, and less toxic masculinity, in video games. He has worked to legitimize the alt-right and white nationalist movements by working hand-in-hand with known neo-nazis to bring their positions out of the internet shadows and into the light of day; his former staffer was a participant in the deadly 2017 Charlottesville Unite the Right Rally of white supremacists. Yiannopoulos' rise and influence crystallizes how social media can amplify a fringe voice by coalescing followers and normalizing once-abhorred opinions and groups, which leads to real world violence.

Eventually, however, Yiannopoulos took it too far for social media, his speaking sponsors, and even his bosses to handle.

In 2016, Twitter permanently banned Yiannopoulos for his participation in a targeted racist harassment campaign against comedian Leslie Jones. In 2017, Yiannopoulos resigned from Breitbart amidst outrage over comments he made seemingly defending pedophilia. That also resulted in the termination of his book deal with Simon & Schuster. Universities canceled multiple speaking engagements, including his 'Free Speech Week,' amidst protests against his ideas on, well, everything. And as recently as last week, Politicon pulled him from the speaking lineup that was to be his return to the speaking spotlight.

"My events almost never happen," Yiannopoulos wrote in the same comment. "Its protests, or sabotage from Republican competitors or social media outcries. Every time, it costs me tens or hundreds of thousands of dollars. And when I get dumped from conferences, BARELY ANYONE makes a sound about it not my fellow conservative media figures and not even, in many cases, you guys."

Milo's events don't happen because his words, and the real-world action they've inspired, triggered "de-platforming." De-platforming is the idea that the best way to combat hate and vitriol in the real world is to take away amplification, usually online. It most recently regained prominence amidst the wide-scale ban of Alex Jones and InfoWars from every major platform he had, except Twitter.

Yiannopoulos' recent comments made waves on Twitter, and he took, once again, to Facebook to respond. In a post on Sunday, he apologized for being "too real," and also called himself a superhero. But he resolved to keep fighting his good fight, and stick it to his haters by never backing down.

"Since the best form of revenge is to stick around to make their lives hell, my critics have done me a favor again by reminding me that what they really want is to shame and humiliate me into silence," Yiannopoulos wrote. "THAT WILL NEVER FUCKING HAPPEN."

But here, Milo is missing the point: he can keep talking, but it just doesn't matter if nobody is around to hear him.

Yiannopoulos' general misery and fear of violent retribution aren't something to celebrate. But now, Milo only makes news when something or someone cancels him; when people say "no" to his insistence that white privilege is fake or that Black Lives Matter is a hate group. The fact that Yiannopoulos has found his reach and influence so depleted that he can't get new gigs and takes to Facebook comments to complain shows the real-world effect that de-platforming a toxic public figure can actually have. Indeed, the pro-InfoWars fervor surrounding Alex Jones' ban from social media lasted about 24 hours; much more enduring is his silence.


Is cancel culture silencing open debate? The perils of shutting down disagreeable opinions and arguments – ABC News

Last week, 150 high-profile authors, commentators, and academics signed an open letter in Harper's magazine claiming that open debate and toleration of differences are under attack. Signatories included J.K. Rowling, Margaret Atwood, Steven Pinker, Gloria Steinem, and Noam Chomsky.

While prefacing their comments with support for current racial and social justice movements, the signatories argue there has been a weakening of the norms of open debate in favour of dogma, coercion, and ideological conformity. They cite "an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty."

Rowling's own participation comes in the wake of widespread backlash against her controversial comments on transgender issues and womanhood. Actor Daniel Radcliffe (Harry Potter himself) joined the chorus of disapproval over her comments, arguing they erased the identity and dignity of transgender people. Employees at Rowling's publisher subsequently refused to work on her forthcoming book.

The Harper's letter invokes similar cases of what it sees as punitive overreactions to unpopular views, suggesting they formed part of a larger trend:

Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes.

The reference to editors being fired is perhaps the most well-known recent incident. Last month, the New York Times published an opinion piece by Republican Senator Tom Cotton calling for the military to provide an overwhelming show of force to restore order in US cities during the protests over the killing of George Floyd. The piece's publication attracted immediate criticism for promoting hate and putting black journalists in danger.

In response, the editorial page editor emphasised the newspaper's longstanding commitment to open debate and argued the public would be better equipped to push back against the senator's policy if it heard his view. This defence failed, and within days he resigned.

Unsurprisingly, perhaps, the Harper's letter has received spirited critique. Some commentators noted past cases where the signatories had themselves been censorious. Others argued that any perceived threat was overblown. Indeed, the link the open letter draws between a repressive government and an intolerant society may seem a long bow to draw. There is a world of difference between the legal prohibition of speech and a wave of collective outrage on Twitter.

Yet it is nevertheless worth considering whether important ethical goods, including those commonly invoked in free speech arguments, are threatened in a culture of outrage, de-platforming, and cancelling.


Almost everyone would agree some types of speech are beyond the pale. Racial slurs don't deserve careful listening and consideration. They require calling out, social censure, and efforts at minimising harm.

Rather than objecting to outrage per se, the Harper's letter asserts there is a broadening in the scope of views that attract punitive responses. This seems plausible. In recent scholarly work on the tensions between censorship and academic freedom on university campuses, both sides of the dispute acknowledge that in the current environment virtually all utterances offend someone.

And yet, perhaps there are good reasons for this broadening of scope. In each of the cases raised in the letter, there were seemingly sensible reasons, resting on judgements about the speech in question, for applying social sanctions.

For someone who is genuinely concerned that speech is wrong in these ways, it will seem not just morally permissible to take action against the speaker; it will feel obligatory. But several concerns arise when we attach punitive consequences to people's speech based on its perceived moral wrongfulness (as opposed to simply arguing it is mistaken or false).

If we think a person's view is wrong and immoral, we might suppose there is no great loss if a debate is derailed and the person sanctioned. But there are genuine ethical concerns here.

First, even if we think harmful or offensive speech deserves punishment or silencing, we still need to make sure the punishment fits the crime. For example, in a given case, calling out speech as intolerant, tone-deaf or racist, and withdrawing one's support for a person, might be clearly justified. But stronger consequences, like personal abuse, continued hounding, doxing, enforcing social ostracism, refusing to work with the speaker (if they are a colleague), demanding their removal from their position, or issuing threats, may be quite inappropriate and even wrongfully harmful themselves.

Similarly, once we start moving to the harsher types of punishment, what we might call rule of law considerations become increasingly relevant. These require that norms be clear and proscriptive, and that the process of guilt finding is done impartially and with procedural safeguards. Without attention to such matters, abuse of power can occur, and a chilling and anxiety-ridden climate can ensue. One of the core ethical attractions of cancel culture lies in its ability to create consequences and accountability for wrongdoers. But as we all know, those wielding coercive power and issuing punishments to ensure accountability must themselves be accountable in how they use that power. Otherwise fears of mob rule, arbitrary power, and bullying arise.

Second, public deliberation is itself a critical source of legitimacy. The fact that different views are widely heard and inclusively considered provides a reason for accepting collective decisions. This can happen in a local way, where a family, group or team discusses a matter. Even if they don't resolve their disagreements, a constructive deliberation where everyone's views are considered may still help the group move forward consensually. But inclusive deliberation is also important for maintaining the legitimacy of larger institutions. Democracy itself assumes citizens can hear different arguments, evidence, and perspectives. If significant parts of the political spectrum are no longer tolerated in public discourse, then social institutions lose this important type of legitimacy.

Third, listening to others with different opinions, and engaging with them, can help us understand their views and develop more informed versions of our own positions. On the flipside, being consistently outraged by opposing viewpoints provides a ready reason not to consider them. Indeed, we currently have a surfeit of challenges that we can level at a view we dislike by saying it is offensive, harmful, unhelpful, dog-whistling, said in bad faith, driven by ulterior motives, punching down, drowning out other voices or guilty by association. Use of these easily available options can entirely remove the hard work of trying to understand, and then to interrogate and perhaps refute, our opponents' arguments. When used in a pervasive and widespread fashion, such challenges feed directly into major concerns with political tribalism, confirmation bias, group-think, and group polarisation effects.

Fourth, shaming people can cause a persuasive boomerang to occur. When people feel others are trying to control them, they can become even more attached to the view others are trying to combat. Perversely, our actions can encourage the very belief we are trying to eliminate.

None of these concerns categorically rule out attaching punishing consequences to hateful or harmful speech. But they do imply the open letter has a point worth serious attention. Seeing mistaken views as intolerable speech carries genuine ethical costs.

Hugh Breakey is a Senior Research Fellow in moral philosophy at Griffith University's Institute for Ethics, Governance and Law. Since 2013, he has served as President of the Australian Association of Professional and Applied Ethics. An abbreviated version of this piece appeared in The Conversation.


Judge allows withdrawal of Spencer’s attorney – The Daily Progress

A judge in a long-running Unite the Right lawsuit has recently moved to resolve several side issues to the main case, in one instance pointing out that a campaign to discredit his law clerks was irrelevant.

The Sines v. Kessler case has slowly worked its way through the Charlottesville U.S. District Court since it was filed in October 2017, less than two months after the deadly rally. The lawsuit targets key organizers and participants of the white supremacist rally and was filed on behalf of a number of Charlottesville-area residents by Integrity First For America.

Recently, U.S. District Judge Norman K. Moon denied a motion from defendants Jason Kessler, Matt Parrott and Nathan Damigo that called on the judge to recuse two of his law clerks for alleged personal connections to a plaintiff, Elizabeth Sines.

In Moon's denial, he wrote that the defendants were "pushing on an open door, and not asking for any relief that is different from the status quo" because the named clerks had never been involved in the case.

"They promptly informed me of the potential conflict and have heeded my instructions not to work on this case," Moon wrote. "Rather, my career clerk has assisted me on this matter whenever I have found the assistance of a law clerk useful. Such measures are routine in the event of a law clerk conflict."

Moon's order came the same week U.S. Magistrate Judge Joel C. Hoppe, who has also presided over the case, ruled that John DiNucci, an attorney representing white supremacist and University of Virginia graduate Richard Spencer, could withdraw from the case.

During a telephonic hearing earlier in the month, DiNucci argued he should be allowed to withdraw as Spencer's attorney because he had not been paid in months. Spencer claimed his financial woes were due to deplatforming efforts and asked for a week to develop a payment plan, which was granted.

According to Hoppe's order, on June 18 DiNucci informed the court that he stood by his motion to withdraw, which the court subsequently granted.

Another issue plaguing the case was seemingly resolved during a Thursday telephonic hearing.

Since January, defendant James Alex Fields Jr.'s state trial attorneys, Denise Lunsford and John Hill, have repeatedly filed motions arguing they cannot produce documents responsive to the case. The attorneys have argued that the documents, though non-privileged, cannot be produced because of a confidentiality agreement they signed with Charlottesville Commonwealth's Attorney Joe Platania.

Hoppe denied their motion to quash earlier this month, which prompted a motion to reconsider.

Less than a half-hour prior to Thursday's hearing, Lunsford filed supplemental evidence of a 2017 Charlottesville General District Court order that prevented the dispersal of case materials in an effort to protect the privacy of Fields' victims.

Lunsford argued that this order, which she had been unable to locate prior to the hearing, prevented her and Hill from providing many of the documents requested by the plaintiffs. She also argued that the discovery requests put an undue burden on her and Hill, who are not parties in the lawsuit and only represent Fields at the state criminal level.

Describing this evidence as "beyond the 11th hour," Joshua Siegel, an attorney representing the plaintiffs, countered that the court order did not meet the standard for a motion to quash and should have been brought up months prior.

After a nearly two-hour hearing, Hoppe said he would respect the Charlottesville General District Court's order and the parties would need to request a change to the order, which could then allow Lunsford and Hill to share responsive documents.

According to Siegel, the plaintiffs expect to wrap up depositions in the next three weeks and the case is still on track for a three-week trial in October.


The boundaries of free expression are up for debate in the Age of COVID-19 and Black Lives Matter – capitalcurrent.ca

Amid calls for racial justice in Canada, the United States and around the world, a letter signed by a group of prominent writers, artists and academics, including Margaret Atwood and J.K. Rowling, argues that supporters of social justice should also practise civility and not stand in the way of a free exchange of ideas.

"A Letter on Justice and Open Debate," published in Harper's magazine on July 7, says that restrictions on expression and debate weaken full participation in democracy.

The letter has been attacked for its vague wording and lack of references to specific terms or examples of what it is describing.

While the letter makes no specific references, many believe it is decrying what is known today as call-out culture or cancel culture.


The letter has sparked heated public discussion about whether support for a public figure can be withdrawn in response to what critics deem objectionable behaviour or opinions.

The letter is viewed as continuing an argument on the appropriate response to what many see as harmful actions or comments against groups in society. Many also see it as symptomatic of a divide between a long-standing liberal view of free speech and an emerging movement that demands greater accountability for speech and behaviour that dehumanizes certain social groups, such as racialized people and trans individuals.

"The way to defeat bad ideas is by exposure, argument and persuasion, not by trying to silence or wish them away," the letter reads. "We refuse any false choice between justice and freedom, which cannot exist without each other."

The letter praises movements for racial and social justice as a "needed reckoning," but goes on to warn against the stifling of debate.

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted.

We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought.

Among the supporters is James Turk, director of Ryerson University's Centre for Free Expression. He said that the wording of the letter aligns with a robust protection of free expression.

"It's expressing concern that there's growing pressure not to debate things, but to simply shut people up."

Abigail Curlew, a journalist and doctoral researcher whose work focuses on social media, Internet doxxing and anti-trans digital vigilantes, has her reservations, however.

"These people are mostly very rich, have humongous platforms, but to shield themselves from criticism that might be either angry or very sharp, it removes that layer of accountability that they have as public figures," said Curlew.

Asam Ahmad, a writer and community organizer who has written several pieces on the subject of disposability culture, disagrees with the premise of the letter.

"I don't think this letter is about free speech, but about power," he told Capital Current in an email.

Ahmad cited the example of Masuma Khan, who faced disciplinary action at Dalhousie University for a Facebook post related to Canada's colonial past and white fragility.

"If this is going to be framed as an issue of free speech, let's seriously discuss who has free speech in this country."

A number of the letter's signatories have faced intense criticism in the past for their comments and actions, particularly on topics around race, gender and sexuality.

Bari Weiss, for example, is an American writer who has been denounced for celebrating cultural appropriation.

Jesse Singal and Katie Herzog have garnered criticism for their writing about trans people. Both Singal and Herzog have written stories about people who have de-transitioned, a perspective that critics argue amplifies the phenomenon beyond how often it actually occurs. Singal's and Herzog's writings have also been criticized for misrepresenting the nuances and complexities of trans people's experiences and the systemic barriers to adequate health care. Singal has argued against the idea that marginalized groups like transgender people should not have to debate their right to exist.


Canadian psychologist Steven Pinker has also been criticized for suggesting that women may be underrepresented in fields of science because of innate biological differences. Pinker has also been accused of dismissing on Twitter the impacts of police brutality on Black people.

Atwood has faced intense criticism in recent years for signing a letter supporting writer and teacher Steven Galloway, who was fired by the University of British Columbia after allegations of bullying and sexual assault. Galloway is suing his accuser and 20 others.

Atwood and the other signatories of that letter lamented Galloway's damaged reputation while offering no consolation for the complainants, whose names were leaked to the public and who were smeared in the process.

Rowling has experienced public backlash in recent weeks for comments she made about trans people.

She insisted on Twitter that only women can menstruate, which critics pointed out ignores the trans men and non-binary people who menstruate. Rowling defended her initial tweet, suggesting that it is not hateful towards trans people to equate sex with gender.

"I do think that the letter is just another kind of manifestation of ways to excuse bigotry," said Curlew, drawing attention to what she considers oppressive and dehumanizing statements made by some of the signatories.

Turk said that the letter should be judged on its content rather than looking at the names of people who signed it.

"Criticizing the document or the signatories because somebody who you find reprehensible somehow pollutes everybody who signed the document, I think, is really unfair."

Turk warned against the societal limitation of any speech, even if it protects against perceived hate speech. To do so, he argued, would pave the way to allow anyone in society to do the same.

"When a group of people decides that they can take that right to act on behalf of all of us onto themselves, it's what in (the) old days we did call vigilante justice."

"If someone says, 'Well, this crosses the line and it's not illegal, but I'm going to stop it from happening,' then I think there is a problem," said Turk. "If they say, 'This crosses the line and I'm going to denounce them publicly, I'm going to stand and protest, or I'm going to encourage everyone to boycott them,' all of that is part of free expression."

He added: "But if you physically prevent them from being heard, then I think we have a problem because then you're taking power unto yourself."

Ahmad challenged the notion that de-platformed figures experience a chill on their free speech.


De-platforming refers to a type of activism that seeks to deny a controversial speaker access to a venue or platform to express their opinions.

When very visible people with large platforms are cancelled, they are often given even larger platforms to discuss their cancellation.

Curlew acknowledges Canadian law does protect free speech (with such exceptions as advocating genocide or violence) but points out it does not also guarantee a platform to amplify expression.

"If a university or a social media platform removes you from that platform because of hate speech, it's their decision based on multiple things like public relations and liability, and maybe morals, to decide who they platform," she said.

Turk argues that if the right to free expression is weakened, the hardest hit will be marginalized folks.

"The people whose voices are suppressed, and whom constitutional protections for free expression protect most, are the people who are challenging conventional wisdom and privilege and orthodoxy," he said.

Curlew said, "I have concerns that the free speech that (the letter signers) are advocating for is going to create, and is already creating, a threatening environment that burdens the free speech of marginalized people."

She said that the idea that space must always be made for unfettered speech, including the possibility of hate speech towards marginalized groups, fails to take into account the social context of those groups, citing the example of the continued harsh treatment of transgender people.

"We've gone through a century of really harsh treatment from the state, from police and from the public."


In a published survey of 433 transgender Ontarians in 2013, the Trans PULSE Project found that experiences of transphobia were nearly universal among trans Ontarians, with 98 per cent reporting at least one experience of transphobia.

"When you have entire groups of people being marginalized in incredibly violent ways, you can't just expect them to engage in theoretical debate about whether they're human or not. It's not feasible and it's kind of a violence in and of itself," said Curlew.

Curlew also contextualized the recent experience of the trans community, which currently faces rising hate crimes and hate speech from anti-trans far-right groups.

"When we speak out we end up getting silenced by getting doxxed, harassed, by getting death threats, threats of sexual assault. So this chills the climate of free speech," she said.

"How are we supposed to engage with the public discourse if the consequences of engaging are that we get attacked or get exposed to violence?"

While Curlew believes it is important to hold people to account for their harmful actions, she is ambivalent about de-platforming.

How do you get a person to recognize the harm that they do through dialogue, and find ways to extend chances for redemption?

At the same time, Curlew acknowledges the limits of that type of approach.

"I recognize that not everyone has the patience to do that. And not everyone's safe enough to do that. And it's a really slow process. And sometimes you just can't wait to change people's minds when your rights are about to be taken away," she said.

Ahmad said the Harper's letter distracts from the broader social context, which exposes the true stakes at play.

"Letters like this actually make it far more difficult to have a serious, engaged, complicated conversation about disposability culture and toxic forms of call-out culture, both of which are more harmful and prevalent amongst marginalized communities, and impact writers and artists from these communities far more than those who already hold vast amounts of power (individually and in their professions)."

"I would even go so far as to say that letters like this make my life and my work far more difficult."


Owen Benjamin and His Personal Army (Collided With Reality) – Patheos

Hi and welcome back! Need a jolt of good, or at least funny, news for your weekend? This might just do ya. Recently, Patreon deplatformed an alt-right nutjob named Owen Benjamin. He decided to hit back at them in a novel way that has backfired not only in his own face but in those of the fanboys who decided to act as his personal army. Today, let me show you what happens when someone's personal army turns out to be the Persians, not the Spartans.

Lately, various social-media platforms have been cracking down on right-wing nutjobs (RWNJs) and alt-right loons and their wackadoo ideas. Every day, it seems, some new story emerges about someone in that crowd losing accounts on sites like Twitter, Facebook, YouTube, and Instagram.

Instead of learning to play by the rules of the privately-owned companies granting them these accounts, the nutjobs in question just keep drilling down harder on their wackadoodlery.

Indeed, alt-right loons operate a great deal like toxic Christians. They're so similar, in fact, that it's all but impossible to tell if any given alt-right loon is a toxic Christian or an atheist (in my experience, they're divided about 50/50; in today's case, our subject considers himself a firm Christian). Both groups use the same tactics, attract the same kinds of recruits, suffer from the same mistakes in their thinking, want the same basic things, and hate the same outgroups.

More than that, even, they try their hardest to find some twist of Martian logic that'll become the magic key to forcing these privately-owned companies to put up with them and their noxious presence.

And lately, some of them think theyve found that magic key.

Patreon is a social-media and crowdfunding service/platform that allows users to offer regular monthly donations to their favorite content creators. This site represents one of the main ways that content creators can earn a living these days. Indeed, I've got a Patreon myself and deeply appreciate my patrons.

Like all such privately-owned sites, Patreon maintains a list of terms and conditions for its creators and users. Since right-wing nutjobbery largely violates any meaningful ethical boundary one can imagine, the site began cracking down on such nutjobs toward the end of 2018. At that time, they whacked such alt-right luminaries as Sargon of Akkad and Milo Yiannopoulos.

One of the alt-right's current darlings du jour, Owen Benjamin, used to have a fairly thriving lil account there. He's a sometime comedian, podcaster, and actor with some incredibly toxic and erroneous opinions. Last winter, Patreon's site owners caught up with him. As the Daily Dot tells us, they banned him. They weren't the only ones banning him, either. Instagram, YouTube, PayPal, and Facebook all joined that party, most citing repeated violations of their clearly-stated rules about hate speech.

But Owen Benjamin didn't take that bannination sitting down. No way, no sir! He protested and raised a personal army to try to fight the ban by forcing Patreon to take him back.

(Also, at some point he tried to ban-evade with alternate accounts, which he's also lost now. I've just got no words. What a whiny little control-grabby child. I just want to tell people like that to grow a little goddamned dignity. Never in the world would I ever want to be part of any site or project that didn't want me involved.)

Two of the most idiotic liars on the internet I have ever seen.

A Redditor regarding Vox Day and Owen Benjamin (and he ain't wrong, either)

Over the years, Owen Benjamin (a Holocaust denier and anti-vaxxer) has built up a large army of creepy, fanatical fanboys. They call themselves "bears," with Benjamin himself wearing the nickname "Big Bear." And if drive-by Christians think R2D mods can be strict, well, all I can say there is that we're the kiddie league compared to Owen Benjamin's tightly-moderated spaces.

Shortly after Owen Benjamin's bannination, his fellow toxic Christian and wackadoo Vox Day claimed to be filing something against Patreon. He's since backtracked that claim, as apparently both of these conspiracy theorists often do. It's possible Vox Day (real name: Theodore Beale) simply meant he was helping Benjamin with this ludicrous plan of his, but who even knows or cares.

Either way, at first Owen Benjamin filed suit for USD$2.2M. Then, he upped that figure to $3.5M. And then, apparently, he asked some of his followers to file lawsuits alongside him. All in all, about 100 of his fanboys filed lawsuits. All of them appear to have used Benjamin's own lawyers to do the filing, and all demanded that Patreon either deal with them or pay Benjamin the $3.5M he wanted.

It was sheer lunacy. I really don't know what they thought was going to happen. Maybe Owen Benjamin thought this plan would get him the money he needs to build the Northern Idaho fantasy ranch of his dreams, which he seriously named BearTaria (oh my sides).

(Stay tuned for my future fundraising drive for a Tuscan dream estate of my very own, which I've dubbed Villa Space Princess. /s)

Whatever Owen Benjamin and his fans thought would happen, Patreon's actual response was to sue 72 of those fanboys. According to that Daily Dot article (relink):

"This lawsuit is about keeping hate speech off of Patreon," the company told the Daily Dot via email. "We won't allow former users to extort Patreon, and are moving these frivolous claims to court where they belong."

Hmm, they don't sound in the slightest bit nervous about anything. Maybe that's because they instituted two rules in January to both [prohibit] users from filing claims based on the platform kicking off someone else and [require] any who do so to pay the company's attorneys' fees and costs.

And the fanboys' claims were filed a solid month later, in February.

Oops!

So it seems unlikely that the fanboys will come anywhere close to success here, and they may be on the hook for a lot of money if/when they lose.

These LOLsuit filers seem to be under the impression that Patreon is somehow obligated to deal with their horsesh*t however they wish to dish it to the site. A similarly-minded fundagelical explains the illogic here:

The reason they are in trouble is because they have been deplatforming some of their clients.

They have every right to do so, don't they?

Well, no, in fact. They don't.

They have the right to stop providing their own services, absolutely. Twitter can kick off anybody they want, because their service is free. [. . .]

Owen did business with his patrons, and Patreon, in their Terms of Service, explicitly repudiated any responsibility for these individual transactions.

This is huge.

When Patreon kicked off Owen Benjamin, they weren't just removing somebody they didn't like. They were interfering with Owen's personal business relations with his patrons.

Intentional interfering with contractual relations has another legal name, and that is: tortious interference.

So there you have it. The alt-right is now trying to claim that by blocking Owen Benjamin from their site, Patreon is in effect blocking him from the people who have contracted with him through the site. After crowing about the alt-right's magical new key to what they're calling "re-platforming," this guy gloats:

Deplatforming has been the recent norm, but that is all going to change.

Sure it is, Jan.

He might be the most selfish, shameful human being I've ever seen. He's pissed that he couldn't raise $2 million off of them in 3 days in the middle of the greatest depression this country has ever seen. wow

A YouTube commenter regarding Owen Benjamin (and that one wasn't wrong, either)

That fundagelical's gloat-rant was written back in January or so, and so far I haven't seen any real success with the tactic. Apparently Vox Day found some success before that in using it to get back on IndieGogo after they banned him, but other sites seem to have learned from IndieGogo's mistakes and have adapted as needed, as Patreon clearly did in January.

In fact, this latest news about Patreon suing Owen Benjamin's fanboys (no, I will not call them "bears") seems to indicate that the tactic will just become yet another failure in the alt-right World of Failtrains.

Gosh, I guess Owen Benjamin is gonna hafta find another way to gain his Idaho Fantasy Ranch.

NSFW for language. Seven minutes of an alt-right loon pouting and whining about how little money his fans are giving him for his dumb Idaho Fantasy Ranch dream. A critic of his uploaded this.

The rest of us might think the alt-right loons' new strategy makes them look like sovereign citizens or something, but they're dead serious about it, for now at least.

For now, it sure doesn't look like Owen Benjamin is going to get to force anybody to put up with him any time soon. Poor baby.



Minds Wants to Pay You to Post on Its Social Network, and Is Expanding Into India – Gadgets 360

"It's much easier to grow when you spy on people. Our growth has been slower because we don't spy on people. It's much easier to provide recommendations when you're following people around and watching everything they do," says Bill Ottman, CEO and co-founder of Minds, an open source, decentralised social network that uses cryptocurrency to reward users for engagement. That's a lot of buzzwords, but Ottman, whose aim is to provide a spying-free alternative to Facebook, says, "Facebook and the others are closed platforms that are extracting value from the users."

Ever since the government banned 59 Chinese apps, including TikTok, there has been a scramble to gain ground in India. Various made in India alternatives have sprung up, such as Roposo, Moj from Sharechat, and newer alternatives like Chingari and Mitron. We've also seen the entry of Reels from Facebook's Instagram, which was launched in Brazil, but made a quick appearance in India right after TikTok was banned.

Minds, which has also been slowly growing globally (though its biggest markets are the US and UK), is also keen to make its mark in India. But it's taking on an uphill task: many networks have come up with the stated goal of unseating Facebook, such as Ello, and others have set out to be more open and decentralised, such as Mastodon, but the incumbents are still standing.

Ottman argues that Facebook and other big networks are tainted by secrecy. "Every day there's a new scandal. People are looking for alternatives and want to diversify," he says. In India, the fact that a new app is able to launch every day and claim 100,000 or more new users each day suggests that people are definitely looking for networks, but whether these companies will be able to retain or monetise these new users is still unclear.

However, Ottman believes that as more and more people join Minds, they're going to reach a tipping point. "The trend is towards open source. We've seen this happen in other areas already. We believe, like Linux, Wikipedia, Bitcoin, this is going to happen in social media as well," Ottman says.

Facebook has certainly faced its share of controversy. The Cambridge Analytica scandal during the 2016 US elections was just the tip of the iceberg: during the 2019 general elections in India, reports showed how groups were being created to promote inauthentic behaviour and influence elections. Facebook's WhatsApp, the most popular messaging platform in India, was similarly leveraged to gain votes. Facebook has also been called out for letting US President Donald Trump post what critics say are calls for violence. But can Minds avoid the same trap?

Minds faced its own controversies in 2018, when a lot of hate groups found its free speech ideals a great way to spread their message without worrying about being shut down. After the reports came out, Minds took steps to remove the content, yet it is still working out the line between free speech and hate speech.

Volunteers are now working to translate Minds into different languages, helping it grow in places like Thailand and Vietnam. Yet the numbers are low: according to Ottman, the platform has 2.5 million registered users, around 300,000 monthly active users, and approximately 2 million active visitors. That's about double the registered users from 2018, based on the company's statements, and about triple the MAU from the same time.

"If you look at how the big networks are behaving, take the algorithm on Facebook: you're only reaching around five percent of your followers when you post," says Ottman. "As long as that kind of behaviour keeps coming, they're pushing people away, and people will find other networks where they can get more exposure."

"What we're noticing, in terms of the influencers that are coming and driving a lot of our traffic, is that monetisation has been the main interest of the influencers who have come so far from YouTube," he adds. "A lot of big influencers are scared of losing their revenue on YouTube, or losing their reach on Facebook."

Bill Ottman, of Minds.com. Photo Credit: Andy Culp/Wikipedia

The core proposition of Minds, to Ottman, is privacy. "We're trying to add new features in 2020 to make it more enticing for people to be on board Minds, and more competitive with mainstream apps, while staying true to the ethos of respecting your privacy," he says. To many users, however, the draw is the fact that you can get paid simply for using the platform.

"YouTube pays some creators, but for most of 2020 we have been focussed on monetisation, both blockchain and fiat," Ottman says. "Especially now, with COVID-19, people are really looking for independent revenue streams, and combining social media and monetisation will be something that all social networks focus on more."

To that end, Minds offers a wallet to its users and lets them earn for posting to the network. "The main differentiator is the wallet: you can earn dollars (or rupees, or whatever) or Ethereum or Bitcoin. The gamification element, where you receive payment for engagement, with animations and badges, makes it more engaging," Ottman adds.
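As an illustration only, since the article does not spell out Minds' actual payout mechanics, the basic "earn tokens for engagement" idea can be sketched as a pro-rata split of a daily reward pool. The function, names, and numbers below are hypothetical, not Minds' formula:

```python
# A minimal, hypothetical sketch of engagement-based reward sharing.
# This is NOT Minds' actual payout formula; it only illustrates the general
# idea described above: a daily pool of tokens split among users in
# proportion to the engagement their posts received.

from typing import Dict

def split_daily_pool(engagement_by_user: Dict[str, int],
                     daily_pool_tokens: float) -> Dict[str, float]:
    """Split a fixed pool of tokens pro rata by engagement counts."""
    total = sum(engagement_by_user.values())
    if total == 0:
        return {user: 0.0 for user in engagement_by_user}
    return {
        user: daily_pool_tokens * count / total
        for user, count in engagement_by_user.items()
    }

if __name__ == "__main__":
    # Hypothetical numbers: three users, 10 tokens distributed per day.
    engagement = {"alice": 120, "bob": 60, "carol": 20}
    print(split_daily_pool(engagement, daily_pool_tokens=10.0))
    # {'alice': 6.0, 'bob': 3.0, 'carol': 1.0}
```

Note that any pro-rata scheme like this pays out for whatever maximises raw engagement, which is precisely the incentive problem raised next.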

But this creates a problem: it builds in an incentive to post about topics that generate high engagement, which in turn leads influencers to post on ever more controversial subjects in order to gain visibility. This is an accusation often levelled at journalists, but they are paid a fixed salary; an influencer, on the other hand, needs to keep increasing engagement.

A popular YouTuber, food scientist Ann Reardon, highlighted this problem with a YouTube channel called 5-Minute Crafts, which she said posts unsafe content because it does well on the algorithm and raises more money. In her video, Reardon notes, "It's more clickable, and clickbait content is what's currently working on the YouTube algorithm," and apparently it works on Facebook too.

Ottman agrees that this is a problem. "It is very complex and it is not easy. If you look at how mainstream networks are handling sensationalism and sensitive topics, they're taking a very centralised approach with a small handful of fact checkers and saying 'This is the truth.' We have started a program to create webs of trust through decentralised identity, based on users and content," he says.

"Even within a reputation-type system where users are voting and scoring users and content, you'll still have manipulation with bots and trolls, and it's really an ongoing and never-ending battle against misinformation and spam and bots and trolls. But I do think that the best path is incentivising 'good' behaviour," he adds.
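To make the "webs of trust" idea concrete, here is a minimal, generic sketch of trust propagation from accounts a viewer explicitly vouches for. It is not Minds' implementation; the graph, decay factor, and function name are invented for illustration:

```python
# A generic, simplified web-of-trust sketch: trust flows from accounts a user
# explicitly vouches for to the accounts those accounts vouch for, decaying at
# each hop. Illustration of the concept only, not Minds' actual system.

from collections import defaultdict
from typing import Dict, List

def trust_scores(vouches: Dict[str, List[str]],
                 seeds: List[str],
                 decay: float = 0.5,
                 hops: int = 3) -> Dict[str, float]:
    """Score each account by how close it is, via vouches, to the seeds."""
    scores: Dict[str, float] = defaultdict(float)
    frontier = {s: 1.0 for s in seeds}
    for _ in range(hops):
        next_frontier: Dict[str, float] = defaultdict(float)
        for account, weight in frontier.items():
            scores[account] += weight
            for vouched in vouches.get(account, []):
                next_frontier[vouched] += weight * decay
        frontier = next_frontier
    return dict(scores)

if __name__ == "__main__":
    # Hypothetical graph: alice vouches for bob, bob vouches for carol.
    graph = {"alice": ["bob"], "bob": ["carol"], "troll": ["troll2"]}
    print(trust_scores(graph, seeds=["alice"]))
    # {'alice': 1.0, 'bob': 0.5, 'carol': 0.25} -- 'troll' earns no score.
```

Even in this toy version, a ring of bot accounts vouching for one another scores nothing unless someone the viewer already trusts vouches into the ring; that is the property such systems rely on, though, as Ottman concedes, real deployments still face manipulation.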

This also means that Ottman sees censoring hate speech as a problem. "By banning the content they're making the people more radicalised. Censorship causes more violence than free speech," he says.

In recent times, platforms like Twitter and Reddit have been more active in banning political hate speech. Twitter put content warnings on Trump tweets, and Reddit removed a group called The_Donald, which was seen by many as a source of political hate speech. Facebook has also been criticised for not following suit, with even its own employees staging a virtual walkout to protest the posts.

But Ottman doesn't agree with these moves. "Blocking Trump's tweets, or banning The_Donald, was very short-sighted in my view. There was a study done on Reddit by the Georgia Institute of Technology and the University of Michigan, which analysed hundreds of millions of posts. They studied the 2015 ban that Reddit did," he says.

"The conclusion of the study was that this just caused the trolls to go to other networks, and encode their language on Reddit," he adds.

Minds, whose board includes Daryl Davis, an African-American musician famous for attending Ku Klux Klan (an American white supremacist group) rallies and converting members, follows the same philosophy, according to Ottman.

"All the increases in bans are resulting in greater polarisation. Look at how divided the US is right now. The major social networks are probably the number one contributors to this because of their policies, and the offensive part to me is that they are acting like they are on the moral high ground," he says.

"You should be able to control what you're seeing, and I want to be able to control my experience so that I am not seeing that content, and that is one of the greatest challenges that we are hyper-fixated on right now," he adds.

"We want to make sure that you don't see anything you don't want to see, while also not making the Internet more toxic," Ottman continues. How this is different from deplatforming hate groups and pushing them into smaller and smaller niches of the Internet isn't clear, but Ottman feels that only by engaging with hate groups can we make the world a better place.

Of course, this also means the burden of making the Internet a better place falls on the more moderate users. People who are fomenting hate need to be reasoned with, pacified, and convinced, and that is only possible if we are seeing the very toxic content Ottman wants to let us filter out. There's a level of self-contradiction at play here, which raises questions about how successful Ottman can be; he acknowledges as much, but sticks to his argument that banning speech is not the solution.

"It's way easier to just ban it, to spy on people and feed them good recommendations, and grow the network. But I don't think that giving the control and still staying free are mutually exclusive, it's just a more difficult path," he says.


Continue reading here:

Minds Wants to Pay You to Post on Its Social Network, and Is Expanding Into India - Gadgets 360

Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with – The Conversation AU

Earlier this week, 150 high-profile authors, commentators and scholars signed an open letter in Harper's magazine claiming that open debate and toleration of differences are under attack. Signatories included JK Rowling, Margaret Atwood, Gloria Steinem and Noam Chomsky.

While prefacing their comments with support for current racial and social justice movements, the signatories argue there has been a weakening of the norms of open debate in favour of dogma, coercion and ideological conformity. They perceive

an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty.

Rowling's signing of the letter comes in the wake of widespread backlash against her controversial comments on transgender issues and womanhood.

Actor Daniel Radcliffe (Harry Potter himself) joined a chorus of disapproval of her comments, arguing they erased the identity and dignity of transgender people. Employees at Rowling's publisher subsequently refused to work on her forthcoming book.

The Harper's letter invoked similar cases of what it saw as punitive overreactions to unpopular views, suggesting they formed part of a larger trend:

Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes.

The reference to editors being fired is perhaps the most well-known recent incident. Last month, the New York Times published an opinion piece by Republican Senator Tom Cotton calling for the military to provide an overwhelming show of force to restore order in US cities during the protests over the killing of George Floyd.

The piece's publication attracted immediate criticism for promoting hate and putting black journalists in danger. In response, the editorial page editor emphasised the newspaper's longstanding commitment to open debate, arguing the public would be better equipped to push back against the senator's stance if it heard his views.

This defence failed, and within days he resigned.

Read more: In publishing Tom Cotton, the New York Times has made a terrible error of judgment

Perhaps unsurprisingly, the Harper's letter has received spirited critique. Some commentators noted past cases where the signatories had themselves been censorious. Others argued that any perceived threat was overblown.

Indeed, the link the open letter makes between a repressive government and an intolerant society may seem a long bow to draw. There is a world of difference between the legal prohibition of speech and a wave of collective outrage on Twitter.

Yet, it is nevertheless worth considering whether important ethical outcomes are threatened in a culture of outrage, de-platforming and cancelling.

Almost everyone would agree some types of speech are beyond the pale. Racial slurs don't deserve careful consideration. They require calling out, social censure and efforts at minimising harm.

Rather than objecting to outrage per se, the Harper's letter asserts there is a broadening in the scope of views that attract punitive responses. This seems plausible. In recent scholarly work on the tensions between censorship and academic freedom on university campuses, both sides of the dispute acknowledge that in the current environment virtually all utterances offend someone.

Yet, perhaps there are good reasons for this broadening of scope. In each of the cases raised in the letter, there were seemingly sensible reasons for applying social sanctions. These included judgements that:

the speech was morally wrongful

the speech was gravely offensive

the speech would have seriously worrying consequences. It was unhelpful, harmful, damaging or divisive.

For someone who is genuinely concerned that speech is wrong in these ways, it will seem not just morally permissible to take action against the speaker. It will feel obligatory.

Read more: No, you're not entitled to your opinion

But several concerns arise when we attach punitive consequences to peoples speech based on its perceived moral wrongfulness (as opposed to simply arguing it is mistaken or false).

First, claims of moral wrongfulness in a debate assume immediate urgency and distract from the debate itself. For example, let's say in a debate about immigration, one person says something that offends another. Discussion of the original issue (immigration) will be bracketed until the issue of moral wrongdoing (the perceived slight or offence) is resolved.

Second (except in obvious cases), claims about wrongfulness, offensiveness and harmfulness are all open to debate. As philosopher John Stuart Mill once observed:

The usefulness of an opinion is itself a matter of opinion: as disputable, as open to discussion, and requiring discussion as much, as the opinion itself.

Third, allegations of wrongdoing create heat. Few people respond constructively to allegations of wrongdoing. They often retaliate in kind, escalating the conflict.

In a less politicised environment, a contentious claim might be treated as a contribution to a debate to be considered on its merits. But in our current climate, the same claim creates only angry allegations flying in both directions. As a result, the claim isnt considered or debated.

If we think a persons view is wrong and immoral, we might suppose there is no great loss about a debate being derailed. But there are genuine ethical concerns here.

First, public deliberation is a source of legitimacy. The fact that different views are widely heard and inclusively considered provides a reason for accepting collective decisions.

Democracy itself assumes citizens can hear different arguments, evidence and perspectives. If significant parts of the political spectrum are no longer tolerated, then social institutions lose this important type of legitimacy.

Read more: Actually, it's OK to disagree. Here are 5 ways we can argue better

Second, listening to others with different opinions, and engaging with them, can help us understand their views and develop more informed versions of our own positions.

On the flip side, being consistently outraged by opposing viewpoints provides a ready reason not to consider them. This feeds directly into confirmation bias and group-think.

Third, shaming people can cause a persuasive boomerang to occur. When people feel others are trying to control them, they can become even more attached to the view others are trying to combat.

None of these concerns categorically rule out attaching punishing consequences to hateful or harmful speech. But they do imply the open letter has a point worth serious attention. Seeing mistaken views as intolerable speech carries genuine ethical costs.

Follow this link:

Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with - The Conversation AU