

The Old Media and the New Must Work Together to Preserve Free Speech Values – EFF

Posted: February 2, 2021 at 8:07 pm

EFF Civil Liberties Director David Greene delivered the following as a keynote address on March 6, 2020, at the Media Law and Policy in the Digital Age: Global Challenges and Opportunities symposium hosted by Indiana University's Center for International Media Law and Policy Studies and its Barbara Restle Press Law Project.

A few years ago, I was summoned to the office of an eminent TV journalist, one of those people commonly described as the dean of . . . something. He wanted me to come by, he said, because he had an idea to run by me. So I went.

After the small talk (we both had suffered the same back injury!), he ran his idea by me. This is a paraphrase: "We should bring back the Fairness Doctrine. And not just for broadcast news, but for all media, especially the Internet. Looking back, I think it made us better journalists." He was planning a conference and wanted this to be a major discussion point. In my memory, my jaw dropped cartoonishly all the way to the floor.

The Fairness Doctrine was a Federal Communications Commission rule that imposed fair reporting requirements on radio and television broadcasters. By "broadcasters," I (and the FCC) mean those entities that have a license to broadcast over a certain over-the-air frequency, as opposed to cable or satellite or now streaming services. It's the stuff you get for free if you just plug in a TV or radio with an antenna. The Fairness Doctrine had many facets. But the main one required broadcasters to devote time to discussing controversial matters of public interest, and then to air contrasting views as well. In some circumstances this could require the broadcaster to provide reply time to any person. The rule was in effect from 1949 until 1987. I'll talk more about it a little later.

As I said, I was taken aback by this eminent journalist's suggestion. I've been a First Amendment lawyer for 20+ years and have worked with and on behalf of journalists and news organizations for much of that time. During all that time, without exception, journalists considered the Fairness Doctrine to be a serious infringement on editorial discretion and freedom of the press in general. How could this person, whom I knew to be a champion of a free press, want to revive it and apply it to all news media?

So I responded that it was a terrible idea and probably unconstitutional. Needless to say, I was not invited to participate in his conference.

Unfortunately, this was not an aberration. I've seen it repeated in different forms ever since: news media advocates calling for regulation that would have until recently been seen as heretical to our established conceptions of a free press.

The cause, of course, is social media, and Internet platforms, and Big Tech.

But it's not that the advent and popularity of social media have adjusted our free press priorities. Rather, social media and the Internet in general have changed the business of news reporting. Legacy news media, especially print, are largely suffering financially, especially at the regional and local levels. And when they see certain social media companies (Facebook, Instagram, Twitter, Google, YouTube, Snapchat) thriving, they reach for ways to fight these intruders, for ways to level the playing field.

I completely understand the frustration that motivates this. I also fear a country with diminished or no local or regional reporting. I've seen that there is so much less money now to fund public records requests and court access litigation. Indeed, these lawsuits now often fall to nonprofit organizations like EFF. I subscribe to home delivery of two newspapers and a bunch of magazines.

But it's a huge mistake to let this despair lead us down a path of abandoning or weakening important free press principles and opening the door to the regulation of journalism. Especially when, as I will discuss toward the end of this talk, abandoning these principles won't actually help.

So my job here today is to convince you that the news media, all facets of it, from news gatherers and reporters to those who simply provide platforms for others to publish to those who simply suggest news reading to others, must stick together and remain unified champions of a free press. To do otherwise is far too dangerous, especially in the anti-press climate cultivated by the sitting Executive branch.

Over the past few years, I've noticed at least three formerly taboo regulatory threats being given some life by those who are otherwise free press champions.

I've already mentioned the Fairness Doctrine. So I'll start there. As I said earlier, the Fairness Doctrine required broadcasters to present contrasting views of any public controversy. The U.S. Supreme Court upheld the rule in 1969 in a case called Red Lion Broadcasting v. FCC, on the basis that the FCC was merely requiring the broadcaster to momentarily and occasionally share the license that the FCC had granted it. The Court stated, though, that it would reconsider that decision if it became clear that the doctrine was restraining speech (that is, that broadcasters were choosing to avoid discussing public controversies rather than being forced to present both sides of them).

Five years later, the Supreme Court made clear that a similar rule could not be imposed on newspapers. In that case, Miami Herald Publishing Co. v. Tornillo, the Court struck down a Florida right-of-reply law that required any newspaper that endorsed a candidate in an election to offer all opponents equal and equally prominent space in the newspaper to respond. The Court explained that such an intrusion into the editorial freedom of a newspaper was per se a violation of the First Amendment. And then in 1997, in Reno v. ACLU, the Supreme Court, in a different context, ruled that the Internet would be treated like print media for the purposes of the First Amendment, not like broadcast.

The FCC revoked the Fairness Doctrine in 1987 (although it formally remained on the books until 2011) after a few lower courts questioned its continuing validity and amid great unpopularity among Republicans in Congress. There are occasional Congressional or FCC-initiated attempts to bring it back (many blame its repeal for the advent of seemingly partisan news broadcasts like Fox News, even though the rule never applied to cable television), but none have been successful.

To bring back the Fairness Doctrine and then apply it to all media would mark a serious incursion on First Amendment rights.

I've seen a similar flip with respect to professional ethics: specifically, news media advocates urging the legal codification of their voluntary industry ethical standards, embodied in the ethical codes created by professional societies like the Society of Professional Journalists, the Radio and Television News Directors Association, and the National Press Photographers Association. This typically takes the form of calling for conditioning legal protections for online news production, distribution, aggregation, or recommendation services on following these ethical standards. For example, saying that Wikileaks should be subject to the Espionage Act because it does not follow such practices, while "ethical journalists" must be exempted from it.

These codes have always been very intentionally voluntary guidelines and not law for several good reasons.

First, ethics are inherently flexible principles that don't easily lend themselves to absolute rules, tend to be fact-intensive in application, and can vary greatly depending on a number of legitimate and worthy priorities. They are generally an ill fit for the bright lines we insist on for laws that limit speech.

Second, free press advocates have been rightfully concerned that transforming journalism's ethical codes into legal standards will only lead to vastly increased legal liability for journalists. This could happen both directly (by the codes being written into laws) and indirectly (by the codes becoming the "standard of care" against which judges would assess negligence). "Negligence," that is, the failure to act reasonably, is a common basis for tort liability. It is typically assessed with reference to a standard of care: the care a reasonable person would have exercised. Were ethical codes to become the standard of care, journalists could bear legal liability any time they failed to follow an ethical rule and, even worse, have to defend a lawsuit every time their compliance with an ethics rule was even in question. And they would then be held to a higher standard than non-journalists, who would only need to act as a "reasonable person" instead of as a "professional journalist."

Third, and perhaps most basically, this would be direct governmental regulation of the press, something antithetical to our free speech principles.

These all remain correct and relevant, and it remains a bad idea to give professional ethical codes the force of law or condition other legal protections on adherence to them.

The third flip I've seen, and this is probably the most common one, is a sudden embrace of republication liability. Republication liability is the idea that you are legally responsible for all statements you republish, even if you accurately quote the original speaker and attribute the statement to them. To have my students truly understand the implications of this rule (that is, to scare them), I like to discuss two examples.

In one case, Little v. Consolidated Publishing (Ala. App. 2010), a reporter attended a city council meeting. Her reporting on the meeting included an accurate quotation of a city council member, Spain, who at the meeting repeated rumors that one of his rival council members, Little, was in a personal relationship with a city contractor and thus pushed for her hiring, a move that was now being questioned. The article included another statement from Spain in which he said that if the rumors about Little were untrue, they would be very unfair to Little. The article also included Little's denial. Nevertheless, Little sued the newspaper for defamation. The court rejected the argument that the publication was true since the rumor was in fact circulating at the time. The court explained that "publication of libelous matter, although purporting to be spoken by a third person, does not protect the publisher," who is liable for what he publishes, and that it did not matter if in the same article the newspaper had decried the rumor as false.

In another case, Martin v. Wilson Publishing (R.I. 1985), a newspaper published an article about a real estate developer buying up historic properties in a small village. The article was generally supportive of the development and investment in the village, but explained that some residents were less than enthusiastic about the developer's plans and doubted his good intentions. The article then stated that some residents "stretch available facts when they imagine Mr. Martin is connected with the 1974 rash of fires in the village. Local fire officials feel that certain local kids did it for kicks." And the article further expressed doubts about the claims of arson. The developer sued, and the court found that the newspaper could be liable for this republication even though the rumors did in fact exist and even though the newspaper had reported that it believed they were false.

The republication liability rule apparently dates back to old English common law, the foundation of almost all US tort law. Originally it seems to have been a defense to accurately attribute the statement to the original speaker. But attribution hasn't helped a reporter since at least 1824, when English courts adopted the present rule, which was quickly adopted by US courts as well.

In my twenty or so years of teaching this stuff, republication liability is by far the most counter-intuitive thing I teach. Students commonly refuse to believe it's true. It leads to absurd results. Countless journalists ignore it and hope they don't get sued.

And it gets worse, or at least more complicated. Since at least 1837 (the earliest English case I could find), republication liability has been imposed not just on those who utter or put someone else's libelous words in print, but also on those who are merely conduits for libel reaching the audience. The 1837 case, Day v. Bream, imposed liability on a courier who delivered a box of handbills that allegedly contained libelous statements, unless he could prove that he did not know, and should not have known, of the contents of the box. Early cases similarly imposed knowledge-based liability on newsstands, libraries, and booksellers. The American version of this knowledge-based distributor liability is most commonly associated with the U.S. Supreme Court's 1959 decision in Smith v. California, which found that a bookseller could not be convicted of peddling obscene material unless it could be proven that the bookseller knew of the obscene contents of the book. Outside of criminal law, US courts imposed liability on distributors who simply should have known that they were distributing actionable content.

Given this, there developed two subcategories of republication liability: distributor liability, for those like booksellers, newsstands, and couriers who merely served as passive conduits for others' speech; and publisher liability, for those who engaged with the other person's speech in some way, whether by editing it, modifying it, affirmatively endorsing it, or including it as part of larger original reporting. For the former group, the passive distributors, there could be no liability unless they knew, or should have known, of the libelous material. The latter group, the publishers, were treated the same as the original speakers whom they quoted. Because one was treated a bit better as a passive distributor, the law actually disincentivized editing, curation, or reviewing content for any reason, and thus, some believed, encouraged bad journalism.

Historically, free press advocates have thus steadfastly resisted any expansion of republication liability. Indeed, they have jumped at any opportunity to limit it.

So why is this changing now?

It all started way back in the 1990s, when courts started to apply republication liability to early online communications services, bulletin boards, chat rooms, and even email forwarding. A New York Court found that the online subscription service, Prodigy, which had created a bulletin board called "Money Talk" for its users to share financial tips, was the publisher of an allegedly defamatory statement about the investment banking firm Stratton Oakmont (later immortalized in The Wolf of Wall Street) even though the comment was solely authored by a Prodigy user, and not edited by Prodigy. The court found that Prodigy was nevertheless a publisher, and not merely a distributor, because it (1) maintained community guidelines for users of its bulletin boards, (2) enforced the guidelines by selecting leaders for each bulletin board, and (3) used software to screen all posts for offensive content. This decision was in contrast to a previous decision, Cubby v. Compuserve, in which distributor liability was applied to Compuserve because it lacked any editorial involvement. (Compuserve had created a news forum but contracted out the creation of content to a contractor which then engaged a subcontractor, Rumorville.)

These holdings gave rise to major concerns about applying these print-world rules to online publication.

In order to address these concerns, Congress enacted 47 U.S.C. § 230, which essentially eliminates republication liability (both publisher and distributor liability) for much third-party speech. (There were two big exceptions: user speech that infringes intellectual property rights and user speech that violates federal criminal law.) Members of Congress acted on concerns that the unmanageable threat of liability would thwart the growth and wide adoption of the Internet and the development of new communications technologies within it. And those worried about sexual content wanted to remove all disincentives for an intermediary to take down content when it wanted to do so.

Section 230 has always been a bit controversial, and it has been firmly in the crosshairs of regulators angry about all things online these days. I'm not going to use more time here to go over those various attacks on the law. The point I want to make is that in the past few years, legacy news media advocates have joined the throngs blaming Section 230 for pretty much everything they see as wrong with the Internet: that is, pretty much anything they don't like about Facebook, above all the loss of advertising dollars that used to sustain newspapers.

Again, this is remarkable to me, because as I said, the press has always hated republication liability and sought to chip away at it. But it is now supporting efforts to chip away at some of the protections that are in place. Just a few months ago, the News Media Alliance, as part of a convening on Section 230 called by Attorney General Barr, called for reform of the immunity as part of a larger overhaul of the news media landscape. And this is important: the Section 230 protections apply to the news media when it publishes non-original content online, like reader comments, op-eds, or advertisements. Indeed, as I wrote a few months back, one of the most widely successful applications of Section 230 is to the online versions of legacy news media. And Section 230 also protects individual users when they forward email or maintain a community website. It's not a "tech company" immunity; it's a user immunity.

Moreover, it's largely assumed that online intermediaries, that is, those who transmit the speech of others, don't want to screen that speech for misinformation or other harmful speech. While it is true that some services adhere to an unmoderated pipeline model, it's more the case, especially with the big services like Facebook, YouTube, Twitter, etc., that services very much want to moderate content, but that monitoring and evaluating speech at the appropriate scale is impossible to do well. The vast majority of decisions are highly contextual close calls. This impossibility is exactly why Congress passed Section 230: faced with liability for making the wrong decision and republishing actionable speech, these intermediaries will err on the side of censorship. And that brand of censorship inevitably has greater impact on already marginalized speakers.

Each of these examples of abandonment of traditional free press principles is motivated by the same desire: to level the playing field between traditional news media and online services. That is, the news media now see their ethical and professional norms and legal burdens as giving them a market disadvantage against their competitors for advertising dollars, namely Facebook and Google. And they see the imposition of their norms and legal obligations on these competitors as a matter of fundamental fairness. They in effect want to make good journalism a legal requirement.

That's astounding. Free press advocates have historically recognized the need to defend even bad journalism (tabloids like the National Enquirer) against legal challenges, because they rightfully recognized that those who seek to weaken legal protections target the lowest-hanging fruit. And even if you look to defamation law as an example where good journalism gives you some legal advantage, free press advocates have rightfully argued that even if they can prove in court that their journalistic practices were solid, doing so is very expensive, and the prospect of doing so exerts a powerful chill on reporting.

And it is really dangerous to hand government the power to reward what it believes to be good journalism and punish what it believes to be the bad. Just imagine the havoc our last press-demeaning administration would wreak with such power. As it is, we have seen press libel suits by President Trump and Devin Nunes, and offhand threats to pull the nonexistent licenses of cable broadcasters.

We should be calling for more protections for speakers, writers, and their platforms now, not fewer. I understand that unlike with the Fairness Doctrine or ethics codes, legacy news media advocates aren't now claiming to love republication liability. Rather, they are saying: if we are burdened by it, then they should be too. But still, wouldn't it be better to level the playing field, as it were, by removing republication liability from everyone, rather than placing this nonsensical and counterproductive legal requirement on everyone?

As I said above, I understand this perceived unfairness and I am very concerned about the economic instability of our news media ecosystem. But I am also concerned about abandoning free press principles in the false hope that in doing so, we will reclaim some of that stability.

And I don't think it will help. I don't see a connection between the imposition of journalistic norms as legal requirements and the financial disruption to the news media marketplace. That is, I doubt that elevating good journalism to the force of law would help stabilize the marketplace.

There is no historical correlation between advertising income and quality of journalism. That is, advertisers don't, and never have, rewarded newspapers with advertising because of their journalistic prowess. Rather, newspapers used to have a functional monopoly over certain types of advertising. If an advertiser wanted an ad to reach most people's houses, they could either use direct mail or newspapers. Newspapers were especially effective for classified advertising, but also for car sales and other full-spread ads and inserts. Newspapers' stalwart sections of highly marketable news (sports, entertainment, national news) in effect supported local and investigative journalism that, standing alone, might not have been a draw for either readers or advertisers.

But seemingly overnight, Craigslist gutted the classified advertising market. It's not because Craigslist was a more righteous platform to advertise on; it's because a continuously updating online platform, with either targeted or broader reach, to which any person with an Internet connection can almost instantly add a listing, is just a far better way of advertising such things.

In many ways, and certainly for certain populations, the type of online advertising offered by Facebook and Google is simply a better deal for advertisers. They are not deceiving advertisers into thinking they are good journalists, and advertisers don't really care (nor do I) whether an online service is considered a publisher or a platform. It's a legally and practically irrelevant distinction. They just want effective advertising.

The hope, I think, is that enshrining good journalism into the law will either drive their advertising competitors out of business or burden them with costs that will make them less hugely profitable. At a minimum, it will just make us feel like the system is more fair. But none of that drives advertising dollars back to legacy news media.

(I'll acknowledge one exception: Section 230 means that online services can accept certain ads that print publishers could not, ones that are deceptive or misleading or discriminatory. But this is not a significant source of revenue.)

Moreover, the Internet is not just Facebook and Google, or a few other large and rich sites. It represents a huge number and variety of communications platforms, from the very, very local to the very global. And many of them are not hugely profitable. Many of them serve vital human rights functions, from connecting diaspora communities, to coordinating human rights reporting, to undermining communications bans in oppressive regimes. These are the sites and services that are threatened by the costs that good-journalism legal standards would impose. Those with lots of money, the very sites these efforts actually target, are the ones with the financial wherewithal to absorb them.

The non-economic reason for giving good journalism the force of law is more compelling to me, though not ultimately availing. Ellen Goodman, in her recently published paper for the Knight First Amendment Institute, writes of the policy need to re-introduce friction into digital journalism in order to restore the optimal signal-to-noise ratio: "signal" being information that is truthful and supportive of democratic discourse; "noise" being that which misinforms and undermines discursive potential. Journalism norms boost the signal and diminish the noise. Digital delivery of information is relatively frictionless, resulting in less filtering of noise. So, the argument goes, the imposition of good-journalism norms inserts productive friction into digital media.

I see the appeal of this and I understand the goals. Nevertheless, I would look to other methods outlined by Goodman to introduce friction (built-in delays, or limits on virality such as those WhatsApp imposed on itself) rather than placing in government's hands the setting and enforcement of journalistic norms, which is essentially government control of reporting itself.

Aside from what I see as the democratic threat posed by government adoption, and thus co-option, of good-journalism norms, there are also serious practical concerns.

And this is mostly because whereas a newspaper delivers almost only news, Internet media are typically far more diverse. Most Internet sites are multi-purpose: they may serve news and political advocacy. They may include journalists who have the luxury of attaching their own names to articles, and who have the resources to fact-check and lawyers to vet stories. But they may also include political dissidents who must remain pseudonymous, or dissident news organizations whose reporting is otherwise blocked in a country, or independent journalists, or community organizers. Or just the average Internet user sharing information with friends. Were good journalism to become the law, these speakers may lose their audiences. I don't think we want an Internet shrunk down to manageable scale, where user-created content is limited so that it is as manageable as the letters-to-the-editor page.

So, in closing, I urge us all to stay steadfast to our traditional distaste for government regulation of journalistic practice. Good journalism is certainly an ideal. It is an admirable quality to urge any media outlet to adopt and follow. The norms are important and should continue to be taught, not merely to avoid legal liability, but because they serve an important democratic function. But they are not law and should not be.


Letter to the editor: Silencing free speech by breaking the law not what Edmonds is about – My Edmonds News

Posted: January 31, 2021 at 7:14 am

Editor:

Recently, My Edmonds News reported on the vandalism to the art installation that changed key characteristics of the message into an entirely new message altogether. The public was appropriately appalled, and legal action was pursued. Oscar Wilde is quoted as saying, "I may not agree with you, but I will defend to the death your right to make an ass of yourself." Service members have given their lives defending this freedom. All the people of Edmonds should be mindful, respectful, and display the true meaning of decorum when it comes to defending every citizen's right to protected speech.

In local Facebook groups, discussion is abundant and emotionally charged about the theft of political signs and other signs reflecting individual beliefs. We might not agree with, or even understand, the messages on those signs, and we can become blinded by our own passion. The use of a sign is a way to provoke thought, declare a strong personal belief, or offer support. It is a sacred and protected right we as Americans enjoy and others in the world can only envy. Recently, many in Edmonds have had to resort to extreme measures to keep trespassers, enemies of free speech, and those who disagree from stealing yard signs. This week Councilmember Adrienne Fraley-Monillas used her position to make rhetorical and unproven statements during the Council Comments section of the Jan. 26, 2021 council meeting. I will fight to protect her right to express her thoughts. She is perfectly within her rights to say the things that she does, and residents of Edmonds are perfectly within their rights to display legal signage on private property to express their views.

A case in point. Recently, I was asked to watch a neighbors property while they were away. They had multiple legal signs on their private property. Over the course of seven days, trespassers stole private yard signs no fewer than four times. Fortunately, in two of the cases, I was able to secure photographs of these individuals. I promptly filed a police report and provided the photographs.

The people stealing these signs (and silencing protected speech) need to be prosecuted and serve their penalty.

Whether you want to display Black Lives Matter, Drop the Mike, Equity, Justice, We Choose Kindness, or I Like Turtles signs on your private property is your business. If you dislike a sign, think about why that is, and engage in a discussion with those you disagree with to understand their views and share yours. Using misleading speech to emotionally divide our community, as Councilmember Adrienne Fraley-Monillas is doing, is both wrong and dangerous.

One thing most of us have in common is our love for this city. We have done our best work when we share ideas, opinions, and respect. Members of this community crave having a voice and a meaningful role in their future. History has taught us what we become when we seek to silence those with whom we disagree.

George Bennett
Edmonds


The Great Deplatforming: Can Digital Platforms Be Trusted As Guardians of Free Speech? – ProMarket

Posted: at 7:14 am

Online social media platforms accepted from Congress, in 1996, the role of moderating content. The Great Deplatforming that occurred after January 6 was less a silent coup than a good-faith effort to purge online platforms of toxic content.

After former President Trump and many of the extremist followers he goaded were removed from a variety of online platformsmost notably Twitter, Facebook, YouTube, as well as Redditmany saw the subsequent silence as a welcome relief. But is that, as Luigi Zingales posits, an emotional reaction to a wrong that is being used to justify something that, at least in the long term, is much worse?

The Great Deplatforming (aka the Night of Short Fingers) has exposed the fact that a great deal of political discourse is occurring on private, for-profit internet platforms. Can these platforms be trusted as our guardians of free speech?

Recognizing that the Internet and other interactive computer services "offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity," and with the intent of "preserving the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation," Congress granted immunity in 1996 to "interactive computer services" (the equivalent of Twitter, Facebook, YouTube, and other online social media platforms) for any information published by the platforms' users. Without this immunity, social media platforms would simply not exist. The potential liability for the larger platforms arising from the content of the millions upon millions of posts would be far too great a risk.

Congress also granted social media platforms immunity for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." This provision gives platforms an incentive to moderate content to weed out objectionable and illegal content. It can be a vile world out there, as evidenced by some of the hateful, harmful, and obscene comments that are posted in online forums. As Cloudflare founder Matthew Prince noted, "What we didn't anticipate was that there are just truly awful human beings in the world." The situation has devolved to such an extent that experts now recognize that some human moderators suffer from post-traumatic stress disorder-like symptoms.

What do we do when the moderation we have encouraged social media platforms to conduct is applied to what some consider political speech? Many fear that platforms have difficulty differentiating between racist and/or extremist posts advocating violence against individuals or institutions based on political views, and simple political opinion. Indeed, Facebook's own executives acknowledge that the company's algorithm-generated recommendations were responsible for the growth of extremism on its platform. Concerns have also been raised that, with bad intent, some platforms refuse to make that differentiation in order to promote their executives' own political agendas. Social media platforms have also been accused of moderating such political speech with a bias toward a particular party.

One approach is to amend Section 230 of the Communications Decency Act, the law that provides internet social media platforms their immunity. Republicans introduced five bills in the 2019-2020 Congressional session calling for amendments to, as well as the full repeal of, Section 230. For example, Senator Josh Hawley (R-Mo.), claiming a lack of politically neutral content on social media platforms, sponsored the Ending Support for Internet Censorship Act. Under the bill as introduced, online social media platforms with 30 million or more active monthly users in the US (or 300 million or more active monthly users worldwide, or more than $500 million in global annual revenue) would have to obtain an immunity certification from the FTC every two years. The social media platform would be denied a certification and lose Section 230 immunity if it was determined to be moderating in a politically biased manner, which would include "disproportionately restricting or promoting access to, or the availability of, information from a political party, political candidate, or political viewpoint." As one article noted, Hawley wants to stop internet censorship by censoring the internet, not to mention regulating political speech.
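The bill's size thresholds can be read as a simple eligibility rule. The following is a toy sketch of those thresholds as described above; the type and function names are illustrative inventions, not anything drawn from the bill text.

```python
# Hypothetical sketch of the size thresholds in the Ending Support for
# Internet Censorship Act as introduced. Names are illustrative only.

from dataclasses import dataclass


@dataclass
class Platform:
    us_monthly_users: int       # active monthly users in the US
    global_monthly_users: int   # active monthly users worldwide
    global_annual_revenue: int  # global annual revenue, in US dollars


def needs_ftc_certification(p: Platform) -> bool:
    """A platform crossing any one threshold would have to obtain an
    immunity certification from the FTC every two years."""
    return (
        p.us_monthly_users >= 30_000_000
        or p.global_monthly_users >= 300_000_000
        or p.global_annual_revenue > 500_000_000
    )


# A small platform falls outside the bill's scope:
small = Platform(us_monthly_users=1_000_000,
                 global_monthly_users=5_000_000,
                 global_annual_revenue=10_000_000)
print(needs_ftc_certification(small))  # False
```

Under this reading, a platform the size of Twitter or Facebook clears all three thresholds, while smaller forums would keep their immunity without any certification process.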


In May 2020, President Trump signed an Executive Order claiming online platforms are invoking "inconsistent, irrational, and groundless justifications" to censor or otherwise restrict Americans' speech, and stating that online platforms should lose their immunity because, rather than removing objectionable content in good faith as required under the law, they are engaging in deceptive actions by stifling viewpoints with which they disagree. The Executive Order called on the FCC to propose new regulations to clarify immunity under Section 230 and for the FTC to investigate online platforms for deceptive acts. Both Tim Wu and Hal Singer have provided cogent arguments on ProMarket as to why these are doomed approaches.

Online moderation is enforced through each platform's terms of service (TOS), which each user must agree to before being able to post. After the January 6 siege of the US Capitol, the tech companies that deplatformed a large number of users, including former President Trump, and deleted tens of thousands of posts did so on the basis of users violating their respective TOS. For example, Amazon Web Services (AWS) stopped hosting Parler because of Parler's alleged violations of the AWS TOS, which include an Acceptable Use Policy. Parler's subsequent lawsuit against AWS could have served as a bellwether case for the application of TOS to regulate speech. Unfortunately, the case Parler presented is appallingly weak (one lawyer referred to it as a "lolsuit"). For example, the AWS-Parler hosting agreement clearly gives AWS the power to immediately suspend and terminate a client that has violated AWS's Acceptable Use Policy.

In its swift denial of a preliminary injunction in this case, the court also noted the lack of any evidence that Twitter and AWS acted together, either intentionally or at all, to restrain Parler's business.

Parler's causes of action continue with a claim that Twitter was given preferential treatment by AWS because similar content appeared on Twitter, for which Twitter's account was not suspended, while Parler's account was terminated. There are two issues with this assertion. First, we return to the role of moderation: Twitter actively (though, to some degree, imperfectly) moderates the content on its platform. In contrast, Parler's home page stated its users could "[s]peak freely and express yourself openly, without fear of being deplatformed for your views."

Even more damaging to Parler's assertion is the fact that the evidence in the case demonstrates that AWS doesn't even host Twitter on its platform and therefore did not have any ability to suspend Twitter's account even if it wanted to. But the Parler lawsuit amplifies a point made by one commentator: "If you are so toxic that companies don't want to do business with you, that's on you. Not them" (which would seem to apply to Zingales's justification for Simon & Schuster's cancellation of Senator Josh Hawley's book deal).

In denying Parler's request for a preliminary injunction that would order AWS to restore service to Parler pending the full hearing, the court rejected any suggestion that the public interest favors requiring AWS to host "the incendiary speech that the record shows some of Parler's users have engaged in." While this ruling does not end the case, it does substantiate the weakness of at least some of Parler's arguments.


Although the AWS-Parler case before the court will ultimately resolve this particular dispute, the underlying issues will remain far from resolved regardless of the outcome. The explosion of social media over the last ten years, and its supplanting of traditional media to a large degree, has created a new and untested playing field for public discourse. Some of the issues raised are similar in scope, if not size, to issues our courts have dealt with in the past. The corporate owners of the social media platforms are permitted in our free enterprise system to set the terms under which content may be posted or removed from the platforms. Because the platforms are non-government actors, the First Amendment's freedom of speech protections do not apply to the speech of a private company, a fact confirmed by the court in denying Parler's preliminary injunction. The audience of the social media platforms at issue, however, has grown exponentially. While deplatforming will, at least temporarily, silence those voices promoting violence on specific platforms, the long-lasting implications are less clear.

While it is true that Trump lost the popular vote in the 2020 election by 7 million votes, the fact remains that 74.2 million Americans did cast their votes for Trump. Once Twitter permanently banned Trump from its platform, and many surmised that he would join the less restrictive Parler, nearly one million people downloaded the Parler app from Apple and Google before it was removed from those stores and Parler was suspended from AWS. Moving the conversation off of mainstream social media and driving it into less balanced platforms whose subscribers are more homogeneous (much to the dismay of even Parler's CEO, John Matze) encourages an echo chamber of ideas in smaller encrypted platforms that are more difficult to monitor, and potentially amplifies the most angry and passionate voices.

If the touted exodus of conservatives from Twitter to other platforms that they view as more welcoming comes to fruition, could our public discourse become even more divided, with opposing viewpoints feeding upon their own biases rather than being tempered by responses and dialogue with each other? An informed public exposed to conflicting opinions is the best chance for resolving political differences. These issues are important and warrant further discussion, but the termination of Parler from the AWS platform does not, by itself, seem to pose a heightened threat to public discourse. There are alternatives to AWS, and in fact Parler already seems to have found one that will bring it back. While there is a significant amount of talk of conservatives leaving Twitter, there seems to be little evidence that this has happened on a large scale. For the moment, at least, the largest social media platforms seem to be retaining users on both sides of the political spectrum, which we believe is good for democracy.



The free speech row tearing apart the tech community – Spectator.co.uk

Posted: at 7:14 am

Donald Trump's Twitter suspension after the riot at the US Capitol made headlines around the world. What was less reported, however, was that as the then-President was suspended, so too were tens of thousands of right-wing accounts. Their social media refuge was Parler, another micro-blogging platform.

Parler markets itself as a free speech-focused and unbiased alternative to mainstream social networks. Whatever its intentions, in recent years the platform has become a cesspit of extremist content. So extreme, in fact, that Amazon banned Parler from its hosting services earlier this month. The case is now going through the courts, after Parler launched a lawsuit.

What makes Parler an interesting case is that there was initially speculation it could be hosted on the blockchain networks used by cryptocurrencies such as Bitcoin (although Parler has since opted for a more conventional hosting system). For the uninitiated, blockchain is a special type of computer database for cryptocurrency transactions. It functions like a ledger and can record transactions between parties in a way that's verifiable, permanent and, most importantly, anonymous.
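The ledger idea described above can be made concrete with a few lines of code. The following is a minimal toy sketch of a hash-chained ledger, illustrating only the verifiability and permanence properties; real blockchain networks add consensus, digital signatures, and peer-to-peer replication, none of which appear here.

```python
# Toy hash-chained ledger: each block stores the hash of the previous
# block, so altering any earlier entry breaks every later link.

import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def append_block(chain: list, data: str) -> None:
    # Link the new block to the hash of the current chain tip.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})


def verify(chain: list) -> bool:
    # Recompute each link; any tampering is detected downstream.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True


chain: list = []
append_block(chain, "alice pays bob 1 coin")
append_block(chain, "bob pays carol 1 coin")
print(verify(chain))                       # True
chain[0]["data"] = "alice pays bob 100 coins"
print(verify(chain))                       # False: history was rewritten
```

This is why a service hosted on such a network is so hard to take down or edit after the fact: the record itself resists retroactive modification, for good and for ill.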

So what unites an obscure social media platform dedicated to free speech and the fiendishly complicated databases used by cryptocurrencies? In short, libertarian politics. These networks do not have any top-down control, whether from governments or internet monitors; that is their appeal. If Parler moved its network to a blockchain, it would ensure its ability to survive without interference from Amazon or any government.

The problem is that if a service like Parler was hosted on such a network, messages that incite violence would be pretty much impossible to attribute. Funds could move to finance bombings, murders or other terrorist activities while remaining undetected by the authorities.

Indeed, fears of a far-right insurgency have spooked some big players in the cryptocurrency world. Vinay Gupta, who helped launch Ethereum, a major Bitcoin rival, responded in no uncertain terms to suggestions that Parler should move to Ethereum's network: "You will regret it. We will collude against you. We will make your lives miserable. We will seriously figure out underhanded-yet-ethical ways to make your project fail if you force us to host it. Please, kindly, fuck off and build your own infrastructure. You are not wanted here."

His belief that Ethereum should under no circumstances allow itself to become a haven for the far-right goes beyond political preferences: "If these people move on to our platform, we will be knee deep in terrorism financing lawsuits in 18 months. It is critical for the survival of the open internet that we are not a safe haven for fascists. Let me explain the logic... If Parler was built on Ethereum instead of AWS [Amazon's network] right now we would be losing all of the non-blockchain infrastructure the Ethereum community depends on: websites, exchange licenses, bank accounts, IPFS infrastructure, Infura," he tweeted.

Gupta's fears are far from unfounded. Reuters reported on 14 January that payments in Bitcoin worth more than $500,000 were made to 22 different virtual wallets, most of them belonging to far-right activists and internet personalities, before the storming of the US Capitol. There is a real danger that a blockchain network could find itself complicit in acts that might fall under terrorism legislation. But there are those who disagree with Gupta's approach. Ethereum co-founder Vitalik Buterin has claimed that Parler has a right to exist, full stop.

At the heart of the matter is a conflict that has been raging in the tech world for years: how much freedom should there be online? This debate has particularly affected the crypto and hacking communities. Defcon, one of the world's largest hacker conventions, effectively told alt-right agitators they weren't welcome at its events in 2018. In the same year, Hope, another conference, was accused of refusing to remove fascists and white nationalists.

It's not just in the US either: at CCC, a conference organised every December in Leipzig, an actual physical fight reportedly broke out between left-wing organisers and attendees accused of alt-right sympathies in 2019.

What comes up again and again, from Parler being shut down to Trump being banned from Twitter, is the tension between the right to free speech and the potential of these technologies to cause real-life harm. It's about the threat that the far-right could pose if they had the expertise to utilise technologies like blockchain networks, and, furthermore, how much of a presence the alt-right already has in the tech community.

Gupta tells me he believes the tension has arisen online because the libertarians in the crypto-community look at the new left, the woke left, and what they see is Stalinism. At the same time:

The left hates libertarians because they're individualist and don't care about political correctness or the idea of society. So the libertarian position, that we are not responsible for what other people do, we are only responsible for what we do, is the standard Bitcoin political doctrine... Libertarians will say you want to impose censorship on the blockchain, just like you imposed censorship on the internet. What they don't understand is that if we end up hosting that kind of content, the feds are going to shut us down. Fundamentally, the clash with a sovereign national state is one that the blockchain cannot survive.

What is clear, as we have seen over the last month, is that the far-right can thrive in the space created by the (admirable, if at times naive) defence of absolute free speech. And it will fall on communities, like the blockchain one, to find a way to navigate these choppy waters.

The stakes are real: if self-regulation fails, nation states will start imposing their sovereignty over parts of the internet, in effect Balkanising it. The era of the free internet, where information flows (mostly) seamlessly, would be over. In its place would be a version of the internet that more closely resembles those of countries like Russia and China: heavily censored and subject to the whims of politicians. It would be a sad end for one of the greatest experiments in history.



Sanctifying sentiment? Bail must be overriding principle in free speech cases unless theres incitement to v – The Times of India Blog

Posted: at 7:14 am

The Supreme Court's refusal to grant interim protection from arrest to the cast and crew of Tandav, who face FIRs in multiple states for allegedly hurting religious sentiments, doesn't help the cause of free speech. The court observed that the right to free speech is not absolute and cannot come at the cost of hurting the rights of others. Hurt to sentiments, however, is a slippery slope that imparts a chilling effect on any exercise of the right to free speech. If Article 19(1) of the Constitution protects citizens' fundamental right to free speech, it is noteworthy that hurt to sentiment doesn't figure among the reasonable restrictions on free speech in Article 19(2).

Hurt to sentiments is subjective in the sense that what is entertainment or information or a legitimate belief to one person could be deeply disturbing or falsehood or blasphemy to another. Unless a clear-cut offence that endangers public order by inciting imminent violence is committed, the state and its organs like police and courts will ordinarily have a tough time adjudicating free speech cases amid such subjective biases. Clearly, the Tandav makers have committed no such offence. India's free speech laws then necessarily command authorities to exercise forbearance in such cases.

The tactic of police filing cases in multiple states is a deadly and effective instrument of state harassment. Few individuals have the resources or temperament to withstand such pressure. In SC's glorious precedents lies its equally effective antidote: asking the offence industry to refrain from seeing what it cannot stomach. Lawyers for the Tandav cast and crew invoked the SC relief granted in similar situations to journalists Arnab Goswami and Amish Devgan. Both hyper-aggressive practitioners of their free speech rights were granted protection from arrest, and rightly so, effectively ending witch hunts against them.

The Tandav petitioners have an uphill task, approaching each high court under whose jurisdiction FIRs have been filed to seek protection from arrest. They had apologised and deleted the offending portions, hoping to mollify the culture censors. A similarly apologetic Munawar Faruqui, still in jail for jokes he didn't crack, has suffered three bail rejections. A single-judge MP high court bench has raised the possibility of the investigation coughing up more incriminating evidence. Treating Faruqui's jokes as disqualifying him for bail conflicts with free speech protections under Article 19(1). All courts must promptly call the police bluff, not let it play on much longer.

This piece appeared as an editorial opinion in the print edition of The Times of India.




LETTER: Free speech protects us all | Archives | sanfordherald.com – The Sanford Herald

Posted: at 7:14 am

To the Editor:

The First Amendment giving us freedom of speech was not intended to just protect those you agree with. It protects all of us and all our differences.

Sherry Womack, a school board member, attending a gathering in Washington, D.C., has nothing to do with her position on the school board. It should have no impact unless she committed some sort of crime. I see no evidence of that.

Seems to me we have bigger fish to fry than attacking a veteran and patriot committing herself to her community as a school board member. Let's focus instead on raising the quality of education in our area, improving test scores, raising up the schools and students to higher levels of performance. Now that's something worth fighting for!

Lynn Goldhammer

Pinehurst



Our View: Protect free speech, the ‘dread of tyrants’ – Duluth News Tribune

Posted: January 23, 2021 at 6:20 am

Modern America is a starkly changed and different place, and the recent canceling actions of media giants Facebook, Twitter, Amazon, and others have made it clear: when it comes to the power to dangerously prohibit or restrict speech and the free exchange of ideas and viewpoints, the government is far from the only entity with the size and capability to be a threat.

"There's another principle going on here. That is censorship, the silencing and restriction of the exchange of information," Minnesota Newspaper Association attorney Mark Anfinson, a First Amendment and press-freedom expert, said in phone interviews this week with the News Tribune Opinion page. "I frankly am very troubled by what Silicon Valley has done, and I'm troubled by the lack of indignation on the part of the news media about it. It isn't censorship by the government, but it's still censorship. The effect is the same because of the power of these big tech-media entities. It's arguably worse."

As true in revolutionary times as now, "the debate on public issues should be uninhibited, robust, and wide open; and it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials," Anfinson also said, sounding every bit the journalism and communications instructor he has been for 15 years at the University of St. Thomas in St. Paul. "Facebook, Twitter, Amazon, and others are grotesquely interfering with that principle. They're ignoring it, and they're desecrating the tradition of American free speech by doing so."

The emergence of unchecked and all-powerful social-media corporations reminds the Minneapolis lawyer of Europe in 1450 after the invention of the Gutenberg press, the first commercial printing machine. Suddenly, people were able to share information widely and rapidly. It utterly transformed society, Anfinson said. New rules and laws were needed to safeguard against the abuses of this newfound power and to compel its responsible use.

Then, like now, the new rules and laws were slow to catch up with the technology. In these modern times, while newspapers and other traditional media adhere to journalistic ethics and norms like fairness and accuracy, social media remains a Wild West. Its refusal to monitor or take responsibility for its content is as irresponsible and dangerous as its more recent moves to muzzle those with whom it doesn't agree by banning them altogether and to take offline platforms like Parler that cater to differing views.

"It's a threat to the very dissemination of information," Anfinson said. "What they did to Parler is one of the most scandalous desecrations of free-speech philosophy I have ever seen."

The answer to distasteful speech has always been more speech: accurate and reliable information to counter the propaganda, lies, and misinformation used by those who are dishonest or who only hunger for power.

Americans can call on Congress and the social-media power players of today to begin a robust and respectful conversation and to hammer together and settle on rules and laws that help to ensure accountability, discourage irresponsible censorship, and protect free speech.

Abolitionist Frederick Douglass said in 1860 that free speech is the dread of tyrants. That means protecting free speech is the counter to tyranny, whether the oppressor is the government, a powerful industry or corporation, or another entity threatening to stifle our free flow of ideas, information, and viewpoints. Free speech must be protected at all costs.

OVERHEARD:

Beware those attempting to control the narrative

History taught the Founders that if you allow anybody any authority to select what people get to hear or see or read, society will go over the cliff, because truth will be suppressed and the people who always have a lust and hunger for power will be given greater power. They will control the narrative. That's what we're facing now.

I support carving out special laws and rules that apply to these gigantic social-media companies and imposing on them duties and obligations comparable to what are imposed on government by the First Amendment. ...

Facebook, Twitter, and these other entities are, for most people, the main conduit of expression, including expression of a political matter. Anybody who thinks that (Facebook CEO) Mark Zuckerberg or (Twitter CEO) Jack Dorsey or the weirdos who are sitting somewhere in a building in California (working) for Google's content division, anyone who thinks that over a long period of time they're going to make all good decisions, the right decisions, and protect the rest of us from bad thoughts is ignorant of history. What they will do is gradually acquire more and more power for themselves by controlling the narrative, as the autocrats of history have always done. They've always sought to control information. It doesn't matter if the government is doing it or if a private powerful entity is doing it. The results are eventually going to be the same.

Minneapolis lawyer Mark Anfinson, a free speech and First Amendment expert, in telephone interviews this week with the News Tribune Opinion page.



What Zoom Does to Campus Conflicts Over Israel and Free Speech – The New York Times

Posted: at 6:20 am

Back home in New Jersey, she enrolled in self-defense classes and bought a Taser for security.

In September, N.Y.U. settled Ms. Cojab's complaint with the Office for Civil Rights, outlining steps to address anti-Semitism on campus, as defined in the president's executive order. But the school did not concede any wrongdoing, nor mention the section of the executive order citing examples of anti-Israel speech as anti-Semitic.

In the meantime, the conflicts continue, with or without students on campus. Universities are left to muddle in the middle, to balance irreconcilable imperatives.

Columbia's president, Lee Bollinger, reaffirmed the school's commitment to free speech but vowed to disregard the student referendum on divestment. N.Y.U.'s president, Andrew D. Hamilton, expressed consternation to Zoom over its cancellation of the webinar with Ms. Khaled, but he also chided the professors who sponsored it.

For now, though, the virtual campus makes it easy not to listen to one another, to refuse to normalize an opposing point of view. Instead, both sides dig into their own moral narratives, said Kenneth S. Stern, the director of the Center for the Study of Hate at Bard College in Annandale-on-Hudson, N.Y., who was the lead drafter in the group that created the working definition of anti-Semitism invoked in Mr. Trump's executive order. Mr. Stern said the definition was meant principally for data gathering, not regulating campus debate.

The reality is that both arguments are true, and to understand the issue you have to not just pick one side and battle against the other, you have to say that both people have indigenous claims, and one can make the case, from the Jewish perspective, that of course we've always been there, and the Palestinians can say, we've been here for a long time and we're indigenous. Both of those things are true.

The history is messy, he said, with justice on both sides, and injustice on both sides.

Even without remote learning, students have little incentive to see the other view and strong support for hardening their own sides.

Mr. Stern said, mildly, That makes conversations very difficult.



Why Trumps Twitter ban isnt a violation of free speech: Deplatforming, explained – Vox.com

Posted: at 6:20 am

Within days of the January 6 Capitol insurrection, outgoing President Donald Trump's internet presence was in upheaval. Trump's social media accounts were suspended across Facebook, Twitter, YouTube, Instagram, Snapchat, Twitch, and TikTok.

The same was true for many of Trump's more extremist followers. Twitter suspended more than 70,000 accounts primarily dedicated to spreading the false right-wing conspiracy theory QAnon. Apple, Google, and Amazon Web Services banned the right-wing Twitter alternative Parler, effectively shutting down the site indefinitely (though it's attempting to return) and relegating many right-wingers to the hinterlands of the internet.

Permanently revoking users' access to social media platforms and other websites, a practice known as "deplatforming," isn't a new concept; conservatives have been railing against it and other forms of social media censure for years. But Trump's high-profile deplatforming has spawned new confusion, controversy, and debate.

Many conservatives have cried censorship, believing they've been targeted by a collaborative, collective agreement among leaders in the tech industry in defiance of their free speech rights. On January 13, in a long thread about the site's decision to ban Trump, Twitter CEO Jack Dorsey rejected that idea. "I do not believe this [collective deplatforming] was coordinated," he said. "More likely: companies came to their own conclusions or were emboldened by the actions of others."

Still, the implications for free speech have worried conservatives and liberals alike. Many have expressed wariness about the power social media companies have to simply oust whoever they deem dangerous, while critics have pointed out the hypocrisy of social media platforms spending years bending over backward to justify not banning Trump despite his posts violating their content guidelines, only to make an about-face during his final weeks in office. Some critics, including Trump himself, have even floated the misleading idea that social media companies might be brought to heel if lawmakers were to alter a fundamental internet law called Section 230, a move that would instead curtail everyone's internet free speech.

All of these complicated, chaotic arguments have clouded a relatively simple fact: deplatforming is effective at rousting extremists from mainstream internet spaces. It's not a violation of the First Amendment. But thanks to Trump and many of his supporters, it has inevitably become a permanent part of the discourse involving free speech, social media moderation, and the responsibilities that platforms can and should have to control what people do on their sites.

We know deplatforming works to combat online extremism because researchers have studied what happens when extremist communities get routed from their homes on the internet.

Radical extremists across the political spectrum use social media to spread their messaging, so deplatforming those extremists makes it harder for them to recruit. Deplatforming also decreases their influence; a 2016 study of ISIS deplatforming found, for example, that ISIS influencers lost followers and clout as they were forced to bounce around from platform to platform. And when was the last time you heard the name Milo Yiannopoulos? After the infamous right-wing instigator was banned from Twitter and his other social media homes in 2016, his influence and notoriety plummeted. Right-wing conspiracy theorist Alex Jones met a similar fate when he and his media network Infowars were deplatformed across social media in 2018.

The more obscure and hard to access an extremist's social media hub is, the less likely mainstream internet users are to stumble across the group and be drawn into its rhetoric. That's because major platforms like Facebook and Twitter generally act as gateways for casual users; from there, they move into the smaller, more niche platforms where extremists might congregate. If extremists are banned from those major platforms, the vast majority of would-be recruits won't find their way to those smaller niche platforms.

Those extra hurdles of added obscurity and difficulty of access also apply to the in-group itself. Deplatforming disrupts extremists' ability to communicate with one another, and in some cases creates a barrier to continued participation in the group. A 2018 study tracking a deplatformed British extremist group found that not only did the group's engagement decrease after it was deplatformed, but so did the amount of content it published online.

"Social media companies should continue to censor and remove hateful content," the study's authors concluded. "Removal is clearly effective, even if it is not risk-free."

Deplatforming impacts the culture of both the platform that's doing the ousting and the group that gets ousted. When internet communities send a message of zero tolerance toward white supremacists and other extremists, other users also grow less tolerant and less likely to indulge extremist behavior and messaging. For example, after Reddit banned several notorious subreddits in 2015, leaving many toxic users no place to gather, a 2017 study of the remaining communities on the site found that hate speech decreased across Reddit.

That may seem like an obvious takeaway, but it perhaps needs to be repeated: The element of public shaming involved in kicking people off a platform reminds everyone to behave better. As such, the message of zero tolerance that tech companies sent by deplatforming Trump is long overdue in the eyes of many, such as the millions of Twitter users who spent years pressuring the company to ban the Nazis and other white supremacists whose rhetoric Trump frequently echoed on his Twitter account. But it is a welcome message nonetheless.

As for the extremists, the opposite effect often takes place. Extremist groups have typically had to sand off their more extreme edges to be welcomed on mainstream platforms. So when that still isn't enough and they get booted off a platform like Twitter or Facebook, wherever they go next tends to be a much laxer, less restrictive, and, well, more extreme internet location. That often changes the nature of the group, making its rhetoric even more extreme.

Think about alt-right users getting booted off 4chan and flocking to even more niche and less moderated internet forums like 8chan, where they became even more extreme; a similar trajectory happened with right-wing users fleeing Twitter for explicitly right-wing-friendly spaces like Gab and Parler. The private chat platform Telegram, which rarely steps in to take action against the many extremist and radical channels it hosts, has become popular among terrorists as an alternative to more mainstream spaces. Currently, Telegram and the encrypted messaging app Signal are gaining waves of new users as a result of recent purges at mainstream sites like Twitter.

The more niche and less moderated an internet platform is, the easier it is for extremism to thrive there, away from public scrutiny. Because fewer people are likely to frequent such platforms, they can feel more insular and foster ideological echo chambers more readily. And because people tend to find their way to these platforms through word of mouth, they're often primed to receive the ideological messages that users on the platforms might be peddling.

But even as extreme spaces get more extreme and agitated, there's evidence to suggest that depriving extremist groups of a stable and consistent place to gather can make the groups less organized and more unwieldy. As a 2017 study of ISIS Twitter accounts put it, "The rope connecting ISIS's base of sympathizers to the organization's top-down, central infrastructure is beginning to fray as followers stray from the agenda set for them by strategic communicators."

Scattering extremists to the far corners of the internet essentially forces them to play online games of telephone regarding what their messaging, goals, and courses of action are, and contributes to the group becoming harder to control, which makes it more likely to be diverted from its stated cause and less likely to be corralled into action.

So far, all of this probably seems like a pretty good thing for the affected platforms and their user bases. But many people feel wary of the power dynamics in play, and question whether a loss of free speech is at stake.

One of the most frequent arguments against deplatforming is that it's a violation of free speech. This outcry is common whenever large communities are targeted based on the content of their tweets, like when Twitter finally did start banning Nazis by the thousands. The bottom line is that social media purges are not subject to the First Amendment rule that protects Americans' right to free speech. But many people think social media purges are akin to censorship, and it's a complicated subject.

Andrew Geronimo is the director of the First Amendment Clinic at Case Western Reserve law school. He explained to Vox that the reason there's so much debate about whether social media purges qualify as censorship comes down to the nature of social media itself. In essence, he told me, websites like Facebook and Twitter have replaced more traditional public forums.

"Some argue that certain websites have gotten so large that they've become the de facto public square," he said, "and thus should be held to the First Amendment's speech-protective standards."

In an actual public square, First Amendment rights would probably apply. But no matter how much social media may resemble that kind of real space, the platforms and the corporations that own them are, at least for now, considered private businesses rather than public spaces. And as Geronimo pointed out, "A private property owner isn't required to host any particular speech, whether that's in my living room, at a private business, or on a private website."

"The First Amendment constrains government power, so when private, non-governmental actors take steps to censor speech, those actions are not subject to constitutional constraints," he said.

This distinction is confusing even to the courts. In 2017, while ruling on a related issue, Supreme Court Justice Anthony Kennedy called social media "the modern public square," noting that "a fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more." And while social media can seem like a place where few people have ever listened or reflected, it's easy to see why the comparison is apt.

Still, the courts have consistently rejected free speech arguments in favor of protecting the rights of social media companies to police their sites the way they want to. In one 2019 decision, the Ninth Circuit Court of Appeals cited the Supreme Court's assertion that "merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints." The courts generally reinforce the rights of website owners to run their websites however they please, which includes writing their own rules and booting anyone who misbehaves or violates those rules.

Geronimo pointed out that many of the biggest social media companies have already been enacting restrictions on speech for years. "These websites already ban a lot of constitutionally protected speech: pornography, hate speech, racist slurs, and the like," he noted. "Websites typically have terms of service that contain restrictions on the types of speech, even constitutionally protected speech, that users can post."

But that hasn't stopped critics from raising concerns about the way tech companies removed Trump and many of his supporters from their platforms in the wake of the January 6 riot at the Capitol. In particular, Trump himself claimed a need for Section 230 reform, that is, reform of the pivotal clause of the Communications Decency Act that basically allows the internet as we know it to exist.

Known as the "safe harbor" rule of the internet, Section 230 of the 1996 Communications Decency Act is a pivotal legal clause and one of the most important pieces of internet legislation ever created. It holds that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Simply put, Section 230 protects websites from being held legally responsible for what their users say and do while using said websites. It's a tiny phrase but a monumental concept. As Geronimo observed, Section 230 "allows websites to remove user content without facing liability for censoring constitutionally protected speech."

But Section 230 has increasingly come under fire from Republican lawmakers seeking to more strictly regulate everything from sex websites to social media sites where conservatives allege they are being unfairly targeted after their opinions or activities get them suspended, banned, or censured. These lawmakers, in an effort to force websites like Twitter to allow all speech, want to make websites responsible for what their users post. They seem to believe that altering Section 230 would force the websites to then face penalties if they censored conservative speech, even if that speech violates the websites' rules (and despite several inherent contradictions). But as Recode's Sara Morrison summed up, messing with Section 230 creates a huge set of problems:

This law has allowed websites and services that rely on user-generated content to exist and grow. If these sites could be held responsible for the actions of their users, they would either have to strictly moderate everything those users produce, which is impossible at scale, or not host any third-party content at all. Either way, the demise of Section 230 could be the end of sites like Facebook, Twitter, Reddit, YouTube, Yelp, forums, message boards, and basically any platform that's based on user-generated content.

So, rather than guaranteeing free speech, restricting the power of Section 230 would effectively kill free speech on the internet as we know it. As Geronimo told me, any government regulation that would force web companies to carry certain speech "would come with significant First Amendment problems."

However, Geronimo also allows that just because deplatforming may not be a First Amendment issue doesn't mean that it's not a free speech issue. "People who care about free expression should be concerned about the power that the largest internet companies have over the content of online speech," he said. "Free expression is best served if there are a multitude of outlets for online speech, and we should resist the centralization of the power to censor."

And indeed, many people have expressed concerns about deplatforming as an example of tech company overreach, including the tech companies themselves.

In the wake of the attack on the Capitol, a public debate arose about whether tech and social media companies were going too far in purging extremists from their user bases and shutting down specific right-wing platforms. Many observers have worried that the moves demonstrate too much power on the part of companies to decide what kinds of opinions are sanctioned on their platforms and what aren't.

"A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same," Twitter's Jack Dorsey stated in his self-reflective thread on banning Trump. He went on to express hope that a balance between over-moderation and deplatforming extremists can be achieved.

This is by no means a new conversation. In 2017, when the web service provider Cloudflare banned a notorious far-right neo-Nazi site, Cloudflare's president, Matthew Prince, opined on his own power. "I woke up this morning in a bad mood and decided to kick them off the Internet," he wrote in a subsequent memo to his employees. "Having made that decision we now need to talk about why it is so dangerous. [...] Literally, I woke up in a bad mood and decided someone shouldn't be allowed on the Internet. No one should have that power."

But while Prince was hand-wringing, others were celebrating what the ban meant for violent hate groups and extremists. And that is really the core issue for many, many members of the public: When extremists are deplatformed online, it becomes harder for them to commit real-world violence.

"Deplatforming Nazis is step one in beating far right terror," antifa activist and writer Gwen Snyder tweeted, in a thread urging tech companies to do more to stop racists from organizing on Telegram. "No, private companies should not have this kind of power over our means of communication. That doesn't change the fact that they do, or the fact that they already deploy it."

Snyder argued that conservatives' fear of being penalized for the violence and hate speech they may spread online ignores that penalties for that offense have existed for years. What's new is that now the consequences are being felt offline and at scale, as a direct result of the real-world violence that is often explicitly linked to the online actions and speech of extremists. The free speech debate obscures that reality, but it's one that the social media users who are most vulnerable to extremist violence, people of color, women, and other marginalized communities, rarely lose sight of. After all, while people who've been kicked off Twitter for posting violent threats or hate speech may feel like they're the real victims here, there's someone on the receiving end of that anger and hate, sometimes even in the form of real-world violence.

The deplatforming of Trump already appears to be working to curb the spread of election misinformation that prompted the storming of the Capitol. And while the debate about the practice will likely continue, it seems clear that the expulsion of extremist rhetoric from mainstream social media is a net gain.

Deplatforming won't single-handedly put a stop to the spread of extremism across the internet; the internet is a big place. But the high-profile banning of Trump and the large-scale purges of many of his extremist supporters seem to have brought about at least some recognition that deplatforming is not only effective, but sometimes necessary. And seeing tech companies attempt to prioritize the public good over extremists' demand for a megaphone is an important step forward.

More here:
Why Trump's Twitter ban isn't a violation of free speech: Deplatforming, explained - Vox.com


We still give license to expensive speech in name of free speech – The Boston Globe

Posted: at 6:20 am

Re "Money (and the lack of it) talks," Shirley Leung's Jan. 12 front-page commentary: Corporate campaign contributions have been blessed by the Supreme Court as free speech and justified as a legitimate expression of corporate values. But they're fundamentally different from the small contributions we make to candidates who reflect our values and aspirations. Big money is about a different set of values and aspirations: influence and control. To ignore this, as the Supreme Court's Citizens United decision did, is to give license to expensive speech in the name of free speech.

If withholding money from the worst actors in Congress surrounding the events of Jan. 6 is all that we do, shame on us. We need an examination of the role that money plays in politics, and that role is corrosive.

Basic fact: To serve in Congress, you need to spend hours daily reaching out to potential donors. Both parties have call centers (the sites were recently targeted for pipe bombs near the Capitol) so that their members can take a break from being our voices in Washington to fuel campaigns and to gain influence because of the size of their war chest.

Elections require an end to the pay-to-play disenfranchisement of the many by the few. Elections are a public good worthy of protection from those who would bend it to their interests.

Jay Kaufman

Lexington

The writer is a former member of the Massachusetts House of Representatives and founding president of the nonprofit Beacon Leadership Collaborative.

Link:
We still give license to expensive speech in name of free speech - The Boston Globe

