Niall Ferguson is the Milbank Family Senior Fellow at the Hoover Institution at Stanford University and a Bloomberg Opinion columnist. He was previously a professor of history at Harvard, New York University and Oxford. He is the founder and managing director of Greenmantle LLC, a New York-based advisory firm.
When talking among themselves, Silicon Valley big shots sometimes say weird things. In an internal presentation in March 2018, Google executives were asked to imagine their company acting as a "Good Censor," in order to limit the impact of users "behaving badly."
In a 2016 internal video, Nick Foster, Google's head of design, envisioned a "goal-driven ledger" of all users' data, endowed with its own "volition or purpose," which would nudge us to take decisions (say, about shopping or travel) that would reflect "Google's values as an organization."
If that doesn't strike you as weird, like dialogue from some dystopian science-fiction novel, then you need to read more dystopian science fiction. (Start with Yevgeny Zamyatin's astonishingly prescient "We.")
The lowliest employees of big tech companies, the content moderators whose job it is to spot bad stuff online, offer a rather different perspective. "Remember 'We're the free speech wing of the free speech party'?" one of them asked Alex Feerst of OneZero last year, alluding to an early Twitter slogan. "How vain and oblivious does that sound now? Well, it's the morning after the free speech party, and the place is trashed."
And how.
I don't know if, as the New York Post alleged last week, Democratic presidential nominee Joe Biden met with a Ukrainian energy executive named Vadym Pozharskyi in 2015. I don't know if Biden's son Hunter tried to broker such a meeting as part of his board directorship deal with Pozharskyi's firm, Burisma Holdings. And I am pretty doubtful that the meeting, if indeed it happened, was the reason Biden demanded that the Ukrainian government fire its prosecutor general, Viktor Shokin, who was (allegedly, but probably not) investigating Burisma. I am even open to the theory that the whole story is bunk, the emails fake, and the laptop and its hard drive an infowars gift from Russia, with love.
What I do know is that if I read the story online and found it compelling, I should have been able to share it with friends. Instead, both Facebook and Twitter made a decision to try to kill the Post's scoop.
Andy Stone, the former Democratic Party staffer who is now Facebook's policy communications manager, announced that his company would be "reducing the distribution" of the Post story. Twitter barred its users from sharing it not only with followers but also through direct messages, locking the accounts of people, including White House press secretary Kayleigh McEnany, who retweeted it.
This is not an isolated incident. In May, Twitter attached a health warning to one of President Trump's tweets. There was uproar at Facebook when chief executive Mark Zuckerberg declined to follow Twitter's lead. Days later, Facebook was pressured into taking down 88 Trump campaign ads that used an inverted red triangle (a Nazi symbol) to attack antifa, the far-left movement. In August, Facebook removed a group with nearly 200,000 members for repeatedly posting content that "violated our policies." The group promoted the QAnon conspiracy theory, which is broadly pro-Trump. Earlier this month, the company deleted all QAnon accounts from its platforms.
Google has been doing the same sort of thing. In June, it excluded the website ZeroHedge from its ad platform because of violations in the comments sections of stories about Black Lives Matter.
The remarkable thing is not that Silicon Valley is playing a highly questionable role in the election of 2020. It is that the same was true in 2016 and, despite a great many fine words and some minor pieces of legislation, Americans did nothing about it.
Far from addressing the glaring problems created by the rise of the network platforms that now dominate the American (and indeed the global) public sphere, we largely decided to shut our eyes and ears to them. In the past 10 months, I've read as many op-ed articles and reports about this election as I can stand. I'm staggered by how few even mention the role of the internet and social media. (Kevin Roose's work on the conservative dominance of Facebook shared content is an honorable exception.) You would think it was still the 1990s, as if this contest will be decided by debates on television, newspaper endorsements or stump speeches, and accurately predicted by opinion polls. (Actually, make that the 1960s.)
Yet the new role of social media is staring us in the face (literally). The number of U.S. Facebook users was 240 million in 2019, more than 72% of the population. Adults spend an average of 75 minutes of each day on social media. Half that time is on Facebook. Google accounts for 88% of the U.S. search-engine market, and 95% of all mobile searches. Between them, Google and Facebook captured a combined 60% of U.S. digital-ad spending in 2018.
The top U.S. tech companies are now among the biggest businesses on earth by market capitalization. But their size is not the important thing about them. Earlier this month, the House Judiciary Committee's Antitrust Subcommittee released the findings of its 16-month-long investigation into Big Tech. The conclusion? Apple, Amazon, Google and Facebook each "possess significant market power over large swaths of our economy. In recent years, each company has expanded and exploited their power of the marketplace in anticompetitive ways."
Cue years of antitrust actions that will enrich a great many lawyers and have minimal consequences for competition, like the ultimately failed attempt 20 years ago to prevent Microsoft from dominating software.
An antitrust action against Amazon is doomed. Consumers love the company. It has measurably reduced the prices of innumerable products, as well as rendering shopping in bricks-and-mortar stores an obsolescent activity. Good luck, too, with breaking up Google. Even the much less trusted Facebook (according to polls) will be hard to dismantle without a complete transformation of the way the courts apply competition law. It's free, for heaven's sake. And there are network effects on the internet that can't be wished away by judges.
Is it stupidity or venality that has convinced America's legislators that antitrust is the answer to the problem of Big Tech? A bit of both, I suspect. Either way, it's the wrong answer.
The core problem is not a lack of competition in Silicon Valley. It is that the network platforms are now the public sphere. Every other part of what we call the media (newspapers, magazines, even cable TV) is now subordinated to them. In 2019, the average American spent 6 hours and 35 minutes a day using digital media, more than television, radio and print put together.
Not only do the big tech companies dominate ad revenue, they drive the news cycle. In 2017, two-thirds of American adults said they got news from social media sites. A Pew study showed that, at the end of 2019, 18% of them relied primarily on social media for political news. Among those aged 30 to 49, the share was 40%; among those aged 18 to 29, it was 48%. The pathologies that flow from this new reality are numerous. Antitrust actions address none of them.
"I thought once everybody could speak freely and exchange information and ideas, the world is automatically going to be a better place," Evan Williams, one of the founders of Twitter, told the New York Times in 2017. "I was wrong about that." Indeed, he was.
Subject to the most minimal regulation in their country of origin (far less than the TV networks in their heyday), the network platforms tend, because of their central imperative to sell the attention of their users to advertisers, to pollute national discourse with a torrent of fake news and extreme views. The effects on the democratic process, not only in the U.S. but all over the world, have been deeply destabilizing.
Moreover, the vulnerability of the network platforms to outside manipulation has posed and continues to pose a serious threat to national security. Yet half-hearted and ill-considered attempts by the companies to regulate themselves better have led to legitimate complaints that they are restricting free speech.
How did we arrive at a state of affairs in which such important components of the public sphere could operate solely with regard to their own profitability as attention merchants? The answer lies in the history of American internet regulation; to be precise, in Section 230 of the 1934 Communications Act, as amended by the 1996 Telecommunications Act, which was enacted after a New York court held the online service provider Prodigy liable for a user's defamatory posts.
Previously, a company that managed content was classified as a publisher and subject to civil liability, creating a perverse incentive not to manage content at all. Thus, Section 230(c), "Protection for 'Good Samaritan' blocking and screening of offensive material," was written to encourage nascent firms to protect users and prevent illegal activity without incurring massive content-management costs. It states:
1. No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
2. No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.
In essence, Section 230 gave, and still gives, websites immunity from liability for what their users post (under-filtering), but it also protects them when they choose to remove content (over-filtering). The idea was to split the difference between publishers' liability, which would have stunted the growth of the fledgling internet, and a complete lack of curation, which would have led to a torrent of filth. The surely unintended result is that some of the biggest companies in the world today are utilities when they are acting as publishers, but publishers when acting as utilities, in a way rather reminiscent of Joseph Heller's "Catch-22."
Here's how Catch-22 works. If one of the platforms hosts content that is mendacious, defamatory or in some other way harmful, and you sue, the Big Tech lawyers will cite Section 230: "Hey, we're just a tech company; it's not our malicious content." But if you write something that falls afoul of their content-moderation rules and duly vanishes from the internet, they'll cite Section 230 again: "Hey, we're a private company; the First Amendment doesn't apply to us."
Remember the "Good Censor"? Another influential way of describing the network platforms is as the "New Governors." That creeps me out, the way Zuckerberg's admiration of Augustus Caesar creeps me out.
For years, of course, the big technology companies have filtered out child pornography and (less successfully) terrorist propaganda. But there has been mission creep. In 2015, Twitter added a new line to its rules that barred "promoting violence against others on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability." Repeatedly throughout the Trump presidency (for example, after the violence in Charlottesville, Virginia, in 2017), there have been further modifications to the platforms' terms of service and community standards, as well as to their non-public content moderation policies.
There is no need to detail all the occasions in recent years when mostly right-leaning content was censored, buried far down the search results, or demonetized. The key point is that, in the absence of a coherent reform of the way the network platforms are themselves governed, there has been a dysfunctional tug-of-war between the platforms' spasmodic and not wholly sincere efforts to fix themselves and the demands of outside actors (ranging from the German government to groups of left-wing activists) for more censorship of whatever they deem to be hate speech.
At the same time, the founding generation of Silicon Valley entrepreneurs, most of whom had libertarian inclinations, have repeatedly yielded to internal pressure from their younger employees, schooled in the modern campus culture of no-platforming any individuals whose ideas they consider "unsafe." In the words of Brian Amerige, whose career at Facebook ended not long after he created a "FB'ers for Political Diversity" group, the company's employees are "quick to attack, often in mobs, anyone who presents a view that appears to be in opposition to left-leaning ideology."
The net result seems to be the worst of both worlds. On the one hand, conspiracy theories such as "Plandemic" flourish on Facebook and elsewhere. On the other, the network platforms arbitrarily intervene when a legitimate article triggers the hate-speech-spotting algorithms and the content-moderating grunts. (As one of them described the process: "I was like, 'I can just block this entire domain, and they won't be able to serve ads on it?' And the answer was, 'Yes.' I was like, 'But I'm in my mid-twenties.'")
At a lecture at Georgetown University in October 2019, Zuckerberg pledged to continue to stand for free expression and against an ever-expanding definition of what speech is harmful. But even Facebook has had to ramp up the censorship this year. The bottom line is that the good censors are not very good and the new governors can't even govern themselves.
Two years ago, I wrote a lengthy paper on all this with a well-worn title: "What Is to Be Done?" Since then, almost nothing has been done, beyond some legislative tinkering at the margins. The public has been directed down a series of blind alleys: not only antitrust, but also net neutrality and an inchoate notion of tighter regulation. In reality, as I argued then, only two reforms will fix this godawful mess.
First, we need to repeal or significantly amend Section 230, making the network platforms legally liable for the content they host, and leaving the rest to the courts. Second, we need to impose the equivalent of First Amendment obligations on the network platforms, recognizing that they are too dominant a part of the public sphere to be able to regulate access to it on the basis of their own privately determined and almost certainly skewed community standards.
To such proposals, Big Tech lawyers respond by lamenting that they would massively increase their clients' legal liabilities. Yes. That is the whole idea. The platforms will finally discover that there are risks to being a publisher and responsibilities that come with near-universal usage.
In recent years, these ideas have won growing support, and not only among Republican legislators such as Senator Josh Hawley. In the words of Judge Alex Kozinski in Fair Housing Council v. Roommates.com (2008), the Internet "has outgrown its swaddling clothes and no longer needs to be so gently coddled." He was referring to Section 230, which gives the tech giants a now-indefensible advantage over traditional publishers, while at the same time empowering them to act as censors.
While Section 230 protects internet companies from liability for removing any content that they believe to be "obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable," successive court rulings have clearly established that the last two words weren't intended to permit discrimination against particular political viewpoints.
Meanwhile, in Packingham v. North Carolina (2017), the Supreme Court overturned a state law that banned sex offenders from using social media. In the opinion, Justice Anthony Kennedy likened internet platforms to "the modern public square," arguing that it was therefore unconstitutional to prevent even sex offenders from accessing, and expressing opinions on, social-network platforms. In other words, despite being private companies, the big tech companies have a public function.
If the network platforms are the modern public square, then it cannot be their responsibility to remove hateful content (as 19 prominent civil rights groups demanded of Facebook in October 2017), because hateful content, unless it explicitly instigates violence against a specific person, is protected by the First Amendment.
Unfortunately, this sea change has come too late for root-and-branch reform to be enacted under the Trump administration. And, contemplating the close links between Silicon Valley and Senator Kamala Harris, I see little prospect of progress, other than down the antitrust cul-de-sac, if she is elected vice president next month. Quite apart from the bountiful campaign contributions Harris and the rest of the Democratic Party elite receive from Big Tech, they have no problem at all with Facebook, Twitter and company seeking to kill stories like Huntergate.
In 1931, British Prime Minister Stanley Baldwin accused the principal newspaper barons of the day, Lords Beaverbrook and Rothermere, of aiming at "power, and power without responsibility, the prerogative of the harlot throughout the ages." (The phrase was his cousin Rudyard Kipling's.) As I contemplate the under-covered and overmighty role that Big Tech continues to play in the American political process, I don't see good censors. I see big, bad harlots.
(Updated to clarify details of Ukrainian prosecutor's investigation in sixth paragraph of article published Oct. 18)
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story: Niall Ferguson at nferguson23@bloomberg.net
To contact the editor responsible for this story: Tobin Harshaw at tharshaw@bloomberg.net