What Ever Happened to the AI Apocalypse?

June 6, 2024


For a few years now, lots of people have been wondering what Sam Altman thinks about the future, or perhaps what he knows about it, as the CEO of OpenAI, the company that kicked off the recent AI boom. He's been happy to tell them about the end of the world. "If this technology goes wrong, it can go quite wrong," he told a Senate committee in May 2023. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said last June. "A misaligned superintelligent AGI could cause grievous harm to the world," he wrote in a blog post on OpenAI's website that year.

Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. "AI will probably, like, most likely lead to the end of the world, but in the meantime, there'll be great companies," he cracked during an interview in 2015. "Probably AI will kill us all," he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plan to flee, in the event of an apocalypse, with his friend Peter Thiel (either to New Zealand or to a big patch of land in Big Sur he could fly to). Then Altman wrote on his personal blog that superhuman machine intelligence "is probably the greatest threat to the continued existence of humanity." Returning, again, to last year: "The bad case, and I think this is important to say, is like lights out for all of us." He wasn't alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name, along with a range of people in and interested in AI, including notable figures at Google, OpenAI, Microsoft, and xAI, to a group statement arguing that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The tech industry's next big thing might be a doomsday machine, according to the tech industry, and the race is on to summon a technology that might end the world. It's a strange mixed message, to say the least, but it's hard to overstate how thoroughly the apocalypse, invoked as a serious worry or a reflexive aside, has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen longstanding theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material, fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.

In the last few months, though, companies including OpenAI have started telling a slightly different story. After years of warning about infinite downside risk (and acting as though they had no choice but to take it), they're focusing on the positive. The doomsday machine we're working on? Actually, it's a powerful enterprise software platform. From the Financial Times:

The San Francisco-based company said on Tuesday that it had started producing a new AI system "to bring us to the next level of capabilities" and that its development would be overseen by a new safety and security committee.

But while OpenAI is racing ahead with AI development, a senior OpenAI executive seemed to backtrack on previous comments by its chief executive Sam Altman that it was ultimately aiming to build a superintelligence far more advanced than humans.

Anna Makanju, OpenAI's vice-president of global affairs, told the Financial Times in an interview that its mission was to build artificial general intelligence capable of "cognitive tasks that are what a human could do today."

"Our mission is to build AGI; I would not say our mission is to build superintelligence," Makanju said.

The story also notes that in November, in the context of seeking more money from OpenAI partner Microsoft, Altman said he was spending a lot of time thinking about how to build superintelligence, but also, more gently, that his company's core product was, rather than a fearsome self-replicating software organism with unpredictable emergent traits, a form of "magic intelligence in the sky."

Shortly after that statement, Altman would be temporarily ousted from OpenAI by a board that deemed him not sufficiently candid, a move that triggered external speculation that a major AI breakthrough had spooked safety-minded members. (More recent public statements from former board members were forceful but personal, accusing Altman of a pattern of lying and manipulation.)

After his return, Altman consolidated his control of the company, and some of his internal antagonists left or were pushed out. OpenAI then dissolved the team charged with achieving "superalignment" (in the company's words, managing risks that could "lead to the disempowerment of humanity or even human extinction") and replaced it with a new safety team run by Altman himself, who also stood accused of voice theft by Scarlett Johansson. Its safety announcement was terse and notably lacking in evocative doomsaying. "This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment." It's the sort of careful, vague corporate language you might expect from a company that's comprehensively dependent on one tech giant (Microsoft) and is closing in on a massive licensing deal with its competitor (Apple).

In other news, longtime AI doomsayer Elon Musk, who co-founded OpenAI but split with the firm and later (incoherently and perhaps disingenuously) sued it for abandoning its nonprofit mission in pursuit of profit, raised $6 billion for his unapologetically for-profit competitor xAI. His grave public warnings about superintelligence now take the form of occasional X posts about memes.

There are a few different ways to process this shift. If you're deeply worried about runaway AI, this is just a short horror story in which a superintelligence is manifesting itself right in front of our eyes, helped along by the few who both knew better and were in any sort of position to stop it, in some sort of short-sighted exchange for wealth. What's happened so far is basically compatible with your broad prediction and well-articulated warnings that far predated the current AI boom: All it took for mankind to summon a vengeful machine god was the promise of ungodly sums of money.

Similarly, if you believe in and are excited about runaway AI, this is all basically great. The system is working, the singularity is effectively already here, and failed attempts to alter or slow AI development were, in fact, near misses with another sort of disaster (this perspective exists among at least a few people at OpenAI).

If you're more skeptical of AI-doomsday predictions, you might generously credit this shift to a gradual realization among industry leaders that current generative-AI technology, now receiving hundreds of billions of dollars of investment and deployed in the wild at scale, is not careening toward superintelligence, consciousness, or rogue malice. They're simply adjusting their story to fit the facts of what they're seeing.

Or maybe, for at least some in the industry, apocalyptic stories were plausible in the abstract, compelling, attention-grabbing, and interesting to talk about, and turned out to be useful marketing devices. They were stories that dovetailed nicely with the concerns of some of the domain experts the companies needed to hire, but which seemed like harmless and ultimately cautious intellectual exercises to domain experts who didn't share them (Altman, it should be noted, is an investor and executive, not a machine-learning engineer or AI researcher). Apocalyptic warnings were an incredible framing device for a class of companies that needed to raise enormous amounts of money to function, a clever and effective way to make an almost cartoonishly brazen proposal to investors (we are the best investment of all time, with infinite upside) in the disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital. Routine acknowledgments of abstract danger were also useful for feigning openness to theoretical regulation (help us help you avoid the end of the world!) while fighting material regulation in private. They raised the stakes to intoxicating heights.

As soon as AI companies made actual contact with users, clients, and the general public, though, this apocalyptic framing flipped into a liability. It suggested risk where risk wasn't immediately evident. In a world where millions of people engage casually with chatbots, where every piece of software suddenly contains an awkward AI assistant, and where Google is pumping AI content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, somewhat counterintuitively, feel a bit like a non sequitur. Encounters with modern chatbots and LLM-powered software might cause users to wonder about their jobs, or trigger a general sense of wonder or unease about the future; they do not, in their current state, seem to strike fear in users' hearts. Mostly, they're showing up as new features in old software used at work.

The AI industry's sudden lack of interest in the end of the world might also be understood as an exaggerated version of corporate America's broader turn away from talking about ESG and DEI: as profit-driven, sure, but also as evidence that initial commitments to mitigating harmful externalities were themselves disingenuous and profit-motivated at the time, and simply outlived their usefulness as marketing stories. It signals a loss of narrative control. In 2022, OpenAI could frame the future however it wanted. In 2024, it's dealing with external expectations about the present, from partners and investors that are less interested in speculating about the future of mankind, or conceptualizing intelligence, than they are in getting returns on their considerable investments, preferably within the fiscal year.

Again, none of this is particularly comforting if you think that Altman and Musk were right to warn about ending the world, even by accident, even out of craven self-interest, or if you're concerned about the merely very bad externalities (the many small apocalypses) that AI deployment is already producing and is likely to produce.

But AI's sudden rhetorical downgrade might be clarifying, too, at least about the behaviors of the largest firms and their leaders. If OpenAI starts communicating more like a company, it will be less tempting to mistake it for something else, as it argues for the imminence of a benign but barely less speculative variation of AGI, with its softer implication of infinite returns by way of semi-apocalyptic workplace automation. If its current leadership ever believed what they were saying, they're certainly not acting like it, and in hindsight, they never really were. The apocalypse was just another pitch. Let it be a warning about the next one.
