For a few years now, lots of people have been wondering what Sam Altman thinks about the future, or perhaps what he knows about it, as the CEO of OpenAI, the company that kicked off the recent AI boom. He's been happy to tell them about the end of the world. "If this technology goes wrong, it can go quite wrong," he told a Senate committee in May 2023. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said last June. "A misaligned superintelligent AGI could cause grievous harm to the world," he wrote in a blog post on OpenAI's website that year.
Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. "AI will probably, like, most likely lead to the end of the world, but in the meantime, there'll be great companies," he cracked during an interview in 2015. "Probably AI will kill us all," he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plans to flee with his friend Peter Thiel in the event of an apocalypse (either to New Zealand or to a big patch of land in Big Sur he could fly to). Then Altman wrote on his personal blog that superhuman machine intelligence is "probably the greatest threat to the continued existence of humanity." Returning, again, to last year: "The bad case, and I think this is important to say, is, like, lights out for all of us." He wasn't alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name to a group statement arguing that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," joining a range of people working in or interested in AI, including notable figures at Google, OpenAI, Microsoft, and xAI.
The tech industry's next big thing might be a doomsday machine, according to the tech industry, and the race is on to summon a technology that might end the world. It's a strange mixed message, to say the least, but it's hard to overstate how thoroughly the apocalypse, invoked as a serious worry or as a reflexive aside, has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen long-standing theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material and fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.
In the last few months, though, companies including OpenAI have started telling a slightly different story. After years of warning about infinite downside risk (and acting as though they had no choice but to take it), they're focusing on the positive. The doomsday machine we're working on? Actually, it's a powerful enterprise software platform. From the Financial Times:
The San Francisco-based company said on Tuesday that it had started producing a new AI system "to bring us to the next level of capabilities" and that its development would be overseen by a "new safety and security committee."
But while OpenAI is racing ahead with AI development, a senior OpenAI executive seemed to backtrack on previous comments by its chief executive Sam Altman that it was ultimately aiming to build a "superintelligence" far more advanced than humans.
Anna Makanju, OpenAI's vice-president of global affairs, told the Financial Times in an interview that its mission was to build artificial general intelligence capable of "cognitive tasks that are what a human could do today."
"Our mission is to build AGI; I would not say our mission is to build superintelligence," Makanju said.
The story also notes that in November, in the context of seeking more money from OpenAI partner Microsoft, Altman said he was spending a lot of time thinking about how to build superintelligence, but also, more gently, that his company's core product was not a fearsome self-replicating software organism with unpredictable emergent traits but a form of "magic intelligence in the sky."
Shortly after that statement, Altman would be temporarily ousted from OpenAI by a board that deemed him not sufficiently candid, a move that triggered external speculation that a major AI breakthrough had spooked safety-minded members. (More recent public statements from former board members were forceful but personal, accusing Altman of a pattern of lying and manipulation.)
After his return, Altman consolidated his control of the company, and some of his internal antagonists left or were pushed out. OpenAI then dissolved the team charged with achieving "superalignment" (in the company's words, managing risks that could "lead to the disempowerment of humanity or even human extinction") and replaced it with a new safety team run by Altman himself, who also stood accused of voice theft by Scarlett Johansson. Its safety announcement was terse and notably lacking in evocative doomsaying. "This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment." It's the sort of careful, vague corporate language you might expect from a company that's comprehensively dependent on one tech giant (Microsoft) and is closing in on a massive licensing deal with that giant's competitor (Apple).
In other news, longtime AI doomsayer Elon Musk, who co-founded OpenAI but split with the firm and later (incoherently and perhaps disingenuously) sued it for abandoning its nonprofit mission in pursuit of profit, raised $6 billion for his unapologetically for-profit competitor, xAI. His grave public warnings about superintelligence now take the form of occasional X posts about memes.
There are a few different ways to process this shift. If you're deeply worried about runaway AI, this is a short horror story in which a superintelligence is manifesting itself right in front of our eyes, helped along by the few who both knew better and were in any position to stop it, in a short-sighted exchange for wealth. What's happened so far is basically compatible with your broad predictions and well-articulated warnings, which far predated the current AI boom: All it took for mankind to summon a vengeful machine god was the promise of ungodly sums of money.
Similarly, if you believe in and are excited about runaway AI, this is all basically great. The system is working, the singularity is effectively already here, and failed attempts to alter or slow AI development were in fact near misses with another sort of disaster. (This perspective exists among at least a few people at OpenAI.)
If you're more skeptical of AI-doomsday predictions, you might generously credit this shift to a gradual realization among industry leaders that current generative-AI technology, now receiving hundreds of billions of dollars of investment and deployed in the wild at scale, is not careening toward superintelligence, consciousness, or rogue malice. They're simply adjusting their story to fit the facts of what they're seeing.
Or maybe, for at least some in the industry, apocalyptic stories were plausible in the abstract, compelling, attention-grabbing, and interesting to talk about, and they turned out to be useful marketing devices. They were stories that dovetailed nicely with the concerns of some of the domain experts these companies needed to recruit, but which seemed like harmless and ultimately cautious intellectual exercises to those who didn't share them. (Altman, it should be noted, is an investor and executive, not a machine-learning engineer or AI researcher.) Apocalyptic warnings were an incredible framing device for a class of companies that needed to raise enormous amounts of money to function: a clever and effective way to make an almost cartoonishly brazen pitch to investors (we are the best investment of all time, with infinite upside) in a disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital. Routine acknowledgments of abstract danger were also useful for feigning openness to theoretical regulation (help us help you avoid the end of the world!) while fighting material regulation in private. They raised the stakes to intoxicating heights.
As soon as AI companies made actual contact with users, clients, and the general public, though, this apocalyptic framing flipped into a liability. It suggested risk where risk wasn't immediately evident. In a world where millions of people engage casually with chatbots, where every piece of software suddenly contains an awkward AI assistant, and where Google is pumping AI content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, somewhat counterintuitively, feel like a non sequitur. Encounters with modern chatbots and LLM-powered software might cause users to wonder about their jobs or trigger a general sense of wonder or unease about the future; they do not, in their current state, seem to strike fear into users' hearts. Mostly, they're showing up as new features in old software used at work.
The AI industry's sudden lack of interest in the end of the world might also be understood as an exaggerated version of corporate America's broader turn away from talking about ESG and DEI: as profit driven, sure, but also as evidence that the initial commitments to mitigating harmful externalities were disingenuous and profit motivated from the start and simply outlived their usefulness as marketing stories. It also signals a loss of narrative control. In 2022, OpenAI could frame the future however it wanted. In 2024, it's dealing with external expectations about the present, from partners and investors that are less interested in speculating about the future of mankind, or in conceptualizing intelligence, than in getting returns on their considerable investments, preferably within the fiscal year.
Again, none of this is particularly comforting if you think that Altman and Musk were right to warn about ending the world, even by accident, even out of craven self-interest, or if you're concerned about the merely very bad externalities, the many small apocalypses that AI deployment is already producing and is likely to keep producing.
But AI's sudden rhetorical downgrade might be clarifying, too, at least about the behavior of the largest firms and their leaders. If OpenAI starts communicating more like a company, it will be less tempting to mistake it for something else as it argues for the imminence of a benign but barely less speculative variation of AGI, with its softer implication of infinite returns by way of semi-apocalyptic workplace automation. If its current leadership ever believed what they were saying, they're certainly not acting like it, and in hindsight, they never really were. The apocalypse was just another pitch. Let it be a warning about the next one.