A few weeks ago, I was having a chat with my neighbor Tom, an amateur chemist who conducts experiments in his apartment. I have a longtime fascination with chemistry and always enjoy talking with him. But this conversation was scary. If his latest experiment succeeded, he informed me, it might play some part in curing cancer. If it failed, however, there was a reasonable chance, according to his calculations, that it would trigger an explosion leveling the entire apartment complex.
Perhaps Tom was lying, or maybe he was delusional. But what if he really was just one test-tube clink away from blowing me and dozens of our fellow building residents sky high? What should one do in this situation? After a brief deliberation, I decided to call 911. The police rushed over, searched his apartment and, after an investigation, decided to confiscate all of his chemistry equipment and bring him in for questioning.
The above scenario is a thought experiment. As far as I know, no one in my apartment complex is an amateur chemist experimenting with highly combustible compounds. I've spun this fictional tale because it's a perfect illustration of the situation that all of us are in with respect to the AI companies trying to build artificial general intelligence, or AGI. The list of such companies includes DeepMind, OpenAI, Anthropic and xAI, all of which are backed by billions of dollars. Many leading figures at these very companies have claimed, in public, while standing in front of microphones, that one possible outcome of the technology they are explicitly trying to build is that everyone on Earth dies. The only sane response is to immediately call 911 and report them to the authorities. They are saying that their own technology might kill you, me, our family members and friends: the entire human population. And almost no one is freaking out about this.
It's crucial to note that you don't have to believe AGI will actually kill everyone on Earth to be alarmed; I myself am skeptical of these claims. Even if one suspects Tom of lying about his chemistry experiments, the mere fact that he told me his actions could kill everyone in our apartment complex is enough to justify dialing 911.
What exactly are AI companies saying about the potential dangers of AGI? During a 2023 talk, OpenAI CEO Sam Altman was asked whether AGI could destroy humanity, and he responded: "The bad case, and I think this is important to say, is, like, lights out for all of us." In earlier interviews, he declared: "I think AI will most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning," and "probably AI will kill us all, but until then we're going to turn out a lot of great students." The audience laughed at this. But was he joking? If he was, he was also serious: a 2023 article on the OpenAI website itself states that the risks of AGI may be "existential," meaning roughly that they could wipe out the entire human species. Another article on the site affirms that "a misaligned superintelligent AGI could cause grievous harm to the world."
In a 2015 post on his personal blog, Altman wrote that the development of superhuman machine intelligence "is probably the greatest threat to the continued existence of humanity." Whereas AGI refers to any artificial system that is at least as competent as humans in every cognitive domain of importance, such as science, mathematics, social manipulation and creativity, an SMI is a type of AGI that is superhuman in its capabilities. Many researchers in the field of AI safety believe that once we have AGI, we will have superintelligent machines very shortly after. The reason is that designing increasingly capable machines is itself an intellectual task, so the smarter these systems become, the better they will become at designing even smarter systems. Hence, the first AGIs will design the next generation of even smarter AGIs, until those systems reach superhuman levels.
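The structure of this argument is a simple feedback loop: capability feeds back into the rate of capability growth. As a purely illustrative toy model, one might sketch it in a few lines of Python; the growth function and every number in it are arbitrary assumptions of mine, not anything proposed by the researchers quoted here.

```python
# Toy model of the recursive self-improvement argument sketched above.
# Assumption (mine, for illustration only): each generation of systems
# designs its successor, and its design ability scales with its own
# capability, so improvement compounds faster than exponentially.

def next_generation(capability: float, gain: float = 0.5) -> float:
    """Successor capability grows in proportion to the designer's own skill."""
    return capability * (1.0 + gain * capability)

capability = 1.0  # call human-level design ability 1.0 (arbitrary unit)
for generation in range(6):
    print(f"generation {generation}: capability = {capability:.2f}")
    capability = next_generation(capability)
```

Under these made-up assumptions, the numbers run away within a handful of generations, which is the intuition behind "very shortly after." Whether real systems would behave anything like this is precisely what is in dispute.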
Again, one doesn't need to accept this line of reasoning to be alarmed when the CEO of the most powerful AI company that's trying to build AGI says that superintelligent machines might kill us.
Just the other day, an employee at OpenAI who goes by "roon" on Twitter/X tweeted that "things are accelerating. Pretty much nothing needs to change course to achieve AGI." Worrying about timelines, that is, about whether AGI will be built later this year or 10 years from now, "is idle anxiety, outside your control. You should be anxious about stupid mortal things instead. Do your parents hate you? Does your wife love you?" In other words, AGI is right around the corner and its development cannot be stopped. Once created, it will bring about the end of the world as we know it, perhaps by killing everyone on the planet. Hence, you should be thinking not so much about when exactly this might happen as about the more mundane things that are meaningful to us humans: Do we have our lives in order? Are we on good terms with our friends, family and partners? When a plane begins to nosedive toward the ground, most people turn to their partner and say "I love you," or try to send a few last text messages to loved ones to say goodbye. That, according to someone at OpenAI, is what we should be doing right now.
A similar sentiment has been echoed by other notable figures at OpenAI, such as Altman's co-founder Ilya Sutskever. "The future is going to be good for the AIs regardless," he said in 2019. "It would be nice if it would be good for humans as well." He added, ominously, that "I think it's pretty likely the entire surface of the Earth will be covered with solar panels and data centers" once we create AGI, referencing the idea that AGI is dangerous partly because it will seek to harness every resource it can; in the process, humanity could be destroyed as an unintended side effect. Indeed, Sutskever tells us that the AGI his own company is trying to build probably isn't
going to actively hate humans and want to harm them, but it's just going to be too powerful, and I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and have a lot of affection for them, but when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs, which are truly autonomous and operating on their own behalf.
The good folks (by which I mean quasi-homicidal folks) at OpenAI aren't the only ones being honest about how their work could lead to the annihilation of our species. Dario Amodei, the CEO of Anthropic, which recently received $4 billion in funding from Amazon, said in 2017 that "there's a long tail of things of varying degrees of badness that could happen" after building AGI: "I think at the extreme end is the fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen." Similarly, Elon Musk, the OpenAI co-founder who recently started his own company to build AGI, named xAI, declared in 2023 that "one of the biggest risks to the future of civilization is AI," and has previously said that "being very close to the cutting edge in AI scares the hell out of me." Why? Because advanced AI is "capable of vastly more than almost anyone knows and the rate of improvement is exponential."
Even the CEO of Google, Sundar Pichai, told Sky News last year that advanced AI "can be very harmful if deployed wrongly," and that with respect to safety issues, "we don't have all the answers there yet, and the technology is moving fast. So does that keep me up at night? Absolutely."
Google currently owns DeepMind, which was co-founded in 2010 by a computer scientist named Shane Legg. During a talk one year before DeepMind was founded, Legg claimed that "if we can build human-level AI, then we can almost certainly scale up to well above human level. A machine well above human level will understand its design and be able to design even more powerful machines," which gestures back at the idea that AGI could take over the job of designing even more advanced AI systems than itself. "We have almost no idea how to deal with this," he added. During the same talk, Legg said that we aren't going to develop a theory of how to keep AGI safe before AGI itself is developed: "I've spoken to a bunch of people," he reported, and "none of them, that I've ever spoken to, think they will have a practical theory of friendly artificial intelligence in about 10 years' time. We have no idea how to solve this problem."
That's worrying, because many researchers at the major AI companies argue, as roon suggested, that AGI may be just around the corner. In a recent interview, Demis Hassabis, another co-founder of DeepMind, said: "When we started DeepMind back in 2010, we thought of it as a 20-year project, and actually I think we're on track. So, I wouldn't be surprised if we had AGI-like systems within the next decade." When asked what it would take to make sure that an AGI smarter than a human is safe, his answer was, as one commentator put it, "a grab bag of half-baked ideas." Maybe, he says, we can use less capable AIs to help us keep the AGIs in check. But maybe that won't work; who knows? Either way, DeepMind and the other AI companies are plowing ahead with their efforts to build AGI, while simultaneously acknowledging, in public, on record, that their products could destroy the entire world.
This is, in a word, madness. If you're driving in a car with me, and I tell you that earlier today I attached a bomb to the bottom of the car, and that it might (or might not!) go off if we hit a pothole, then whether or not you believe me, you should be extremely alarmed. That is a very scary thing to hear someone say at 60 miles an hour on a highway. You should, indeed, turn to me and scream: "Stop this damn car right now. Let me out immediately; I don't want to ride with you anymore!"
Right now, we're in that car, with these AI companies driving. They have turned to us on numerous occasions over the past decade and a half and admitted that they've attached a bomb to the car, and that it might (or might not!) explode in the near future, killing everyone inside. That's an outrageous situation to be in, and more people should be screaming at them to stop what they're doing immediately. More people should be dialing 911 and reporting the incident to the authorities, as I did with Tom in the fictional scenario above.
I do not know whether AGI will kill everyone on Earth; I'm more focused on the profound harms these AI companies have already caused through worker exploitation, massive intellectual property theft, algorithmic bias and so on. The point is that it is completely unacceptable that the people leading or working for these AI companies believe that what they're doing could kill you, your family, your friends and even your pets (who will feed your fluffy companions if you cease to exist?), yet continue to do it anyway. One doesn't need to completely buy into the claim that AGI might destroy humanity to see that someone who says their work might destroy humanity should not be doing whatever it is they're doing. As I've shown before, there have been several episodes in recent human history when scientists declared that we were on the verge of creating a technology that would destroy the world, and nothing came of it. But that's irrelevant. If someone tells you that they have a gun and might shoot you, that should be more than enough to sound the alarm, even if you believe they don't, in fact, have a gun hidden under their bed.
Either these AI companies need to show, right now, that the systems they're building are completely safe, or they need to stop, right now, trying to build those systems. Something about this situation needs to change immediately.
Read the rest at Truthdig: "The Madness of the Race to Build Artificial General Intelligence."
- "Zero tolerance" for hallucinations - Dr. Vishal Sikka on how Vianai builds AI applications, and the mixed emotions of the AI hype cycle -... [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- People warned AI is becoming like a God and a 'catastrophe' is ... - UNILAD [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- The Politics of Artificial Intelligence (AI) - National and New Jersey ... - InsiderNJ [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Top Philippine universities - Philstar.com [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- 'Godfather' of AI is now having second thoughts - The B.C. Catholic [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- What is Augmented Intelligence? Explanation and Examples - Techopedia [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- AIs Impact on Journalism - Signals AZ [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Vintage AI Predictions Show Our Hopes and Fears Aren't New ... - Gizmodo [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Paper Claims AI May Be a Civilization-Destroying "Great Filter" - Futurism [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Operation HOPE and CAU Host ChatGPT Creator to Discuss AI - Black Enterprise [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- How AI Knows Things No One Told It - Scientific American [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- The Potential of AI in Tax Practice Relies on Understanding its ... - Thomson Reuters Tax & Accounting [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Hippocratic AI launches With $50M to power healthcare chatbots - VatorNews [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Zoom Invests in and Partners With Anthropic to Improve Its AI ... - PYMNTS.com [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- What marketers should keep in mind when adopting AI - MarTech [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- AI glossary: words and terms to know about the booming industry - NBC News [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Is medicine ready for AI? Doctors, computer scientists, and ... - MIT News [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Artificial General Intelligence is the Answer, says OpenAI CEO - Walter Bradley Center for Natural and Artificial Intelligence [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Microsoft's 'Sparks of AGI' ignite debate on humanlike AI - The Jerusalem Post [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- Can We Stop Runaway A.I.? - The New Yorker [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- ChatGPT egg balancing task convinced Microsoft that AGI is closer - Business Insider [Last Updated On: May 18th, 2023] [Originally Added On: May 18th, 2023]
- AI in taxation: Transforming or replacing? - Times of Malta [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- Superintelligence Unleashed: Navigating the Perils and Promises of Tomorrow's AI Landscape with - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- AI Revolution: Unleashing the Power of Artificial Intelligence in Our Lives - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- Top 5 Myths About AI Debunked. Unraveling the Truth: Separating AI | by Michiel Meire | Dec, 2023 - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- 10 Scary Breakthroughs AI Will Make in 2024 | by AI News | Dec, 2023 - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- AGI predictions for 2024. The major LLM players, as well as | by Paul Pallaghy, PhD | Dec, 2023 - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- 2024 Tech Predictions: From Sci-Fi Fantasy to Reality - Exploring Cinematic Tech Prophecies - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- 2023 Was A Breakout Year for AI - What Can We Expect Looking Forward? - Securities.io [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- 6. AI in Everyday Life How Artificial Intelligence is Impacting Society - Medium [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- What Is Artificial Intelligence (AI)? - Council on Foreign Relations [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- 3 scary breakthroughs AI will make in 2024 - Livescience.com [Last Updated On: January 2nd, 2024] [Originally Added On: January 2nd, 2024]
- Bret Taylor and Clay Bavor talk AI startups, AGI, and job disruptions - Semafor [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- Future of Artificial Intelligence: Predictions and Impact on Society - Medriva [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- What is Artificial General Intelligence (AGI) and Why It's Not Here Yet: A Reality Check for AI Enthusiasts - Unite.AI [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- Why, Despite All the Hype We Hear, AI Is Not One of Us - Walter Bradley Center for Natural and Artificial Intelligence [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- With Sora, OpenAI highlights the mystery and clarity of its mission | The AI Beat - VentureBeat [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- What is AI? A-to-Z Glossary of Essential AI Terms in 2024 - Tech.co [Last Updated On: February 22nd, 2024] [Originally Added On: February 22nd, 2024]
- Vitalik Buterin and Sandeep Nailwal headline decentralized agi summit @ Ethdenver tackling threats of centralized AI - Grit Daily [Last Updated On: February 24th, 2024] [Originally Added On: February 24th, 2024]
- AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET [Last Updated On: February 24th, 2024] [Originally Added On: February 24th, 2024]
- Generative AI Defined: How It Works, Benefits and Dangers - TechRepublic [Last Updated On: February 24th, 2024] [Originally Added On: February 24th, 2024]
- Bill Foster, a particle physicist-turned-congressman, on why he's worried about artificial general intelligence - FedScoop [Last Updated On: February 24th, 2024] [Originally Added On: February 24th, 2024]
- Google DeepMind C.E.O. Demis Hassabis on the Path From Chatbots to A.G.I. - The New York Times [Last Updated On: February 24th, 2024] [Originally Added On: February 24th, 2024]
- OpenAI, Salesforce and Others Boost Efforts for Ethical AI - PYMNTS.com [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Artificial Superintelligence Could Arrive by 2027, Scientist Predicts - Futurism [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Among the A.I. Doomsayers - The New Yorker [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Meta hooks up with Hammerspace for advanced AI infrastructure project Blocks and Files - Blocks & Files [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Employees at Top AI Labs Fear Safety Is an Afterthought - TIME [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Rejuve.Bio Launches Groundbreaking Crowd Fund on NetCapital to Pioneer the Future of Artificial General ... - PR Newswire [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- DeepMind Co-founder on AGI and the AI Race - SXSW 2024 - AI Business [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Beyond human intelligence: Claude 3.0 and the quest for AGI - VentureBeat [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- What is general intelligence in the world of AI and computers? The race for the artificial mind explained - PC Gamer [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- Amazon's VP of AGI: Arrival of AGI Not 'Moment in Time' SXSW 2024 - AI Business [Last Updated On: March 14th, 2024] [Originally Added On: March 14th, 2024]
- US government warns AI may be an 'extinction-level threat' to humans - TweakTown [Last Updated On: March 18th, 2024] [Originally Added On: March 18th, 2024]
- Types of Artificial Intelligence That You Should Know in 2024 - Simplilearn [Last Updated On: March 18th, 2024] [Originally Added On: March 18th, 2024]
- Companies Like Morgan Stanley Are Already Making Early Versions of AGI - Observer [Last Updated On: March 18th, 2024] [Originally Added On: March 18th, 2024]
- Artificial general intelligence and higher education - Inside Higher Ed [Last Updated On: March 18th, 2024] [Originally Added On: March 18th, 2024]
- Will AI save humanity? U.S. tech fest offers reality check - Japan Today [Last Updated On: March 18th, 2024] [Originally Added On: March 18th, 2024]
- Scientists create AI models that can talk to each other and pass on skills with limited human input - Livescience.com [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI - PYMNTS.com [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings - Brookings Institution [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- What was (A)I made for? - by The Ink - The.Ink [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Elon Musk Believes 'Super Intelligence' Is Inevitable and Could End Humanity - Observer [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Beyond the Buzz: Clear Language is Necessary for Clear Policy on AI | TechPolicy.Press - Tech Policy Press [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Whoever develops artificial general intelligence first wins the whole game - ForexLive [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Creating 'good' AGI that won't kill us all: Crypto's Artificial Superintelligence Alliance - Cointelegraph [Last Updated On: March 29th, 2024] [Originally Added On: March 29th, 2024]
- Analyzing the Future of AI - Legal Service India [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders - CMSWire [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- Roundup: AI and the Resurrection of Usability - Substack [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- The 3 phases of AI evolution that could play out this century - Big Think [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- Can AI ever be smarter than humans? | Context - Context [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? - The New York Times [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- AGI in Less Than 5 years, Says Former OpenAI Employee - - 99Bitcoins [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- AI Ambassadors: 3 Stocks Bridging the Gap Between Humanity and Machine - InvestorPlace [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- What Ever Happened to the AI Apocalypse? - New York Magazine [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- What aren't the OpenAI whistleblowers saying? - Platformer [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- Former OpenAI researcher foresees AGI reality in 2027 - Cointelegraph [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]
- Employees claim OpenAI, Google ignoring risks of AI and should give them 'right to warn' public - New York Post [Last Updated On: June 6th, 2024] [Originally Added On: June 6th, 2024]