A group of AI whistleblowers claimed in an open letter Tuesday that tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity, and demanded a right to warn the public.
Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that AI companies have strong financial incentives to avoid effective oversight and cited a lack of federal rules on developing advanced AI.
The workers point to potential risks including the spread of misinformation, worsening inequality and even the loss of control of autonomous AI systems, potentially resulting in human extinction, especially as OpenAI and other firms pursue so-called artificial general intelligence, with capabilities on par with or surpassing the human mind.
"Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI," former OpenAI employee Daniel Kokotajlo, one of the letter's organizers, said in a statement. "I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence."
"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo added.
Kokotajlo, who joined OpenAI in 2022 as a researcher focused on charting AI advancements before leaving in April, has placed the probability that advanced AI will destroy or severely harm humanity in the future at a whopping 70%, according to The New York Times, which first reported on the letter.
He believes there's a 50% chance that researchers will achieve artificial general intelligence by 2027.
The letter drew endorsements from two prominent experts known as the "Godfathers of AI": Geoffrey Hinton, who warned last year that the threat of rogue AI was more urgent to humanity than climate change, and Canadian computer scientist Yoshua Bengio. Famed British AI researcher Stuart Russell also backed the letter.
The letter asks AI giants to commit to four principles designed to boost transparency and protect whistleblowers who speak out publicly.
Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.
The AI firms are also asked to foster a culture of open criticism, so long as trade secrets are not disclosed, and to pledge not to enter into or enforce non-disparagement or non-disclosure agreements.
As of Tuesday morning, the letter's signers included 13 AI workers in total. Of those, 11 are current or former employees of OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.
"There should be ways to share information about risks with independent experts, governments, and the public," said Saunders. "Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements."
Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.
When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards were in place.
"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," OpenAI said in a statement.
"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the company added.
Google and Anthropic did not immediately return requests for comment.
The letter was published just days after revelations that OpenAI had dissolved its Superalignment safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could lead to the disempowerment of humanity or even human extinction.
Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had taken a backseat to "shiny products."
Elsewhere, former OpenAI board member Helen Toner, who was part of the group that briefly succeeded in ousting Sam Altman as the firm's CEO last year, alleged that he had repeatedly lied during her tenure.
Toner claimed that she and other board members did not learn about ChatGPT's launch in November 2022 from Altman, and instead found out about its debut on Twitter.
OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.
The company pushed back on Toner's allegations, noting that an outside review had determined that safety concerns were not a factor in Altman's removal.