Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed.
The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI.
Read More: Exclusive: U.S. Must Move Decisively To Avert Extinction-Level Threat from AI, Government-Commissioned Report Says
The report's authors spoke with more than 200 experts, including employees at leading AI labs OpenAI, Google DeepMind, Meta and Anthropic, which are all working towards artificial general intelligence, a hypothetical technology that could perform most tasks at or above the level of a human. The authors shared excerpts of concerns that employees from some of these labs shared with them privately, without naming the individuals or the specific company they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment.
"We have served, through this project, as a de facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME.
One individual at an unspecified AI lab shared worries with the report's authors that the lab has what the report characterized as a "lax approach to safety" stemming from a desire to not slow down the lab's work to build more powerful systems. Another individual expressed concern that their lab had insufficient containment measures in place to prevent an AGI from escaping their control, even though the lab believes AGI is a near-term possibility.
Still others expressed concerns about cybersecurity. "By the private judgment of many of their own technical staff, the security measures in place at many frontier AI labs are inadequate to resist a sustained IP exfiltration campaign by a sophisticated attacker," the report states. "Given the current state of frontier lab security, it seems likely that such model exfiltration attempts are likely to succeed absent direct U.S. government support, if they have not already."
Many of the people who shared those concerns did so while wrestling with the calculation that whistleblowing publicly would likely result in them losing their ability to influence key decisions in the future, says Harris. "The level of concern from some of the people in these labs, about the decision-making process and how the incentives for management translate into key decisions, is difficult to overstate," he tells TIME. "The people who are tracking the risk side of the equation most closely, and are in many cases the most knowledgeable, are often the ones with the greatest levels of concern."
Are you an employee at an AI lab with concerns that you might consider sharing with a journalist? You can contact the author of this piece on Signal at billyperrigo.01
The fact that today's AI systems have not yet led to catastrophic outcomes for humanity, the authors say, is not evidence that bigger systems will be safe in the future. "One of the big themes we've heard from individuals right at the frontier, on the stuff being developed under wraps right now, is that it's a bit of a Russian roulette game to some extent," says Edouard Harris, Gladstone's chief technology officer, who also co-authored the report. "Look, we pulled the trigger, and hey, we're fine, so let's pull the trigger again."
Read More: How We Can Have AI Progress Without Sacrificing Safety or Democracy
Many of the world's governments have woken up to the risk posed by advanced AI systems over the last 12 months. In November, the U.K. hosted an AI Safety Summit where world leaders committed to work together to set international norms for the technology, and in October President Biden issued an executive order setting safety standards for U.S.-based AI labs. Congress, however, has yet to pass an AI law, meaning there are few legal restrictions on what AI labs can and can't do when it comes to training advanced models.
Biden's executive order calls on the National Institute of Standards and Technology to set rigorous standards for tests that AI systems should have to pass before public release. But the Gladstone report recommends that government regulators should not rely heavily on these kinds of AI evaluations, which are today a common practice for testing whether an AI system has dangerous capabilities or behaviors. Evaluations, the report says, can be undermined and manipulated easily, because AI models can be superficially tweaked, or fine-tuned, by their creators to pass evaluations if the questions are known in advance. Crucially, it is easier for these tweaks to teach a model to hide dangerous behaviors better than to remove those behaviors altogether.
The report cites a person described as an expert with direct knowledge of one AI lab's practices, who judged that the unnamed lab is gaming evaluations in this way. AI evaluations can only reveal the presence, but not confirm the absence, of dangerous capabilities, the report argues. "Over-reliance on AI evaluations could propagate a false sense of security among AI developers [and] regulators."