Eleven current and former employees of OpenAI, along with two more from Google DeepMind, posted an open letter today stating that they are unable to voice concerns about risks created by their employers due to confidentiality agreements. Today let's talk about what they said, what they left out, and why lately the AI safety conversation feels like it's going nowhere.
Here's a dynamic we've seen play out a few times now at companies including Meta, Google, and Twitter. First, in a bid to address potential harms created by their platforms, companies hire idealistic workers and charge them with building safeguards into their systems. For a while, the work of these teams gets prioritized. But over time, executives' enthusiasm wanes, commercial incentives take over, and the team is gradually de-funded.
When those roadblocks go up, some of the idealistic employees will speak out, either to a reporter like me, or via the sort of open letter that the AI workers published today. And the company responds by reorganizing the team out of existence, while putting out a statement saying that whatever that team used to work on is now everyones responsibility.
At Meta, this process gave us the whistleblower Frances Haugen. On Google's AI ethics team, a slightly different version of the story played out after the firing of researcher Timnit Gebru. And in 2024, the story came to the AI industry.
OpenAI arguably set itself up for this moment more than those other tech giants. After all, it was established not as a traditional for-profit enterprise, but as a nonprofit research lab devoted to safely building an artificial general intelligence.
OpenAI's status as a relatively obscure nonprofit changed forever in November 2022. That's when it released ChatGPT, a chatbot based on the latest version of its large language model, which by some estimates soon became the fastest-growing consumer product in history.
ChatGPT took a technology that had been exclusively the province of nerds and put it in the hands of everyone from elementary school children to state-backed foreign influence operations. And OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity.
This OpenAI placed a premium on speed. It pushed the frontier forward with tools like plugins, which connected ChatGPT to the wider internet. It aggressively courted developers. Less than a year after ChatGPT's release, the company (a for-profit subsidiary of its nonprofit parent) was valued at $90 billion.
That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, for reasons related to governance.
The five-day interregnum between Altman's firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could endorse the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.
Almost immediately, it became clear that a vast majority of employees preferred working at a more traditional startup. Among other things, that startup's commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. The vast majority of OpenAI employees threatened to quit if Altman didn't return.
And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.
Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI had its share of conscientious objectors. And increasingly, were hearing what they think.
The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman's firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.
Then on Tuesday a new group of whistleblowers came forward to complain. Here's handsome podcaster Kevin Roose in the New York Times:
They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.
Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed. Kokotajlo's sole specific complaint in the article is that some employees believed Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.
But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)
"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," an OpenAI spokeswoman told the Times. "We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."
The company also created a whistleblower hotline for employees to anonymously voice their concerns.
So how should we think about this letter?
I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.
For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don't, I imagine it will provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.
As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven't happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that's a subject for another day.)
At the same time, there's no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn't much there to report.
We've seen plenty of problematic misuse of AI, particularly deepfakes in elections and in schools. (And of women in general.) And yet people who sign letters like the one released today fail to connect high-level hand-wringing about their companies to the products and policy decisions that their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development might actually look like in practice.
For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI's superalignment team and was reportedly fired for leaking in April, published a 165-page paper laying out a path from GPT-4 to superintelligence, the dangers it poses, and the challenge of aligning that intelligence with human intentions.
Weve heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.
"Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered," Aschenbrenner concludes. "As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."
And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they're seeing now, and in the open.
For more good posts every day, follow Casey's Instagram stories.
Send us tips, comments, questions, and situational awareness: casey@platformer.news and zoe@platformer.news.
Excerpt from:
What aren't the OpenAI whistleblowers saying? - Platformer