The Dark Side of AI: Financial Gains Lead to Oversight Evasion, Say Insiders – CMSWire

Posted: June 6, 2024 at 8:48 am

The Gist

Leading artificial intelligence companies avoid effective oversight because of financial incentives and operate without sufficient accountability to governments or industry standards, former and current employees said in a letter published today.

In other words, they get away with a lot, and that's not great news for a technology whose stated risks range up to human extinction.

"We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public," the group wrote in the letter titled, "A Right to Warn about Advanced Artificial Intelligence." "However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI pioneers Yoshua Bengio, Geoffrey Hinton and Stuart Russell.

While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it says those benefits come with serious risks.

"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the group wrote. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."

The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.

This isn't the first time Hilton has spoken publicly about his former company, and he was vocal again today on X.

Kokotajlo, who worked on OpenAI's governance team, quit last month and discussed his departure in a public forum as well. He said he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," or artificial general intelligence. Saunders, also on the governance team, departed along with Kokotajlo.

Wainwright's time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.

Related Article: Musk, Wozniak and Thousands of Others: 'Pause Giant AI Experiments'

Leading AI companies won't voluntarily give up critical information about the development of AI technologies, according to the group. For now, it falls to current and former employees, rather than governments, to hold these companies accountable to the public.

"Yet," the group wrote, "broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated."

These employees fear various forms of retaliation, given the history of such cases across the industry.

Related Article: OpenAI Names Sam Altman CEO 5 Days After It Fired Him

Here's the gist of what this group calls on leading AI companies to do:

AI companies should not:

AI companies should:

OpenAI had no public response to the group's letter. Its most recent post on X shared the company's report on deceptive uses of AI.

"OpenAI is committed to enforcing policies that prevent abuse and to improving transparency around AI-generated content," the company wrote May 30. "That is especially true with respect to detecting and disrupting covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them."
