OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics

OpenAI, the tech company behind ChatGPT, has announced that it's formed a Safety and Security Committee that's intended to make the firm's approach to AI more responsible and consistent in terms of security.

It's no secret that OpenAI and CEO Sam Altman - who will sit on the committee - want to be the first to reach AGI (Artificial General Intelligence), broadly understood as AI that resembles human-level intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing multiple modes) generative AI model, able to take input and respond in audio, text, and images. It was met with a generally positive reception, but discussion has since grown around its actual capabilities, its implications, and the ethics of technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures, including OpenAI co-founder and chief scientist Ilya Sutskever and Jan Leike, co-lead of the AI safety superalignment team. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it's formed the oversight committee in response. In the announcement post, OpenAI also states that it welcomes a robust debate at this important moment. The committee's first job will be to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, and then share recommendations with the company's board.

Whichever recommendations are subsequently adopted will then be shared publicly in a manner that is consistent with safety and security.

The committee will be made up of chairman Bret Taylor, Quora CEO Adam D'Angelo, and Nicole Seligman, a former Sony Entertainment executive, alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, an OpenAI co-founder and researcher. According to Bloomberg, OpenAI stated that it will also consult external experts as part of the process.


I'll reserve my judgment for when OpenAI's adopted recommendations are published and I can see how they're implemented, but intuitively, I don't have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as it's trying to win the AI race.

That's a shame, because generally speaking, those striving to be the best no matter what are often slow to consider the costs and effects of their actions, and how those actions might impact others in a very real way - even when large numbers of people stand to be affected.

I'll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they're in the AI race or not, would prioritize the ethics and safety of what they're doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I'm standing, and unless there are real consequences, I don't see companies like OpenAI being swayed to change their overall ethos or behavior.

