OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman … – Fortune

A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman's recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization's board warning of a discovery that could potentially threaten the human race.

The two anonymous individuals claim this letter, which informed directors that a secret project named Q* resulted in A.I. solving grade-school-level mathematics, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

"Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward," said Altman at a discussion during the Asia-Pacific Economic Cooperation summit.

He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI's chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as the letter that was sent to the board.

OpenAI could not immediately be reached by Fortune for a statement, and it declined to comment to Reuters.

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades, going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer through a series of deterministic commands, the same binary logic all personal computers employ, in which values can only be true or false, 1 or 0.

Under this rigid binary system, such machines have no capacity to diverge from their programming in order to think creatively.

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, on massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.

Think of Google's helpful Autocomplete function, which aims to predict what an internet user is searching for using statistical probability; this is a very rudimentary form of generative AI.

That's why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as "probabilistic engines designed to spit out what seems plausible."
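The "probabilistic engine" idea can be illustrated with a toy model. The sketch below is not how ChatGPT or Google Autocomplete actually work; it is a minimal bigram predictor, with a made-up corpus, that shows what it means to pick the statistically most likely next word rather than compute a single deterministic answer.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus; any real system trains on vastly more text.
corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

The prediction is plausible, not reasoned: the model has no notion of truth, only of frequency, which is why correct mathematical reasoning would mark a qualitative step beyond it.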

Should generative A.I. prove able to arrive at the correct solution to mathematical problems on its own, it suggests a capacity for higher reasoning.

This could potentially be the first step towards developing artificial general intelligence, a form of AI that can surpass humans.

The fear is that, without guardrails, an AGI might one day come to view humanity as a threat to its existence.
