
Category Archives: Artificial General Intelligence

Employees claim OpenAI, Google ignoring risks of AI and should give them ‘right to warn’ public – New York Post

Posted: June 6, 2024 at 8:48 am

A group of AI whistleblowers claimed that tech giants like Google and ChatGPT creator OpenAI are locked in a reckless race to develop technology that could endanger humanity, and demanded a "right to warn" the public in an open letter Tuesday.

Signed by current and former employees of OpenAI, Google DeepMind and Anthropic, the open letter cautioned that AI companies have strong financial incentives to avoid effective oversight and cited a lack of federal rules on developing advanced AI.

The workers point to potential risks including the spread of misinformation, worsening inequality and even loss of control of autonomous AI systems potentially resulting in human extinction, especially as OpenAI and other firms pursue so-called artificial general intelligence, with capacities on par with or surpassing the human mind.

"Companies are racing to develop and deploy ever more powerful artificial intelligence, disregarding the risks and impact of AI," former OpenAI employee Daniel Kokotajlo, one of the letter's organizers, said in a statement. "I decided to leave OpenAI because I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence."

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo added.

Kokotajlo, who joined OpenAI in 2022 as a researcher focused on charting AI advancements before leaving in April, has placed the probability that advanced AI will destroy or severely harm humanity in the future at a whopping 70%, according to the New York Times, which first reported on the letter.

He believes there's a 50% chance that researchers will achieve artificial general intelligence by 2027.

The letter drew endorsements from two prominent experts known as the "Godfathers of AI": Geoffrey Hinton, who warned last year that the threat of rogue AI was more urgent to humanity than climate change, and Canadian computer scientist Yoshua Bengio. Famed British AI researcher Stuart Russell also backed the letter.

The letter asks AI giants to commit to four principles designed to boost transparency and protect whistleblowers who speak out publicly.

Those include an agreement not to retaliate against employees who speak out about safety concerns and to support an anonymous system for whistleblowers to alert the public and regulators about risks.

The AI firms are also asked to support a culture of open criticism so long as no trade secrets are disclosed, and to pledge not to enter into or enforce non-disparagement or non-disclosure agreements.

As of Tuesday morning, the letter's signers include a total of 13 AI workers. Of that total, 11 are formerly or currently employed by OpenAI, including Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler.

"There should be ways to share information about risks with independent experts, governments, and the public," said Saunders. "Today, the people with the most knowledge about how frontier AI systems work and the risks related to their deployment are not fully free to speak because of possible retaliation and overly broad confidentiality agreements."

Other signers included former Google DeepMind employee Ramana Kumar and current employee Neel Nanda, who formerly worked at Anthropic.

When reached for comment, an OpenAI spokesperson said the company has a proven track record of not releasing AI products until necessary safeguards are in place.

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI said in a statement.

"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world," the company added.

Google and Anthropic did not immediately return requests for comment.

The letter was published just days after revelations that OpenAI had dissolved its Superalignment safety team, whose responsibilities included creating safety measures for artificial general intelligence (AGI) systems that could lead to the disempowerment of humanity or even human extinction.


Two OpenAI executives who led the team, co-founder Ilya Sutskever and Jan Leike, have since resigned from the company. Leike blasted the firm on his way out the door, claiming that safety had "taken a backseat to shiny products."

Elsewhere, former OpenAI board member Helen Toner, who was part of the group that briefly succeeded in ousting Sam Altman as the firm's CEO last year, alleged that he had repeatedly lied during her tenure.

Toner claimed that she and other board members did not learn about ChatGPT's launch in November 2022 from Altman and instead found out about its debut on Twitter.

OpenAI has since established a new safety oversight committee that includes Altman as it begins training the new version of the AI model that powers ChatGPT.

The company pushed back on Toner's allegations, noting that an outside review had determined that safety concerns were not a factor in Altman's removal.

Read this article:

Employees claim OpenAI, Google ignoring risks of AI and should give them 'right to warn' public - New York Post

Posted in Artificial General Intelligence | Comments Off on Employees claim OpenAI, Google ignoring risks of AI and should give them ‘right to warn’ public – New York Post

Former OpenAI researcher foresees AGI reality in 2027 – Cointelegraph

Posted: at 8:48 am

Leopold Aschenbrenner, a former safety researcher at ChatGPT creator OpenAI, has doubled down on artificial general intelligence (AGI) in his newest essay series on artificial intelligence.

Dubbed "Situational Awareness," the series offers a look at the state of AI systems and their promising potential over the next decade. The full series of essays is collected in a 165-page PDF file updated on June 4.

In the essays, the researcher paid specific attention to AGI, a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks. AGI is one of many different types of artificial intelligence, including artificial narrow intelligence (ANI) and artificial superintelligence (ASI).

"AGI by 2027 is strikingly plausible," Aschenbrenner declared, predicting that AGI machines will outpace college graduates by 2025 or 2026.

According to Aschenbrenner, AI systems could potentially possess intellectual capabilities comparable to those of a professional computer scientist. He also made another bold prediction: that AI labs would be able to train general-purpose language models within minutes.

Predicting the success of AGI, Aschenbrenner called on the community to face its reality. According to the researcher, the smartest people in the AI industry have converged on a perspective he calls "AGI realism," which is based on three foundational principles tied to the national security and AI development of the United States.

Related: Former OpenAI, Anthropic employees call for right to warn on AI risks

Aschenbrenner's AGI series comes shortly after he was reportedly fired for allegedly leaking information from OpenAI. He was also reportedly an ally of OpenAI chief scientist Ilya Sutskever, who participated in a failed effort to oust OpenAI CEO Sam Altman in 2023. Aschenbrenner's latest series is dedicated to Sutskever.

Aschenbrenner also recently founded an investment firm focused on AGI, with anchor investments from figures like Stripe CEO Patrick Collison, according to his blog.


See original here:

Former OpenAI researcher foresees AGI reality in 2027 - Cointelegraph

Posted in Artificial General Intelligence | Comments Off on Former OpenAI researcher foresees AGI reality in 2027 – Cointelegraph

AI Ambassadors: 3 Stocks Bridging the Gap Between Humanity and Machine – InvestorPlace

Posted: at 8:48 am

These companies are building the future of AI and the world we'll inhabit in coming years


Many people are predicting that computers will be smarter than their human creators as early as 2030. Known as artificial general intelligence, this would be the point at which humans are no longer the most intelligent things on the planet. Analysts and scientists say that within a decade, machines could be making decisions independent of human control. This is a brave new world with the potential for both positive and negative outcomes.

Regardless of how things shake out, there's no question that artificial intelligence (AI) will shape our collective future and is fast emerging as the dominant technology of our time. Certain companies are leading the charge and pushing the frontiers of AI, as well as working to infuse AI into all kinds of machines and technologies. Here are the AI ambassadors: three stocks bridging the gap between humanity and machine.


Adobe (NASDAQ:ADBE), the software company behind popular creative products such as Photoshop and Illustrator, is adding AI where it can. In recent months, Adobe has launched an AI assistant for both its Reader and Acrobat products. Management has called AI the new digital camera. The company is leading in terms of finding practical ways for AI to help humans enhance their creativity.

AI has proven to be a bit of a double-edged sword for Adobe. ADBE stock has been hurt by news that privately held OpenAI has developed Sora, an AI platform that can generate videos from written descriptions, similar to some Adobe products. The stock has also been dinged by news that Adobe canceled its planned $20 billion acquisition of design software start-up Figma and had to pay a $1 billion termination fee.

Despite near-term headwinds, Adobe stock should be fine in the long run. Investors might want to take advantage of the fact that ADBE stock is down 23% year to date.


Few companies are doing as much to enable the AI revolution as Taiwan Semiconductor Manufacturing Co. (NYSE:TSM). The microchip foundry currently makes about three-quarters of all the chips and semiconductors used in the world today. Most AI applications and models run on chips produced by TSMC, as the company is known. It is a highly specialized business that is red hot right now.

TSMC's services are so in demand that the U.S. government has given the company $6.6 billion to fund the construction of microchip fabrication plants in Arizona. TSMC is spending $65 billion to build three cutting-edge fabrication plants in Phoenix. The plants are expected to be operational in 2027 and will provide microchips to customers such as Nvidia (NASDAQ:NVDA) and Apple (NASDAQ:AAPL).

TSM stock has risen 50% so far in 2024.


As its main electric vehicle manufacturing business struggles, Tesla (NASDAQ:TSLA) is pivoting to focus on AI. CEO Elon Musk has pledged to spend $10 billion this year alone on AI and has moved the company to focus on both AI and robotics, with apparent plans to combine the two in the future. A few of the company's projects in this regard include a humanoid robot called Optimus and a supercomputer called Dojo.

Additionally, Musk has launched xAI, a separate company that aims to use AI to advance our collective understanding of the universe. Musk has said that xAI aims to compete with OpenAI and its various chatbots. Given the continued decline in Tesla's sales and market share, a shift toward AI, supercomputers, and humanoid robots might be the company's future.

Musk's enthusiasm for electric vehicles appears to be waning along with the public's interest. He seems much more focused on AI. TSLA stock has declined 28% on the year.

On the date of publication, Joel Baglole held a long position in NVDA. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Joel Baglole has been a business journalist for 20 years. He spent five years as a staff reporter at The Wall Street Journal, and has also written for The Washington Post and Toronto Star newspapers, as well as financial websites such as The Motley Fool and Investopedia.

Read the original:

AI Ambassadors: 3 Stocks Bridging the Gap Between Humanity and Machine - InvestorPlace

Posted in Artificial General Intelligence | Comments Off on AI Ambassadors: 3 Stocks Bridging the Gap Between Humanity and Machine – InvestorPlace

What Ever Happened to the AI Apocalypse? – New York Magazine

Posted: at 8:48 am


For a few years now, lots of people have been wondering what Sam Altman thinks about the future, or perhaps what he knows about it, as the CEO of OpenAI, the company that kicked off the recent AI boom. He's been happy to tell them about the end of the world. "If this technology goes wrong, it can go quite wrong," he told a Senate committee in May 2023. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he said last June. "A misaligned superintelligent AGI could cause grievous harm to the world," he wrote in a blog post on OpenAI's website that year.

Before the success of ChatGPT thrust him into the spotlight, he was even less circumspect. "AI will probably, like, most likely lead to the end of the world, but in the meantime, there'll be great companies," he cracked during an interview in 2015. "Probably AI will kill us all," he joked at an event in New Zealand around the same time; soon thereafter, he would tell a New Yorker reporter about his plans to flee there with friend Peter Thiel in the event of an apocalyptic event (either there or a big patch of land in Big Sur he could fly to). Then Altman wrote on his personal blog that superhuman machine intelligence is "probably the greatest threat to the continued existence of humanity." Returning, again, to last year: "The bad case, and I think this is important to say, is like lights out for all of us." He wasn't alone in expressing such sentiments. In his capacity as CEO of OpenAI, he signed his name to a group statement arguing that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," alongside a range of people in and interested in AI, including notable figures at Google, OpenAI, Microsoft, and xAI.

The tech industry's next big thing might be a doomsday machine, according to the tech industry, and the race is on to summon a technology that might end the world. It's a strange mixed message, to say the least, but it's hard to overstate how thoroughly the apocalypse, invoked as a serious worry or as a reflexive aside, has permeated the mainstream discourse around AI. Unorthodox thinkers and philosophers have seen longstanding theories and concerns about superintelligence get mainstream consideration. But the end of the world has also become product-event material, fundraising fodder. In discussions about artificial intelligence, acknowledging the outside chance of ending human civilization has come to resemble a tic. On AI-startup websites, the prospect of human annihilation appears as boilerplate.

In the last few months, though, companies including OpenAI have started telling a slightly different story. After years of warning about infinite downside risk, and acting as though they had no choice but to take it, they're focusing on the positive. The doomsday machine we're working on? Actually, it's a powerful enterprise software platform. From the Financial Times:

The San Francisco-based company said on Tuesday that it had started producing a new AI system to bring us to the next level of capabilities and that its development would be overseen by a new safety and security committee.

But while OpenAI is racing ahead with AI development, a senior OpenAI executive seemed to backtrack on previous comments by its chief executive Sam Altman that it was ultimately aiming to build a superintelligence far more advanced than humans.

Anna Makanju, OpenAI's vice-president of global affairs, told the Financial Times in an interview that its mission was to build artificial general intelligence capable of "cognitive tasks that are what a human could do today."

"Our mission is to build AGI; I would not say our mission is to build superintelligence," Makanju said.

The story also notes that in November, in the context of seeking more money from OpenAI partner Microsoft, Altman said he was spending a lot of time thinking about how to build superintelligence, but also, more gently, that his company's core product was, rather than a fearsome self-replicating software organism with unpredictable emergent traits, a form of "magic intelligence in the sky."

Shortly after that statement, Altman would be temporarily ousted from OpenAI by a board that deemed him not sufficiently candid, a move that triggered external speculation that a major AI breakthrough had spooked safety-minded members. (More recent public statements from former board members were forceful but personal, accusing Altman of a pattern of lying and manipulation.)

After his return, Altman consolidated his control of the company, and some of his internal antagonists left or were pushed out. OpenAI then dissolved the team charged with achieving superalignment (in the company's words, managing risks that could lead to "the disempowerment of humanity or even human extinction") and replaced it with a new safety team run by Altman himself, who also stood accused of voice theft by Scarlett Johansson. Its safety announcement was terse and notably lacking in evocative doomsaying. "This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations," the company said. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment." It's the sort of careful, vague corporate language you might expect from a company that's comprehensively dependent on one tech giant (Microsoft) and is closing in on a massive licensing deal with its competitor (Apple).

In other news, longtime AI doomsayer Elon Musk, who co-founded OpenAI but split with the firm and later (incoherently and perhaps disingenuously) sued it for abandoning its nonprofit mission in pursuit of profit, raised $6 billion for his unapologetically for-profit competitor xAI. His grave public warnings about superintelligence now take the form of occasional X posts about memes.

There are a few different ways to process this shift. If you're deeply worried about runaway AI, this is just a short horror story in which a superintelligence is manifesting itself right in front of our eyes, helped along by the few who both knew better and were in any sort of position to stop it, in some sort of short-sighted exchange for wealth. What's happened so far is basically compatible with your broad prediction and well-articulated warnings that far predated the current AI boom: All it took for mankind to summon a vengeful machine god was the promise of ungodly sums of money.

Similarly, if you believe in and are excited about runaway AI, this is all basically great. The system is working, the singularity is effectively already here, and failed attempts to alter or slow AI development were, in fact, near misses with another sort of disaster (this perspective exists among at least a few people at OpenAI).

If you're more skeptical of AI-doomsday predictions, you might generously credit this shift to a gradual realization among industry leaders that current generative-AI technology, now receiving hundreds of billions of dollars of investment and deployed in the wild at scale, is not careening toward superintelligence, consciousness, or rogue malice. They're simply adjusting their story to fit the facts of what they're seeing.

Or maybe, for at least some in the industry, apocalyptic stories were plausible in the abstract, compelling, attention-grabbing, and interesting to talk about, and turned out to be useful marketing devices. They were stories that dovetailed nicely with the concerns of some of the domain experts the companies needed to recruit, but that seemed like harmless and ultimately cautious intellectual exercises to those who didn't share them (Altman, it should be noted, is an investor and executive, not a machine-learning engineer or AI researcher). Apocalyptic warnings were an incredible framing device for a class of companies that needed to raise enormous amounts of money to function: a clever and effective way to make an almost cartoonishly brazen proposal to investors (we are the best investment of all time, with infinite upside) in the disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital. Routine acknowledgments of abstract danger were also useful for feigning openness to theoretical regulation (help us help you avoid the end of the world!) while fighting material regulation in private. They raised the stakes to intoxicating heights.

As soon as AI companies made actual contact with users, clients, and the general public, though, this apocalyptic framing flipped into a liability. It suggested risk where risk wasn't immediately evident. In a world where millions of people engage casually with chatbots, where every piece of software suddenly contains an awkward AI assistant, and where Google is pumping AI content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, somewhat counterintuitively, feel a bit like a non sequitur. Encounters with modern chatbots and LLM-powered software might cause users to wonder about their jobs, or trigger a general sense of wonder or unease about the future; they do not, in their current state, seem to strike fear in users' hearts. Mostly, they're showing up as new features in old software used at work.

The AI industry's sudden disinterest in the end of the world might also be understood as an exaggerated version of corporate America's broader turn away from talking about ESG and DEI: as profit-driven, sure, but also as evidence that initial commitments to mitigating harmful externalities were themselves disingenuous and profit-motivated at the time, and simply outlived their usefulness as marketing stories. It signals a loss of narrative control. In 2022, OpenAI could frame the future however it wanted. In 2024, it's dealing with external expectations about the present, from partners and investors that are less interested in speculating about the future of mankind, or in conceptualizing intelligence, than they are in getting returns on their considerable investments, preferably within the fiscal year.

Again, none of this is particularly comforting if you think that Altman and Musk were right to warn about ending the world, even by accident, even out of craven self-interest, or if you're concerned about the merely very bad externalities: the many small apocalypses that AI deployment is already producing and is likely to produce.

But AI's sudden rhetorical downgrade might be clarifying, too, at least about the behaviors of the largest firms and their leaders. If OpenAI starts communicating more like a company, it will be less tempting to mistake it for something else, as it argues for the imminence of a benign but barely less speculative variation of AGI, with its softer implication of infinite returns by way of semi-apocalyptic workplace automation. If its current leadership ever believed what they were saying, they're certainly not acting like it, and in hindsight, they never really were. The apocalypse was just another pitch. Let it be a warning about the next one.


More here:

What Ever Happened to the AI Apocalypse? - New York Magazine

Posted in Artificial General Intelligence | Comments Off on What Ever Happened to the AI Apocalypse? – New York Magazine

What aren’t the OpenAI whistleblowers saying? – Platformer

Posted: at 8:48 am

Eleven current and former employees of OpenAI, along with two more from Google DeepMind, posted an open letter today stating that they are unable to voice concerns about risks created by their employers due to confidentiality agreements. Today let's talk about what they said, what they left out, and why lately the AI safety conversation feels like it's going nowhere.

Here's a dynamic we've seen play out a few times now at companies including Meta, Google, and Twitter. First, in a bid to address potential harms created by their platforms, companies hire idealistic workers and charge them with building safeguards into their systems. For a while, the work of these teams gets prioritized. But over time, executives' enthusiasm wanes, commercial incentives take over, and the team is gradually de-funded.

When those roadblocks go up, some of the idealistic employees will speak out, either to a reporter like me, or via the sort of open letter that the AI workers published today. And the company responds by reorganizing the team out of existence, while putting out a statement saying that whatever that team used to work on is now everyone's responsibility.

At Meta, this process gave us the whistleblower Frances Haugen. On Google's AI ethics team, a slightly different version of the story played out after the firing of researcher Timnit Gebru. And in 2024, the story came to the AI industry.

OpenAI arguably set itself up for this moment more than those other tech giants. After all, it was established not as a traditional for-profit enterprise, but as a nonprofit research lab devoted to safely building an artificial general intelligence.

OpenAI's status as a relatively obscure nonprofit changed forever in November 2022. That's when it released ChatGPT, a chatbot based on the latest version of its large language model, which by some estimates soon became the fastest-growing consumer product in history.

ChatGPT took a technology that had been exclusively the province of nerds and put it in the hands of everyone from elementary school children to state-backed foreign influence operations. And OpenAI soon barely resembled the nonprofit that was founded out of a fear that AI poses an existential risk to humanity.

This OpenAI placed a premium on speed. It pushed the frontier forward with tools like plugins, which connected ChatGPT to the wider internet. It aggressively courted developers. Less than a year after ChatGPT's release, the company, a for-profit subsidiary of its nonprofit parent, was valued at $90 billion.

That transformation, led by CEO Sam Altman, gave many in the company whiplash. And it was at the heart of the tensions that led the nonprofit board to fire Altman last year, for reasons related to governance.

The five-day interregnum between Altman's firing and his return marked a pivotal moment for the company. The board could have recommitted to its original vision of slow, cautious development of powerful AI systems. Or it could have endorsed the post-ChatGPT version of OpenAI, which closely resembled a traditional Silicon Valley venture-backed startup.

Almost immediately, it became clear that a vast majority of employees preferred working at a more traditional startup. Among other things, that startup's commercial prospects meant that their (unusual) equity in the company would be worth millions of dollars. The vast majority of OpenAI employees threatened to quit if Altman didn't return.

And so Altman returned. Most of the old board left. New, more business-minded board members replaced them. And that board has stood by Altman in the months that followed, even as questions mount about his complex business dealings and conflicts of interest.

Most employees seem content under the new regime; positions at OpenAI are still highly sought after. But like Meta and Google before it, OpenAI had its share of conscientious objectors. And increasingly, were hearing what they think.

The latest wave began last month when OpenAI co-founder Ilya Sutskever, who initially backed Altman's firing and who had focused on AI safety efforts, quit the company. He was followed out the door by Jan Leike, who led the superalignment team, and a handful of other employees who worked on safety.

Then on Tuesday a new group of whistleblowers came forward to complain. Here's handsome podcaster Kevin Roose in the New York Times:

They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the group's organizers.

Anyone looking for jaw-dropping allegations from the whistleblowers will likely leave disappointed. Kokotajlo's sole specific complaint in the article is that some employees believed Microsoft had released a new version of GPT-4 in Bing without proper testing; Microsoft denies that this happened.

But the accompanying letter offers one possible explanation for why the charges feel so thin: employees are forbidden from saying more by various agreements they signed as a condition of working at the company. (The company has said it is removing some of the more onerous language from its agreements, after Vox reported on them last month.)

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," an OpenAI spokeswoman told the Times. "We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

The company also created a whistleblower hotline for employees to anonymously voice their concerns.

So how should we think about this letter?

I imagine that it will be a Rorschach test for whoever reads it, and what they see will depend on what they think of the AI safety movement in general.

For those who believe that AI poses existential risk, I imagine this letter will provide welcome evidence that at least some employees inside the big AI makers are taking those risks seriously. And for those who don't, I imagine it will provide more ammunition for the argument that the AI doomers are once again warning about dire outcomes without providing any compelling evidence for their beliefs.

As a journalist, I find myself naturally sympathetic to people inside companies who warn about problems that haven't happened yet. Journalism often serves a similar purpose, and every once in a while, it can help prevent those problems from occurring. (This can often make the reporter look foolish, since they spent all that time warning about a scenario that never unfolded, but that's a subject for another day.)

At the same time, there's no doubt that the AI safety argument has begun to feel a bit tedious over the past year, when the harms caused by large language models have been funnier than they have been terrifying. Last week, when OpenAI put out the first account of how its products are being used in covert influence operations, there simply wasn't much there to report.

We've seen plenty of problematic misuse of AI, particularly deepfakes in elections and in schools. (And of women in general.) And yet people who sign letters like the one released today fail to connect high-level hand-wringing about their companies to the products and policy decisions that their companies make. Instead, they speak through opaque open letters that have surprisingly little to say about what safe development might actually look like in practice.

For a more complete view of the problem, I preferred another (and much longer) piece of writing that came out Tuesday. Leopold Aschenbrenner, who worked on OpenAI's superalignment team and was reportedly fired for leaking in April, published a 165-page paper today laying out a path from GPT-4 to superintelligence, the dangers it poses, and the challenge of aligning that intelligence with human intentions.

We've heard a lot of this before, and the hypotheses remain as untestable (for now) as they always have. But I find it difficult to read the paper and not come away believing that AI companies ought to prioritize alignment research, and that current and former employees ought to be able to talk about the risks they are seeing.

"Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered," Aschenbrenner concludes. "As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty."

And if those who feel the weight of what is coming work for an AI company, it seems important that they be able to talk about what they're seeing now, and in the open.

For more good posts every day, follow Casey's Instagram stories.


Send us tips, comments, questions, and situational awareness: casey@platformer.news and zoe@platformer.news.

Excerpt from:

What aren't the OpenAI whistleblowers saying? - Platformer

Posted in Artificial General Intelligence | Comments Off on What aren’t the OpenAI whistleblowers saying? – Platformer

Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? – The New York Times

Posted: at 8:48 am

The advent of A.I., artificial intelligence, is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?

In "Can We Have Pro-Worker A.I.? Choosing a Path of Machines in Service of Minds," three economists at M.I.T., Daron Acemoglu, David Autor and Simon Johnson, looked at this epochal innovation last year:

The private sector in the United States is currently pursuing a path for generative A.I. that emphasizes automation and the displacement of labor, along with intrusive workplace surveillance. As a result, disruptions could lead to a potential downward cascade in wage levels, as well as inefficient productivity gains.

Before the advent of artificial intelligence, automation was largely limited to blue-collar and office jobs using digital technologies while more complex and better-paying jobs were left untouched because they require flexibility, judgment and common sense.

Now, Acemoglu, Autor and Johnson wrote, A.I. presents a direct threat to those high-skill jobs: "A major focus of A.I. research is to attain human parity in a vast range of cognitive tasks and, more generally, to achieve artificial general intelligence that fully mimics and then surpasses capabilities of the human mind."

The three economists make the case that

There is no guarantee that the transformative capabilities of generative A.I. will be used for the betterment of work or workers. The bias of the tax code, of the private sector generally, and of the technology sector specifically, leans toward automation over augmentation.

But there are also potentially powerful A.I.-based tools that can be used to create new tasks, boosting expertise and productivity across a range of skills. To redirect A.I. development onto the human-complementary path requires changes in the direction of technological innovation, as well as in corporate norms and behavior. This needs to be backed up by the right priorities at the federal level and a broader public understanding of the stakes and the available choices. We know this is a tall order.

Tall is an understatement.

In an email elaborating on the A.I. paper, Acemoglu contended that artificial intelligence has the potential to improve employment prospects rather than undermine them:

It is quite possible to leverage generative A.I. as an informational tool that enables various different types of workers to get better at their jobs and perform more complex tasks. If we are able to do this, this would help create good, meaningful jobs, with wage growth potential, and may even reduce inequality. Think of a generative A.I. tool that helps electricians get much better at diagnosing complex problems and troubleshoot them effectively.

"This, however, is not where we are heading," Acemoglu continued:

The preoccupation of the tech industry is still automation and more automation, and the monetization of data via digital ads. To turn generative A.I. pro-worker, we need a major course correction, and this is not something that's going to happen by itself.

Acemoglu pointed out that unlike the regional trade shock that decimated manufacturing employment after China entered the World Trade Organization in 2001, "The kinds of tasks impacted by A.I. are much more broadly distributed in the population and also across regions." In other words, A.I. threatens employment at virtually all levels of the economy, including well-paid jobs requiring complex cognitive capabilities.

Four technology specialists (Tyna Eloundou and Pamela Mishkin, both on the staff of OpenAI, along with Sam Manning, a research fellow at the Centre for the Governance of A.I., and Daniel Rock at the University of Pennsylvania) provided a detailed case study on the employment effects of artificial intelligence in their 2023 paper, "GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models."

More here:

Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? - The New York Times

Posted in Artificial General Intelligence | Comments Off on Opinion | Will A.I. Be a Creator or a Destroyer of Worlds? – The New York Times

AGI in Less Than 5 years, Says Former OpenAI Employee – 99Bitcoins

Posted: at 8:48 am

Dig into the latest AI Crypto news as we explore the future of artificial general intelligence (AGI) and uncover its potential to surpass human abilities, according to Leopold Aschenbrenner's essay.

AGI in less than 5 years. How do you intend to spend your last few years alive? (kidding).

The internet's on fire after former OpenAI safety researcher Leopold Aschenbrenner unleashed "Situational Awareness," a no-holds-barred essay series on the future of Artificial General Intelligence.

It is 165 pages long and fresh as of June 4. It examines where AI stands now and where it's headed.


In some ways, this is: LINE GOES UP OOOOOOOOOOH ITS HAPPENING ITS HAPPENING.

It's reminiscent, in some ways, of an old Simpsons joke.

But Aschenbrenner envisions AGI systems becoming smarter than you or me by the decade's end, ushering in an era of true superintelligence. Alongside this rapid advancement, he warns of significant national security implications not seen in decades.

"AGI by 2027 is strikingly plausible," Aschenbrenner claims, suggesting that AGI machines will outperform college graduates by 2025 or 2026. To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in a minute.

Aschenbrenner urges the AI community to adopt what he terms "AGI realism," a viewpoint grounded in three core principles related to national security and AI development in the U.S.

He argues that the industry's smartest minds, like Ilya Sutskever, who famously failed to unseat CEO Sam Altman in 2023, are converging on this perspective, acknowledging the imminent reality of AGI.

Aschenbrenner's latest insights follow his controversial exit from OpenAI amid accusations of leaking info.


On Tuesday, a dozen-plus staffers from AI heavyweights like OpenAI, Anthropic, and Google's DeepMind raised red flags over AGI.

Their open letter cautions that without extra protections, AI might become an existential threat.

"We believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter states. "We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."

The letter takes aim at AI giants for dodging oversight in favor of fat profits. DeepMind's Neel Nanda broke ranks as the only internal researcher to endorse the letter.

AI is quickly becoming a battleground, but the message of the letter is simple: Don't punish employees for speaking out on AI dangers.

On the one hand, it can be scary to think that human creativity and the boundaries of thought are being closed in by politically correct code monkeys tinkering with matrix multiplication.

On the other, the power of artificial intelligence is currently incomprehensible because it is unlike anything we have understood before.

It could be a revolution, just as when the first man discovered the spark or the spinning of a stone wheel: one moment, it didn't exist, and the next, it changed the face of humanity. We'll see.



Continue reading here:

AGI in Less Than 5 years, Says Former OpenAI Employee - 99Bitcoins

Posted in Artificial General Intelligence | Comments Off on AGI in Less Than 5 years, Says Former OpenAI Employee – 99Bitcoins

The 3 phases of AI evolution that could play out this century – Big Think

Posted: at 8:48 am

Excerpted from Our Next Reality (2024), Nicholas Brealey Publishing. Reprinted with permission. This article may not be reproduced for any other use without permission.

It's clear there's a lot of fear and misinformation about the risks and role of AI and the metaverse in our society going forward. It may be helpful to take a three-phase view of how to approach the problem. In the next 1-10 years, we should look at AI as tools to support our lives and our work, making us more efficient and productive. In this period, the proto-metaverse will be the spatial computing platform where we go to learn, work, and play in more immersive ways.

In the following 11-50 years, as more and more people are liberated from the obligation of employment, we should look at AI as our patron, which supports us as we explore our interests in arts, culture, and science, or whatever field we want to pursue. Most will also turn to the metaverse as a creative playground for expression, leisure, and experimentation. In the third phase, after 50+ years (if not sooner), I would expect the world's many separate AGI (artificial general intelligence) systems will have converged into a single ASI (artificial superintelligence) with the wisdom to unite the world's approximately 200 nations and help us manage a peaceful planet, with all its citizens provided for and given the choice of how they want to contribute to society.

At this point, the AI system will have outpaced our biological intelligence and limitations, and we should find ways to deploy it outside our solar system and spread intelligent life into all corners of the galaxy and beyond. At this third stage, we should view AI as our children, for these AI beings will all have a small part of us in them.

Just like we possess in our genes a small part of all the beings that preceded us in the tree of life. They will henceforth be guided by all the memes humans have created and compiled throughout our history, from our morals and ethics to our philosophy and arts. The metaverse platform will then become an interface for us to explore and experience the far reaches of the Universe together with our children, although our physical bodies may still be on Earth. Hopefully, these children will view us as their honorable ancestors and treat us the way Eastern cultures treat their elderly, with respect and care. As with all children, they will learn their values and morals by observing us. It's best we start setting a better example for them by treating each other as we would like AIs to treat us in the future.

Of course, the time frames above are only estimates, so things could happen faster or slower than described, but the phases will likely occur in that order, if we are able to sustainably align future AGI/ASI systems. If, for some reason, we are not able to align AGI/ASI, or if these systems are misused by bad actors to catastrophic ends, then the future could be quite dark.

I must, however, reiterate that my biggest concerns have always been around the risk of misuse of all flavors of AI by bad-actor humans (vs. an evil AGI), and we need to do all in our power to prevent those scenarios. On the other hand, I've become increasingly confident that any superintelligent being we create will more likely be innately ethical and caring, rather than aggressive and evil.


Carl Jung said, "The more you understand psychology, the less you tend to blame others for their actions." I think we can all attest that there is truth in this statement simply by observing our own mindset when interacting with young children. Remember the last time you played a board game with kids; did you do everything possible to crush them and win? Of course not. When we don't fear something, we gain added patience and understanding. Well, the ASI we are birthing won't just understand psychology fully, but all arts, sciences, history, ethics, and philosophy. With that level of wisdom, it should be more enlightened than any possible human, and attain a level of understanding we can't even imagine.

A 2022 paper from a group of respected researchers in the space also found linkages between compassion and intelligence. In July 2023, Elon Musk officially entered the AI race with a new company called xAI, and the objective function of its foundational model is simply stated as "understand the Universe." So it seems he shares my view that giving AI innate curiosity and a thirst for knowledge can help bring forth some level of increased alignment. Thus, you can understand why I reserve my fear mainly for our fellow man. Still, it certainly couldn't hurt if we all started to set a better example for our budding prodigy and continue to investigate more direct means to achieve sustainable alignment.


There are many today who are calling for the end of civilization or even the end of humans on Earth due to recent technological progress. If we take the right calculated actions in the coming decade, it could very well be the beginning of a new age of prosperity for mankind and all life everywhere. We are near the end of something. We are near the end of the hundred-thousand-year ignorance and aimless toil phase of the Anthropocene epoch and will soon turn the page to start a new age of enlightenment far beyond our dreams.

When we do find a solution for AI alignment, and peacefully transition our world to the next phase of progress, the societal benefits will be truly transformational. It could lead us to an exponential increase in human understanding and capabilities. It will bring near-infinite productivity and limitless clean energy to the world. The inequality, health, and climate issues that plague the world today could disappear within a relatively short period. And we can start to think more about plans at sci-fi time scales, to go boldly where no one has gone before.

Read this article:

The 3 phases of AI evolution that could play out this century - Big Think

Posted in Artificial General Intelligence | Comments Off on The 3 phases of AI evolution that could play out this century – Big Think

Can AI ever be smarter than humans? | Context – Context

Posted: at 8:48 am

What's the context?

"Artificial general intelligence" (AGI) - the benefits, the risks to security and jobs, and is it even possible?

LONDON - When researcher Jan Leike quit his job at OpenAI last month, he warned the tech firm's "safety culture and processes (had) taken a backseat" while it trained its next artificial intelligence model.

He voiced particular concern about the company's goal to develop "artificial general intelligence", a supercharged form of machine learning that it says would be "smarter than humans".

Some industry experts say AGI may be achievable within 20 years, but others say it will take many decades, if it happens at all.

But what is AGI, how should it be regulated and what effect will it have on people and jobs?

OpenAI defines AGI as a system "generally smarter than humans". Scientists disagree on what this exactly means.

"Narrow" AI includes ChatGPT, which can perform a specific, singular task. This works by pattern matching, akin to putting together a puzzle without understanding what the pieces represent, and without the ability to count or complete logic puzzles.

"The running joke, when I used to work at Deepmind (Google's artificial intelligence research laboratory), was AGI is whatever we don't have yet," Andrew Strait, associate director of the Ada Lovelace Institute, told Context.

IBM has suggested that artificial intelligence would need at least seven critical skills to reach AGI, including visual and auditory perception, making decisions with incomplete information, and creating new ideas and concepts.

Narrow AI is already used in many industries, but has been responsible for many issues, like lawyers citing "hallucinated" - made up - legal precedents and recruiters using biased services to check potential employees.

AGI still lacks definition, so experts find it difficult to describe the risks that it might pose.

It is possible that AGI will be better at filtering out bias and incorrect information, but it is also possible new problems will arise.

One "very serious risk", Strait said, was an over-reliance on the new systems, "particularly as they start to mediate more sensitive human-to-human relationships".

AI systems also need huge amounts of data to train on and this could result in a massive expansion of surveillance infrastructure. Then there are security risks.

"If you collect (data), it's more likely to get leaked," Strait said.

There are also concerns over whether AI will replace human jobs.

Carl Frey, a professor of AI and work at the Oxford Internet Institute, said an AI apocalypse was unlikely and that "humans in the loop" would still be needed.

But there may be downward pressure on wages and middle-income jobs, especially with developments in advanced robotics.

"I don't see a lot of focus on using AI to develop new products and industries in the ways that it's often being portrayed. All applications boil down to some form of automation," Frey told Context.

As AI develops, governments must ensure there is competition in the market, as there are significant barriers to entry for new companies, Frey said.

There also needs to be a different approach to what the economy rewards, he added. It is currently in the interest of companies to focus on automation and cut labour costs, rather than create jobs.

"One of my concerns is that the more we emphasise the downsides, the more we emphasise the risks with AI, the more likely we are to get regulation, which means that we restrict entry and that we solidify the market position of incumbents," he said.

Last month, the U.S. Department of Homeland Security announced a board comprised of the CEOs of OpenAI, Microsoft, Google, and Nvidia to advise the government on AI in critical infrastructure.

"If your goal is to minimise the risks of AI, you don't want open source. You want a few incumbents that you can easily control, but you're going to end up with a tech monopoly," Frey said.

AGI does not have a precise timeline. Jensen Huang, the chief executive of Nvidia, predicts that today's models could advance to the point of AGI within five years.

Huang's definition of AGI would be a program that can improve on human logic quizzes and exams by 8%.

OpenAI has indicated that a breakthrough in AI is coming soon with Q* (pronounced Q-Star), a secretive project reported in November last year.

Microsoft researchers have said that GPT-4, one of OpenAI's generative AI models, has "sparks of AGI". However, it does not "(come) close to being able to do anything that a human can do", nor does it have "inner motivation and goals" - another key aspect in some definitions of AGI.

But Microsoft President Brad Smith has rejected claims of a breakthrough.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said in November.

Frey suggested there would need to be significant innovation to get to AGI, due to both limitations in hardware and the amount of training data available.

"There are real question marks around whether we can develop AI on the current path. I don't think we can just scale up existing models (with) more compute, more data, and get to AGI."

Read more from the original source:

Can AI ever be smarter than humans? | Context - Context

Posted in Artificial General Intelligence | Comments Off on Can AI ever be smarter than humans? | Context – Context

Roundup: AI and the Resurrection of Usability – Substack

Posted: at 8:48 am


May's product announcements by Google and OpenAI emphasized a number of embedded audio and visual capabilities, but none were more important than the faster processing times and conversational voice interfaces they enable. With these products, AI takes a dramatic step toward an almost natural interaction between users and systems, and that's critical to improving usability and, along with it, effectiveness, adoption and business results.

This isn't so much about AI as it is about UI. Users are sure to see natural language, and more important, spoken natural language, as a simpler, more flexible and more useful approach to advanced technology. As AI Business wrote, these products shift the focus toward the consumer rather than the loftier goal of developing artificial general intelligence, or an AI that thinks like a human.

This is important because usability directly affects business results. The great majority of enterprise software errors, 92%, are related to users, design or processes, according to research by Knoa Software. When users have less to learn about how a UI works, they complete their work more quickly and efficiently while requiring less training and support.

And today's UIs do require a certain amount of knowledge in order to make them do anything: this button prints a document, that button saves work to disk, for example. But the rules governing even simple tasks vary. To close a window, the Mac's operating system requires clicking a red dot in a window's top left corner; Windows wants you to click an X in the top right. To capture a screenshot, Mac users enter Command-Shift-3, while Windows users enter Windows key-Shift-S.

By introducing more sophisticated voice interfaces, AI developers are moving toward an environment where each user can develop their own approach. To resume a paused video, one user might say "OK, keep going," while another says "resume play." Some might say "set a timer for 45 minutes" while others say "wake me at 2:45."
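To make the idea concrete, here is a minimal, hypothetical sketch of how several phrasings might be normalized to a single intent. The intent names, example phrases, and simple word-overlap matching below are illustrative assumptions only, not how Google's or OpenAI's assistants actually work.

```python
# Illustrative sketch: map free-form voice commands to a single intent.
# The intents, phrases, and matching logic are hypothetical examples,
# not the implementation used by any specific product.
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    examples: list  # representative phrasings a user might say

INTENTS = [
    Intent("resume_playback", ["ok keep going", "resume play", "continue the video"]),
    Intent("set_timer", ["set a timer for 45 minutes", "wake me at 2:45"]),
]

def words(text: str) -> set:
    # Lowercase and strip basic punctuation before comparing word sets.
    return set(text.lower().replace(",", "").replace(".", "").split())

def match_intent(utterance: str) -> str:
    """Pick the intent whose example phrasings share the most words with the utterance."""
    spoken = words(utterance)
    best_name, best_score = "unknown", 0
    for intent in INTENTS:
        score = max(len(spoken & words(example)) for example in intent.examples)
        if score > best_score:
            best_name, best_score = intent.name, score
    return best_name

if __name__ == "__main__":
    print(match_intent("OK. Keep going"))              # resume_playback
    print(match_intent("set a timer for 45 minutes"))  # set_timer
```

Real assistants rely on speech recognition and far more capable language models than this toy matcher, but the design goal is the same: the user should not have to learn the system's phrasing; the system learns to accept the user's.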

When it comes to increasing efficiency, every little bit helps.

Last edition, we asked about brand recognition in AI. Sorry, Google, but OpenAI got the most attention, named by 75% of our readers. The rest said they followed somebody else.

AI: Perception and Reality with Author and Analyst Geoff Webb [Podcast]

Author, industry leader and analyst Geoff Webb discusses AI and how vendors market it, along with reskilling and some of the unexpected challenges that go along with it. [WorkforceAI]

AI and Assessments, With PDRI's Elaine Pulakos [Podcast]

I speak with Elaine Pulakos, the CEO of PDRI by Pearson. She spends a lot of time thinking about AI and its impact on assessments. There are a lot of questions in the space, covering everything from AI's role in best practices to how it can help develop assessments themselves. We cover that and more. [WorkforceAI]

Why SMBs are Good Prospects for the Right AI Solutions

Large employers get most of the attention, but small businesses present a solid opportunity for solutions providers who are developing AI products. For one thing, SMBs are heavy users of technology. Nearly all of America's small companies have put at least one technology platform to use. Plus, the market has room for growth. Although they recognize the value of AI, most SMBs have yet to jump in. [WorkforceAI]

HR Departments Believe in AI, But Most Have Yet to Adopt

More than two-thirds of HR professionals have yet to adopt AI, and only a third believe they understand how they could incorporate the technology into their work, according to research by Brightmine. Nearly a quarter of HR departments haven't been involved in discussions with executives about the technology and its use. The company says demystification is needed. [WorkforceAI]

Generative AI Leads in Business Adoption

Generative AI is the most frequently deployed AI solution in business, Gartner said, but concerns about measuring and demonstrating its value are a major barrier to adoption. Embedding generative AI into existing applications is the most-often-used approach to leveraging the technology, with some 34% of respondents calling it their primary way of using AI. [WorkforceAI]

If you have news to share, send a press release or email to our editors at news@workforceai.news.


Continue reading here:

Roundup: AI and the Resurrection of Usability - Substack

Posted in Artificial General Intelligence | Comments Off on Roundup: AI and the Resurrection of Usability – Substack
