At the heart of the threat is what's called the alignment problem: the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale the problem deserves.
There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem critical to national security, one nobody was sure could be solved: how to harness nuclear power to build a weapon.
Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as destruction that is merely difficult to anticipate.
Don't just take it from me. Expert opinion differs only over whether the risks from AI are unprecedentedly large or literally existential.
Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.
That may sound like science fiction, but it's a reality rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.
As little as two years ago, the forecasting platform Metaculus put the likely arrival of "weak" artificial general intelligence, a unified system that can compete with the typical college-educated human on most tasks, sometime around the year 2040.
Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecast to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.
Don't count on the normal channels of government to save us from that.
Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.
A "Manhattan Project" for X is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment in focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.
I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?
Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.
One issue is that large neural networks like GPT-4, the generative AIs causing the most concern right now, are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular circuits hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
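To make "uncovering circuits" concrete, here is a minimal toy sketch of the kind of ablation experiment interpretability researchers run. The network and all of its weights are invented for illustration; real models have billions of parameters, but the logic of the intervention is the same: zero out one component at a time and see which ones the behavior causally depends on.

```python
import numpy as np

# Toy stand-in for a trained network: 2 inputs, 3 hidden units, 1 output.
# Weights are hand-wired so that, by construction, only hidden unit 1
# carries the signal the output depends on.
W1 = np.array([[1.0, 0.0],
               [1.0, 1.0],
               [0.0, 1.0]])
W2 = np.array([0.0, 1.0, 0.0])

def forward(x, ablate=None):
    """Run the network, optionally zeroing (ablating) one hidden unit."""
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    if ablate is not None:
        h[ablate] = 0.0          # the causal intervention
    return float(W2 @ h)

x = np.array([1.0, 1.0])
baseline = forward(x)

# Ablate each hidden unit in turn; a large change in the output flags
# that unit as part of the "circuit" responsible for the behavior.
effects = [abs(baseline - forward(x, ablate=i)) for i in range(3)]
print(effects)  # only unit 1 has a nonzero effect
```

Scaling experiments like this to frontier models is exactly what requires the expensive compute and model access the next paragraph describes.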
Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.
The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.
With all that in mind, a Manhattan Project for AI safety should have at least five core functions:
1. It would serve a coordination role, pulling together the leadership of the top AI companies, OpenAI and its chief competitors Anthropic and Google DeepMind, to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.
2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an "air gap," a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to accelerate the demonstration of trustworthy AI.
3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.
4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.
5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.
The alternative to a massive public effort like this, attempting to kick the can on the AI problem, won't cut it.
The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.
The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.
Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?
Originally posted here:
Opinion | We Need a Manhattan Project for AI Safety - POLITICO