In 2016, I witnessed DeepMind's artificial-intelligence model AlphaGo defeat Go champion Lee Sedol in Seoul. That event was a milestone, demonstrating that an AI model could beat one of the world's greatest Go players, a feat that had been thought impossible. Not only was the model making clever strategic moves; at times, those moves were beautiful in a deeply humanlike way.
Other scientists and world leaders took note and, seven years later, the race to control AI and its governance is on. Over the past month, US President Joe Biden has issued an executive order on AI safety, the G7 announced the Hiroshima AI Process and 28 countries signed the Bletchley Declaration at the UK's AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative.
These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it's vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.
Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT is evidence that the general foundations for human-level AI have been achieved and that an AI system that's more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Such systems are known as artificial general intelligence (AGI), artificial superintelligence or general AI.
For example, most AI models now use neural networks, an old machine-learning technique created in the 1940s that was inspired by the biological neural networks of animal brains. The abilities of modern neural networks like AlphaGo weren't fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to process the computations needed for specific human-level tasks.
The next step towards AGI was the arrival of large-language models, such as OpenAI's GPT-4, which are created using a version of neural networks known as transformers. OpenAI's previous model, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from that written by people and by performing a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities and outperformed human test-takers on the US bar exam, a notoriously difficult test for lawyers. Future iterations are expected to be able to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.
AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change. It would be our generation's Oppenheimer moment, only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.
At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of its inventor and set the standards for future applications. This raises important questions about what worldviews we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don't even know whether it's possible to predetermine those values in AGI systems before they're created.
It's promising that universities, corporations and civil research groups in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up and pose significant competition. China certainly has the talent, the resources and the intent, but it faces additional regulatory hurdles and a lack of high-quality, open-source Chinese-language datasets. In addition, large-language models threaten the CCP's monopoly on domestic information control by offering alternative worldviews to state propaganda.
Nonetheless, we shouldn't underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to the CCP's National Intelligence Law, were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party's power to repress its domestic population and its ability to interfere with the sovereignty of other countries. China's state security system or the People's Liberation Army could deploy it to supercharge their cyberespionage operations or automate the discovery of zero-day vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and propaganda. Chinese companies could sell their AGI services to foreign government departments and companies with back doors into their systems, or covertly suppress content and topics abroad at the direction of Chinese security services.
At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI's own objectives. AGI could, for instance, lower the impediments for savvy malicious actors to develop bioweapons or supercharge disinformation and influence operations. An AGI could itself become destructive if it pursues poorly described goals or takes shortcuts such as deceiving humans to achieve goals more efficiently.
When Meta trained Cicero to play the board game Diplomacy honestly, by generating only messages that reflected its intention in each interaction, analysts noted that it could still withhold information about its true intentions or fail to inform other players when its intentions changed. These are serious considerations with immediate risks, and they have led many AI experts and people who study existential risk to call for a pause on advanced AI research. But policymakers worldwide are unlikely to stop, given the strong incentives to be a first mover.
This all may sound futuristic, but it's not as far away as you might think. In a 2022 survey, 352 AI experts put a 50% chance on human-level machine intelligence arriving within 37 years, that is, by 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that's just the current estimate: both the experts and the amateurs on Metaculus have shortened their timelines each year as new AI breakthroughs are publicly announced.
That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.
First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.
Second, governments should proactively monitor all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has even more to worry about when it comes to AI alignment: the CCP is too concerned about its own political security to relax its strict rules on AI development.
We therefore shouldn't see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and their safety policies, and identifying their leading AGI developers.
Third, liberal democracies must collectively maintain as large a lead as possible in AI development and further restrict the access of China's AI and national-security industries to high-end technology, intellectual property, strategic datasets and foreign investment. Impeding the CCP's AI development in its military, security and intelligence industries is also morally justifiable as a way of preventing human rights violations.
For example, Midu, an AI company based in Shanghai that supports China's propaganda and public-security work, recently announced that it was using large-language models to automate reporting on public-opinion analysis in support of the surveillance of online users. While China's access to advanced US technologies and investment has been restricted, other like-minded countries, such as Australia, should implement similar controls on outbound investment into China's AI and national-security industries.
Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouths are. Governments should create prizes for research teams or individuals who solve difficult AI alignment problems. One potential model is the Clay Mathematics Institute's Millennium Prize Problems, which offer awards for solutions to some of the world's most difficult mathematics problems.
Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the world's first AGI prize fund, with at least $100 million to be awarded to the first global research team to align AGI safely.
The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt an investment mandate similar to that of Australia's Future Fund: an average annual return of at least the consumer price index plus 4-5% over the long term. Instead of being reinvested into the fund, the 4-5% real return accrued each year on top of CPI should be used for smaller awards recognising incremental achievements in AI research. These awards could also be used to fund AI PhD scholarships or to attract AI postdocs to Australia. Other awards could be given to research, including research conducted outside Australia, in annual award ceremonies, like the Nobel Prize, that bring together global experts on AI to share knowledge and progress.
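To make the arithmetic concrete, here is a minimal sketch of how such a mandate might translate into an annual prize pool. The CPI figure and the 4.5% real return (the midpoint of a CPI-plus-4-5% mandate) are illustrative assumptions, not features of any actual fund.

```python
# Illustrative sketch only: assumes a $100m corpus, 2.5% CPI and a 4.5%
# real return (midpoint of a CPI-plus-4-5% mandate). All figures are
# assumptions for the sake of the example.

CORPUS = 100_000_000   # fund size in dollars
CPI = 0.025            # assumed annual inflation rate
REAL_RETURN = 0.045    # assumed annual return above CPI

def annual_award_pool(corpus: float, cpi: float, real_return: float) -> float:
    """Real (above-CPI) earnings available for prizes each year.

    The CPI component stays in the fund to preserve the corpus's real
    value; only the real return is paid out as awards.
    """
    nominal_earnings = corpus * (cpi + real_return)
    retained_for_inflation = corpus * cpi
    return nominal_earnings - retained_for_inflation  # = corpus * real_return

print(f"Annual award pool: ${annual_award_pool(CORPUS, CPI, REAL_RETURN):,.0f}")
# -> Annual award pool: $4,500,000
```

On those assumptions, the fund would generate roughly $4.5 million a year for incremental awards and scholarships while keeping the $100 million corpus intact in real terms.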
A $100 million fund may seem like a lot for AI research but, as a comparison, Microsoft is rumoured to have invested US$10 billion in OpenAI this year alone. And $100 million pales in comparison to the contribution that safely aligned AGI would make to the national economy.
The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.
This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.