How to win the artificial general intelligence race and not end … – The Strategist

In 2016, I witnessed DeepMind's artificial-intelligence model AlphaGo defeat Go champion Lee Sedol in Seoul. That event was a milestone: it demonstrated that an AI model could beat one of the world's greatest Go players, a feat that had been thought impossible. Not only was the model making clever strategic moves; at times, those moves were beautiful in a deep and humanlike way.

Other scientists and world leaders took note and, seven years later, the race to control AI and its governance is on. Over the past month, US President Joe Biden has issued an executive order on AI safety, the G7 announced the Hiroshima AI Process and 28 countries signed the Bletchley Declaration at the UK's AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative.

These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it's vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.

Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT is evidence that the general foundations for human-level AI have been achieved and that an AI system that's more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Such systems are known as artificial general intelligence (AGI), artificial superintelligence or general AI.

For example, most AI models now use neural networks, an old machine-learning technique created in the 1940s that was inspired by the biological neural networks of animal brains. The abilities of modern neural networks like AlphaGo weren't fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to process the computations needed for specific human-level tasks.

The next step towards AGI was the arrival of large-language models, such as OpenAI's GPT-4, which are created using a version of neural networks known as transformers. OpenAI's previous model, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from text written by people and performing a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities and outperformed human test-takers on the US bar exam, a notoriously difficult test for lawyers. Future iterations are expected to be able to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.

AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change. It would be our generation's Oppenheimer moment, only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.

At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of the inventor and set the standards for future applications. This raises important questions about what world views we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don't even know if it's possible to predetermine those values in AGI systems before they're created.

It's promising that universities, corporations and civil research groups in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up and pose significant competition. China certainly has the talent, the resources and the intent but faces additional regulatory hurdles and a lack of high-quality, open-source Chinese-language datasets. In addition, large-language models threaten the CCP's monopoly on domestic information control by offering alternative worldviews to state propaganda.

Nonetheless, we shouldn't underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to China's National Intelligence Law, were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party's power to repress its domestic population and its ability to interfere with the sovereignty of other countries. China's state security system or the People's Liberation Army could deploy it to supercharge their cyberespionage operations or to automate the discovery of zero-day vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and to produce propaganda. Chinese companies could sell their AGI services to foreign government departments and companies with back doors into their systems, or covertly suppress content and topics abroad at the direction of Chinese security services.

At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI's own objectives. AGI could, for instance, lower the barriers for savvy malicious actors to develop bioweapons or supercharge disinformation and influence operations. An AGI could itself become destructive if it pursues poorly specified goals or takes shortcuts such as deceiving humans to achieve its goals more efficiently.

When Meta trained Cicero to play the board game Diplomacy honestly by generating only messages that reflected its intention in each interaction, analysts noted that it could still withhold information about its true intentions or fail to inform other players when its intentions changed. These are serious considerations with immediate risks, and they have led many AI experts and people who study existential risk to call for a pause on advanced AI research. But policymakers worldwide are unlikely to stop, given the strong incentives to be a first mover.

This all may sound futuristic, but it's not as far away as you might think. In a 2022 survey, 352 AI experts put a 50% chance of human-level machine intelligence arriving within 37 years, that is, by 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that's just the current estimate: experts and the amateurs on Metaculus have shortened their timelines each year as new AI breakthroughs are publicly announced.

That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.

First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.

Second, governments should be proactively monitoring all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has its own reasons to worry about AI alignment: the CCP is too concerned about its political safety to relax its strict rules on AI development.

We therefore shouldn't see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and safety policies and identifying their leading AGI developers.

Third, liberal democracies must collectively maintain as large a lead as possible in AI development, further restrict China's access to high-end technology, intellectual property and strategic datasets, and limit foreign investment in China's AI and national-security industries. Impeding the CCP's AI development in its military, security and intelligence industries is also morally justifiable in preventing human rights violations.

For example, Midu, an AI company based in Shanghai that supports China's propaganda and public-security work, recently announced the use of large-language models to automate public-opinion analysis and reporting in support of surveillance of online users. While China's access to advanced US technologies and investment has been restricted, other like-minded countries such as Australia should implement similar controls on outbound investment into China's AI and national-security industries.

Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouth is. Governments should create prizes for research teams or individuals to solve difficult AI alignment problems. One potential model is the Clay Mathematics Institute's Millennium Prize Problems, which offer awards for solutions to some of the world's most difficult mathematics problems.

Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the world's first AGI prize fund, with at least $100 million to be awarded to the first global research team to align AGI safely.

The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt an investment mandate similar to that of Australia's Future Fund: an average annual return of at least the consumer price index plus 4-5% per annum over the long term. Instead of being reinvested into the fund, the 4-5% earned each year on top of CPI should be used for smaller awards recognising incremental achievements in AI research. These awards could also fund AI PhD scholarships or attract AI postdocs to Australia. Other awards could be given to research, including research conducted outside Australia, at annual award ceremonies, akin to the Nobel Prize, that bring together global experts on AI to share knowledge and progress.
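To give a sense of the sums involved, here is a minimal sketch of the fund's arithmetic. The $100 million corpus and the CPI plus 4-5% mandate come from the proposal above; the 3% CPI figure, the 4.5% midpoint and the function names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch of the proposed prize-fund mechanics.
# Only the $100m corpus and the CPI + 4-5% mandate come from the article;
# the CPI value and the 4.5% midpoint below are assumed for illustration.

CORPUS = 100_000_000   # initial fund size, AUD
CPI = 0.03             # assumed annual inflation (hypothetical)
REAL_RETURN = 0.045    # assumed return above CPI, midpoint of the 4-5% mandate


def annual_award_pool(corpus: float, cpi: float, real_return: float) -> float:
    """Return the earnings above CPI available for yearly awards.

    The CPI portion is retained so the corpus keeps its real value;
    only the real return is paid out as incremental prizes.
    """
    nominal_earnings = corpus * (cpi + real_return)
    inflation_top_up = corpus * cpi
    return nominal_earnings - inflation_top_up


if __name__ == "__main__":
    pool = annual_award_pool(CORPUS, CPI, REAL_RETURN)
    print(f"Annual award pool: ${pool:,.0f}")  # about $4,500,000 under these assumptions
```

Under these assumptions, roughly $4.5 million a year would be available for incremental awards, scholarships and postdoc attraction without eroding the corpus in real terms.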

A $100 million fund may seem like a lot for AI research but, as a comparison, Microsoft is rumoured to have invested US$10 billion in OpenAI this year alone. And $100 million pales in comparison with the contribution safely aligned AGI would make to the national economy.

The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.

This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.
