AI 2023: risks, regulation & an ‘existential threat to humanity’ – RTE.ie

Opinion: AI's quickening pace of development has led to a plethora of coverage and concern over what might come next

These days the public is inundated with news stories about the rise of artificial intelligence and the ever-quickening pace of development in the field. The last year has been particularly notable in this regard, and the biggest stories came as ChatGPT was introduced to the world in November 2022.

This is one of many generative AI systems which can almost instantaneously create text on any topic, in any style, of any length, and at a human level of performance. Of course, the text might not be factual, and it might not make sense, but it almost always does.

ChatGPT is a "large language model". It is large in that it has been trained on enormous amounts of text (almost everything that is available in computer-readable form) and it produces extremely sophisticated output of a level of competence we would expect of a human. It can be seen as a big sibling to the predictive text system on your smartphone, which helps by predicting the next word you might want to type.
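The smartphone analogy can be made concrete with a toy next-word predictor that simply counts which words follow which in a sample of text. This sketch is purely illustrative: real large language models like ChatGPT use deep neural networks trained on vast corpora, not simple word-pair counts, but the basic task of guessing the next word is the same.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" most often)
```

A large language model does something analogous at a vastly greater scale, conditioning not on one previous word but on long passages of context.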


From RTÉ 2fm's Dave Fanning Show, Prof Barry O'Sullivan on the rise of AI

But ChatGPT doesn't do this just at the level of individual words; it works at the level of entire passages of text. It can also compose answers to complex queries from the user. For example, given the prompt "how can I make something that flies from cardboard?", ChatGPT answers with clear instructions, explaining the principles of flight that can be utilised and how to incorporate them into your design.

The most powerful AI systems, those using machine learning, are built using huge amounts of data, and their capabilities can seem to bear out Arthur C. Clarke's observation that "any sufficiently advanced technology is indistinguishable from magic". But for many years now, there has been growing evidence that the manner in which these systems are created can have considerable negative consequences. For example, AI systems have been shown to replicate and magnify human biases: some have been shown to amplify gender and racial biases, often due to hidden biases in the data used to train them. They have also been shown to be brittle, in the sense that they can be easily fooled by carefully formulated or manipulated queries.

AI systems have also been built to perform tasks that raise considerable ethical questions, such as predicting the sexual orientation of individuals. There is growing concern about the impact of AI on employment and the future of work: will AI automate so many tasks that entire jobs disappear, and will this lead to an unemployment crisis? These risks are often referred to as the "short-term" risks of AI. On the back of issues like these, there is a considerable focus on the ethics of AI, on how AI can be made trustworthy and safe, and on the many international initiatives related to the regulation of AI.


From RTÉ Radio 1's Morning Ireland, Prof Barry O'Sullivan discusses an open letter signed by key figures in artificial intelligence who want work on powerful AI systems to be suspended amid fears of a threat to humanity.

We have recently also seen a considerable focus on the "long-term" risks of AI, which tend to be far more dystopian. Some believe that general purpose AI and, ultimately, artificial general intelligence are on the horizon. Today's AI systems, often referred to as "narrow AI" systems, tend to be capable of performing one task well, such as navigation, movie recommendation, production scheduling or medical diagnosis.

On the other hand, general purpose AI systems can perform many different tasks at a human-level of performance. Take a step further and artificial general intelligence systems would be able to perform all the tasks that a human can and with far greater reliability.

Whether we will ever get to that point, or even whether we really would want to, is a matter of debate in the AI community and beyond. However, such systems would introduce a variety of risks, including the extreme situation where AI systems become so advanced that they pose an existential threat to humanity. Those who argue that we should be concerned about these risks sometimes compare artificial general intelligence to an alien race: the existence of this extraordinarily advanced technology would be tantamount to us living alongside an advanced race of super-human aliens.


From RTÉ Radio 1's This Week, fears over AI becoming too powerful and endangering humans have been a regular sci-fi theme in film and TV for decades, but could they become a reality?

While I strongly believe that we need to address both short-term and long-term risks associated with AI, we should not let the dystopian elements distract our focus from the very real issues raised by AI today. In terms of existential threat to humanity, the clear and present danger comes from climate change rather than artificial general intelligence. We already see the impacts of climate change across the globe and throughout society. Flooding, impacts on food production and the risks to human wellbeing are real and immediate concerns.

Just as AI played a role in the discovery of the Covid-19 vaccines, the technology has a lot to offer in dealing with climate change. For almost two decades, the field of computational sustainability has applied the methods of artificial intelligence, data science, mathematics and computer science to the challenges of balancing societal, economic and environmental resources to secure the future wellbeing of humanity, very much addressing the Sustainable Development Goals agenda.

AI has been used to design sustainable and climate-friendly policies. It has been used to efficiently manage fisheries and plan and monitor natural resources and industrial production. Rather than being seen as an existential threat to humanity, AI should be seen as a tool to help with the greatest threat there exists to humanity today: climate change.

Of course, we cannot let AI develop without guardrails and without proper oversight. But given the active debate about the risks of AI, and the regulatory frameworks being put in place internationally, I am confident that we will tame the genie that is AI.

Prof Barry O'Sullivan appears on Game Changer: AI & You, which airs on RTÉ One at 10:15pm tonight

The views expressed here are those of the author and do not represent or reflect the views of RTÉ
