Artificial intelligence: the world is waking up to the risks

All these documents refer to the risks linked to Artificial General Intelligence (AGI), which is level 2 of AI. Today's artificial intelligence, including generative AI systems like ChatGPT, falls within Artificial Narrow Intelligence (ANI), which is level 1. This kind of artificial intelligence can perform a single activity as well as a human, perhaps even better.

AGI and its level 3 successor, Artificial Super Intelligence (ASI), are AIs that can accomplish all informational activities to a quality level that equals or exceeds what humans can produce. Currently, the expert consensus is that AGI could arrive between 2030 and 2040. Tomorrow, basically.

These documents point to major risks for humanity, but are they right to warn us of these dangers? The answer is clearly yes. I urge you to read all five documents, but if you were to read just one, it would be the one by this group of 30 experts.

This excerpt gives the general tone of the document: AI advancement could "culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity". It coolly suggests the extinction of mankind! The three documents that follow largely resemble one another. They are very general declarations of intent, full of goodwill but with little real impact.

They were published by the United Nations, the G7, and the Bletchley Summit, an international meeting organized by the United Kingdom and held on November 1 and 2, 2023.

No one will argue against the ideas expressed in the Bletchley Declaration, signed by 28 countries with widely divergent interests, including the United States, China, India, Israel and Saudi Arabia, as well as the European Union. It recognizes the need to take account of human rights protection, transparency and explainability, fairness, accountability, regulation, security, appropriate human oversight, ethics, bias mitigation, privacy and data protection.

The fifth document is different: it is an executive order signed by Joe Biden on October 30, 2023. In 60 pages, the US president lists a hundred specific actions to be taken, and for each, the executive order names the public authorities in charge of carrying them out. Furthermore, the timetable is demanding, with most of these actions to be completed within 45 to 365 days. It is far from a catalogue of good intentions: it demonstrates the United States' clear desire to do everything it can to maintain its global leadership in AI.

The European Commission has been working on AI since 2020. In June 2023, it published a document, EU Legislation in Progress, detailing work on a European Artificial Intelligence Act (AIA) to follow the Digital Services Act and the Digital Markets Act. The AIA must now be submitted to the Member States, which can make changes before its final approval. No one knows how long this could take.

To summarize, can we imagine what the future might hold for collaboration between humankind and AGI or ASI? If we are to believe Rich Sutton, a professor at the University of Alberta in Canada and a recognized specialist in artificial intelligence, humanity must inevitably prepare to hand over the reins to AI, as an illustration from one of his recent lectures shows.

My recommendation: the challenges posed by the rapid arrival of AGIs and ASIs are among the questions that demand prompt consideration from the directors of all organizations, public and private.

Furthermore, the best AI specialists are often asked: what is humanity's future in a world where AI performs better than humans? The common answer? "I don't know." But that is no reason not to think about it, all together, and very quickly.
