As AutoGPT released, should we be worried about AI?

A new artificial intelligence tool, arriving just months after ChatGPT, appears to offer a big leap forward: it can improve itself without human intervention.

The artificial intelligence (AI) tool AutoGPT is an open-source application built on GPT-4, the latest large language model from OpenAI, the company that brought us ChatGPT last year. AutoGPT promises to overcome the limitations of large language models (LLMs) such as ChatGPT.

ChatGPT exploded onto the scene at the end of 2022 for its ability to respond to text prompts in a (somewhat) human-like and natural way. It has caused concern for occasionally including misleading or incorrect information in its responses, and for its potential to be used to plagiarise assignments in schools and universities.

But it's not these limitations that AutoGPT seeks to overcome.

AI is categorised as weak (narrow) or strong (general). As an AI tool designed to carry out a single task, ChatGPT is considered weak AI.

AutoGPT was created with a view to becoming a strong AI, or artificial general intelligence (AGI): a system theoretically capable of carrying out many different types of task, including those it wasn't originally designed to perform.

LLMs are designed to respond to prompts written by human users: they answer, then sit idle until the next prompt arrives.

AutoGPT is being designed to give itself prompts, creating a loop. Masa, a writer on AutoGPT's website, explains: "It works by breaking a larger task into smaller sub-tasks and then spinning off independent Auto-GPT instances in order to work on them. The original instance acts as a kind of project manager, coordinating all of the work carried out and compiling it into a finished result."
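To make that loop concrete, here is a minimal Python sketch of the manager/worker pattern Masa describes. It is an illustration only, not AutoGPT's actual code: the call_llm function is a hypothetical stand-in for whatever chat-completion API the agent is wired to, and the prompts are invented for the example.

    # Illustrative sketch of an AutoGPT-style self-prompting loop.
    # call_llm is a hypothetical placeholder, not AutoGPT's real API.
    def call_llm(prompt: str) -> str:
        """Stand-in for a chat-completion call; swap in a real LLM client."""
        raise NotImplementedError

    def run_agent(goal: str) -> str:
        # The "project manager" instance breaks the goal into sub-tasks.
        plan = call_llm("Break this goal into one sub-task per line:\n" + goal)
        sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

        # Each sub-task is handed off to an independent worker instance.
        results = [call_llm("Complete this sub-task:\n" + task) for task in sub_tasks]

        # The manager compiles the workers' output. Because the model's own
        # output becomes the next prompt, the loop can run without a human.
        return call_llm("Combine these results into a finished answer:\n"
                        + "\n".join(results))

A real agent adds memory, a results queue and stopping criteria on top of this, but the decompose-delegate-compile cycle above is the core of the loop.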

But is a self-improving AI a good thing? Many experts are worried about the trajectory of artificial intelligence research.

BMJ Global Health, part of the respected and influential BMJ family of journals, has published an article titled "Threats by artificial intelligence to human health and human existence", in which the authors explain three key reasons we should be concerned about AI.


Threats identified by the international team of doctors and public health experts, including those from Australia, relate to misuse of AI and the impact of the ongoing failure to adapt to and regulate the technology.

The authors note the significance of AI and its potential to have a transformative effect on society. But they also warn that artificial general intelligence in particular poses an existential threat to humanity.

First, they warn of the ability of AI to clean, organise, and analyse massive data sets, including personal data such as images. Such capabilities could be used to manipulate and distort information and for AI surveillance. The authors note that such surveillance is in development in "more than 75 countries, ranging from liberal democracies to military regimes, [which] have been expanding such systems".

Second, they say Lethal Autonomous Weapon Systems (LAWS), capable of locating, selecting, and engaging human targets without the need for human supervision, could lead to killing at an industrial scale.

Finally, the authors raise concerns over the loss of jobs that will come from the spread of AI technology across many industries. Estimates suggest that tens to hundreds of millions of jobs will be lost in the coming decade.

"While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour," they write.

The authors highlight artificial general intelligence as a threat to the existence of human civilisation itself.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power (whether deliberately or not) in ways that could harm or subjugate humans is real and has to be considered."

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they write.
