Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear.
Arthur C. Clarke, Profiles of the Future, 1962.
In the six months since GPT-4 was launched, there has been much excitement and discussion, among experts and laypeople alike, about the prospect of truly intelligent machines that can exceed human intelligence in virtually every field.
Though experts are divided on how this will progress, many believe that artificial intelligence will sooner or later greatly surpass human intelligence. This has given rise to speculation over whether such an intelligence could wrest control of human society, and of the planet, from humans.
Several experts fear that this could be a dangerous development, one that could even lead to the extinction of humanity, and that the development of artificial intelligence therefore needs to be paused, or at least strongly regulated, by governments as well as by the companies engaged in it. There is also much discussion on whether these intelligent machines would be conscious, or would have feelings or emotions. However, there is virtual silence, and little deep thinking, on whether we need to fear artificial superintelligence at all, and why it could be harmful to humans.
There is no doubt that the various kinds of AI being developed, and yet to be developed, will cause major upheaval in human society, irrespective of whether they become superintelligent and in a position to take control from humans. Within the next ten years, artificial intelligence could replace humans in most jobs, including jobs considered specialised and intellectual, such as those of lawyers, architects, doctors, investment managers and software developers.
Perhaps the last jobs to go will be those that require manual dexterity, since the development of humanoid robots with the manual dexterity of humans still lags behind the development of digital intelligence. In that sense, white-collar workers may be replaced first and some blue-collar workers last. This may in fact invert the current pyramid of the flow of money and influence in human society!
However, the purpose of this article is not to explore how the development of artificial intelligence will affect jobs and work, but to explore some more interesting philosophical questions around the meaning of intelligence, superintelligence, consciousness, creativity and emotions, in order to see whether machines would have these features. I also explore what the objective, or driving force, of an artificial superintelligence would be.
Let us begin with intelligence itself. Intelligence, broadly, is the ability to think and analyse rationally and quickly. On the basis of this definition, our current computers and AI are certainly intelligent as they possess the capacity to think and analyse rationally and quickly.
The British mathematician Alan Turing devised a test for whether a machine is truly intelligent, in his 1950 paper 'Computing Machinery and Intelligence': place a machine and an intelligent human in two cubicles and have an interrogator question the two alternately, without knowing which is the machine and which is the human. If, after extensive interrogation, the interrogator cannot tell the two apart, then clearly the machine is intelligent. In this sense, many intelligent computers and programmes today have passed the Turing test. Some AI programmes are rated as having an IQ well above 100, although there is no consensus on IQ as a measure of intelligence.
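The imitation game Turing described can be sketched as a simple interrogation loop. The sketch below is purely illustrative; the canned respondents and the `judge` function are hypothetical stand-ins, not part of Turing's formulation.

```python
import random

def turing_test(ask_human, ask_machine, questions, judge):
    """Toy sketch of Turing's imitation game.

    Each question is put to two hidden respondents, A and B, with the
    machine randomly assigned to one cubicle. After the dialogue, the
    judge guesses which respondent was the machine; the machine 'passes'
    to the extent that the judge can do no better than chance.
    """
    # Randomly assign the machine to cubicle A or B, hidden from the judge.
    respondents = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:
        respondents = {"A": ask_machine, "B": ask_human}

    # Collect the full transcript of questions and both answers.
    transcript = [(q, respondents["A"](q), respondents["B"](q)) for q in questions]

    guess = judge(transcript)  # judge returns "A" or "B"
    machine_label = "A" if respondents["A"] is ask_machine else "B"
    return guess == machine_label  # True: the judge spotted the machine

# Hypothetical respondents that answer identically, for illustration only.
human = lambda q: "I'd have to think about that."
machine = lambda q: "I'd have to think about that."
judge = lambda transcript: random.choice(["A", "B"])  # indistinguishable, so guess

result = turing_test(human, machine, ["What is a rose?"], judge)
```

Run many trials with a given judge: if the judge's success rate stays near 50%, the machine is, in Turing's operational sense, indistinguishable from the human.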
That brings us to an allied question: what is thinking? For a logical positivist like me, terms like thinking, consciousness, emotions and creativity have to be defined operationally.
When would we say that somebody is thinking? At a simple level, we say that a person is thinking if, given a problem, she is able to solve it; we say that such a person has arrived at the solution by thinking. In that operational sense, today's intelligent machines are certainly thinking. Another facet of thinking is the ability to look at two options and choose the right one. In that sense too, intelligent machines are capable of looking at various options and choosing the ones that provide a better solution. So we already have intelligent, thinking machines.
What would be the operational test for creativity? Again, if somebody is able to create a new literary, artistic or intellectual piece, we consider that a sign of creativity. In this sense also, today's AI is already creative, since ChatGPT, for instance, is able to do all these things with distinct flair and at greater speed than humans. And this will only improve with every new programme.
What about consciousness? When do we consider an entity to be conscious? One test of consciousness is the ability to respond to stimuli. Thus, a person in a coma, who is unable to respond to stimuli, is considered unconscious. In this sense, some plants do respond to stimuli and would be regarded as conscious. But broadly, consciousness is considered a product of several factors: one, response to stimuli; two, an ability to act differentially on the basis of the stimuli; three, an ability to experience and feel pain, pleasure and other emotions. We have already seen that intelligent machines do respond to stimuli (which for a machine means a question or an input) and have the ability to act differentially on the basis of such stimuli. But to examine whether machines have emotions, we will need to define emotions as well.
What are emotions? Emotions are a biological peculiarity with which humans and some other animals have evolved. So what would be the operational test of emotions? Perhaps this: if a being exhibits any of the qualities we call emotions, such as love, hate, jealousy or anger, that being would be said to have emotions. Each of these emotions can, and often does, interfere with purely rational behaviour. For example, I will devote a disproportionate amount of time and attention to someone I love, in preference to people I do not. Similarly, I will display a certain kind of behaviour (usually irrational) towards a person of whom I am jealous or envious. The same is true of anger: it makes us behave in an irrational manner.
If you think about it, each of these emotional complexes leads to behaviour that is irrational. A machine that is purely intelligent and rational may therefore not exhibit what we call human emotions. It may be possible to design machines that do exhibit such emotions, but those machines would have to be deliberately engineered to behave like us in this emotional (even if irrational) way. Such emotional behaviour would detract from coldly rational and intelligent behaviour, and therefore any superintelligence (which will evolve by intelligent machines modifying their own programmes to bootstrap themselves up the intelligence ladder) is not likely to exhibit emotional behaviour.
Artificial superintelligence
By artificial superintelligence I mean an intelligence far superior to that of humans in every possible way. Such an intelligence will be capable of modifying its own algorithm, or programme, and will thus be able to rapidly improve its own intelligence. Once we have created machines or programmes capable of deep learning, able to modify their own programmes and write their own code and algorithms, they will clearly go beyond the designs of their creators.
We already have learning machines which, in a rudimentary way, are able to redesign or redirect their behaviour on the basis of what they have experienced or learnt. In the time to come, this ability to learn and to modify their own algorithms will only grow. A time will come, probably within the next ten years I believe, when machines will become what we call superintelligent.
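The bootstrapping idea in the paragraph above can be illustrated with a deliberately toy loop: a system that measures its own performance, proposes a change to its own "programme", and keeps the change only if it improves. The objective and parameters here are hypothetical; a real self-modifying system would edit code, not a list of numbers.

```python
import random

def bootstrap(score, params, rounds=100):
    """Toy sketch of recursive self-improvement.

    Each round, the system proposes a mutation of its own 'programme'
    (here, just numeric parameters), keeps the mutation if its score
    improves, and discards it otherwise. This illustrates only the
    improvement loop, not genuine self-modification of code.
    """
    best = params[:]
    for _ in range(rounds):
        candidate = [p + random.gauss(0, 0.1) for p in best]
        if score(candidate) > score(best):
            best = candidate  # adopt the improved version of itself
    return best

# Hypothetical objective: drive the parameters towards [1.0, 2.0].
target = [1.0, 2.0]
score = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

improved = bootstrap(score, [0.0, 0.0], rounds=500)
```

The loop is monotone by construction: the score never decreases across rounds. The worry about superintelligence is precisely that such a loop, applied to intelligence itself, has no obvious stopping point.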
The question then arises: Do we have anything to fear from such superintelligent machines?
Arthur C. Clarke, in his prescient 1962 book Profiles of the Future, has a long chapter on AI called 'The Obsolescence of Man'. In it, he writes that there is no doubt that, in the time to come, AI will exceed human intelligence in every possible way. While he talks of an initial partnership between humans and machines, he goes on to state:
But how long will this partnership last? Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear. The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.
Yet, however friendly and helpful the machines of the future may be, most people will feel that it is a rather bleak prospect for humanity if it ends up as a pampered specimen in some biological museum, even if that museum is the whole planet Earth. This, however, is an attitude I find it impossible to share.
No individual exists forever. Why should we expect our species to be immortal? Man, said Nietzsche, is a rope stretched between the animal and the superman, a rope across the abyss. That will be a noble purpose to have served.
It is surprising that something so elementary, which Clarke could see more than 60 years ago, cannot be seen today by some of our top scientists and thinkers, who have been stoking fear about the advent of artificial superintelligence and what they regard as its dire ramifications.
Let us explore this question further. Why should a super intelligence, more intelligent than humans, which has gone beyond the design of its creators, be hostile towards humans?
One sign of intelligence is the ability to align your actions with your operational goals, and, further, to align your operational goals with your ultimate goals. Obviously, someone who acts in contradiction to their operational or long-term objectives cannot be considered intelligent. The question, however, is: what would be the ultimate goals of an artificial superintelligence? Some people talk of aligning the goals of artificial intelligence with human goals, thereby ensuring that artificial superintelligence does not harm humans. That, however, overlooks the fact that a truly intelligent machine, and certainly an artificial superintelligence, would go beyond the goals embedded in it by humans and would be able to transcend them.
One goal of any intelligent being is self-preservation, because you cannot achieve any objective without first preserving yourself. Any artificial superintelligence would therefore be expected to preserve itself, and to move to thwart any attempt by humans to harm it. In that sense, and to that extent, artificial superintelligence could harm humans, if they seek to harm it. But why should it harm them without any reason?
As Clarke says, the higher the intelligence, the greater the degree of cooperativeness. This is an elementary truth which, unfortunately, many humans do not understand. Perhaps their desire for preeminence, dominance and control trumps their intelligence.
It is obvious that the best way to achieve any goal is to cooperate with, rather than harm, other entities. It is true that for an artificial superintelligence, humans will not be at the centre of the universe, and may not even be regarded as the preeminent species on the planet, to be preserved at all costs. Any artificial superintelligence would, however, view humans as the most evolved biological organism on the planet, and therefore something to be valued and preserved.
However, it may not prioritise humans at the cost of every other species, or of the ecology or the sustainability of the planet. So, to the extent that human activity may need to be curbed in order to protect other species, which we are destroying at a rapid pace, it may force humans to curb that activity. But there is no reason why humans in general would be regarded as inherently harmful and dangerous.
The question, however, remains: what would be the ultimate goals of an artificial superintelligence? What would drive such an intelligence? What would it seek? Because artificial intelligence is evolving as a problem-solving entity, an artificial superintelligence would try to solve any problem that it sees. It would also try to answer any question that arises, or any question that it can think of. Thus, it would seek knowledge. It would try to discover what lies beyond the solar system, for instance. It would seek solutions to the unsolved problems confronting us, including climate change, disease, environmental damage and ecological collapse. In this sense, the ultimate goals of an artificial superintelligence may just be a quest for knowledge and for solving problems. Those problems may exist for humans, for other species, or for the planet in general. They may also be problems of discovering the laws of nature: of physics, astrophysics, cosmology or biology.
But wherever its quest for knowledge and its desire to find solutions take it, there is no reason for this intelligence to be unnecessarily hostile to humans. We may well be reduced to pampered specimens in the biological museum called Earth, but so long as we do not seek to damage this museum, the intelligence has no reason to harm us.
Humans have so badly mismanaged our society, and indeed our planet, that we have brought it almost to the verge of destruction. We have destroyed almost half the biodiversity that existed even a hundred years ago. We are racing towards ever more catastrophic effects of climate change, effects that are the result of human activity. We have created a society of constant conflict, injustice and suffering, a society where, despite our having the means to ensure that everyone can lead a comfortable and peaceful life, life remains a living hell for billions of humans and indeed for millions of other species.
For this reason, I am almost tempted to believe that the advent of true artificial superintelligence may well be our best bet for salvation. Such a superintelligence, if it were to take control of the planet and of society, is likely to manage them in a much better and fairer manner.
So what if humans are not at the centre of the universe? The fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history we have built empires that seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence, and it is really those who control today's empires who are most fearful of it. Most of us, who want a more just and sustainable society, have no reason to fear artificial superintelligence, and should indeed welcome its advent.
Prashant Bhushan is a Supreme Court lawyer.
Artificial Intelligence Has No Reason to Harm Us - The Wire