Tech writer Ben Dickson poses the question:
Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart, or smarter, than us?
Philosopher Borna Jalšenjak of the Luxembourg School of Business has been thinking about that. He has a chapter, "The Artificial Intelligence Singularity: What It Is and What It Is Not," in Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives, in which he explores the case for thinking machines being alive, even if they are machines. The book as a whole presents unique perspectives on ideas in deep learning and artificial intelligence, and their historical and philosophical roots.
Dickson explains,
Singularity is a term that comes up often in discussions about general AI. And as is wont with everything that has to do with AGI, there's a lot of confusion and disagreement on what the singularity is. But a key thing most scientists and philosophers agree on is that it is a turning point where our AI systems become smarter than ourselves. Another important aspect of the singularity is time and speed: AI systems will reach a point where they can self-improve in a recurring and accelerating fashion.
"Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates an even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve," Jalšenjak writes.
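The recursion Jalšenjak describes is easy to state mechanically. Here is a minimal sketch in Python of the takeoff dynamic the argument assumes; the `improve` step, its 10 percent gain, and the "remarkably more advanced" threshold are all invented for illustration, and the crucial line, where a system produces a successor smarter than itself, is simply stipulated, not demonstrated:

```python
# A toy model of the runaway loop the singularity argument assumes.
# The baseline, the 10% gain per generation, and the threshold are
# stipulations for illustration, not measurements of anything real.

def improve(intelligence: float, gain: float = 1.10) -> float:
    """Hypothetical step: a system designs a successor slightly smarter
    than itself. That this step is possible is the unproven premise."""
    return intelligence * gain

intelligence = 1.0   # human-level baseline, by stipulation
generation = 0
while intelligence < 100.0:   # "remarkably more advanced than humans"
    intelligence = improve(intelligence)
    generation += 1

print(f"runaway reached after {generation} generations")  # 49 here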
No. Wait. Is there clear evidence that less intelligent entities can simply create more intelligent ones? Consider,
A recent paper on the evolution of learning explores how computers could begin to evolve learning in the same way as natural organisms did. The authors use Avida, a software program for simulating evolution, to support their claim.
Avida was originally intended to demonstrate how Darwinian evolution, which could occur without design in nature, is supposed to work. However, as many have shown, the program actually ended up demonstrating quite conclusively the need for design. This latest paper on using Avida to simulate the evolution of learning has shown the same thing.
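For readers unfamiliar with such platforms: digital-evolution systems like Avida run populations of self-replicating programs through a mutate-and-select loop. The sketch below is not Avida, just a bare-bones Python illustration of that general loop with a 20-bit genome; notice that the fitness target is supplied by the experimenter, which is precisely the design that critics say such demonstrations smuggle in:

```python
import random

# Bare-bones mutate-and-select loop of the kind digital-evolution
# platforms are built around. Not Avida itself: a toy with a 20-bit
# genome and an experimenter-chosen target.

TARGET = [1] * 20  # the "goal" is designed in by the experimenter

def fitness(genome):
    # Reward matches to the designed target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = [mutate(random.choice(survivors))   # reproduction
                  for _ in range(50)]

print("best fitness:", fitness(max(population, key=fitness)))
```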
Many people do sincerely believe that higher intelligence can just somehow evolve from lower intelligence. But sincere belief isn't evidence. And Dickson stresses, "To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such a feat." So we are talking about whether superintelligent AI, if it ever arrives, can be considered alive.
And that is a more complex question than we might at first suppose. First, there are 123 definitions of life out there, with different sciences tending to prefer their own:
It is surprisingly difficult to pin down the difference between living and non-living things.
To make matters worse, different kinds of scientist have different ideas about what is truly necessary to define something as alive. While a chemist might say life boils down to certain molecules, a physicist might want to discuss thermodynamics.
The classic borderline case is viruses. They are not cells, they have no metabolism, and they are inert as long as they do not encounter a cell, so many people (including many scientists) conclude that viruses are not living, says Patrick Forterre, a microbiologist at the Pasteur Institute in Paris, France.
For his part, Forterre thinks viruses are alive, but he acknowledges that the decision really depends on where you decide to place the cut-off point.
Arguing for the panpsychist view that electrons may be conscious, Tam Hunt makes the point that
Many biologists and philosophers have recognized that there is no hard line between animate and inanimate. J.B.S. Haldane, the eminent British biologist, supported the view that there is no clear demarcation line between what is alive and what is not: We do not find obvious evidence of life or mind in so-called inert matter; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.
Niels Bohr, the Danish physicist who was seminal in developing quantum theory, stated that the very definitions of life and mechanics are ultimately a matter of convenience. [T]he question of a limitation of physics in biology would lose any meaning if, instead of distinguishing between living organisms and inanimate bodies, we extended the idea of life to all natural phenomena.
So there isn't a simple rule we can apply.
That said, some of the arguments for AI as a form of life sound suspiciously like the arguments around extraterrestrial beings:
There's a great tendency in the AI community to view machines as humans, especially as they develop capabilities that show signs of intelligence. While that is clearly an overestimation of today's technology, Jalšenjak also reminds us that artificial general intelligence does not necessarily have to be a replication of the human mind.
"That there is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid," he writes in his essay's footnote.
Very well, but that's what they tell us about the so-far undetected extraterrestrials: They might be a form of life we don't recognize as such. One can never disprove such a proposition but, as before, it does not amount to evidence for anything.
Then there is the question of purpose:
There are different levels to life, and as the trend shows, AI is slowly making its way toward becoming alive. According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today's goal-oriented AI. The fact that the AI is not aware of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalšenjak says, because we consider plants and trees as being alive even though they too do not have that sense of awareness.
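"Mindlessly crunching numbers toward a goal" is, concretely, just optimization. A minimal sketch, assuming nothing beyond hand-written gradient descent: the loop moves steadily toward its objective, yet the objective itself is hard-coded by a human, which is the point taken up next:

```python
# Goal-directed behavior without awareness: plain gradient descent.
# The "purpose" (minimize the distance to a target) is written in by
# the programmer; the loop has no representation of why it runs.

target = 42.0           # the goal, chosen by a human
x = 0.0                 # the system's current state
learning_rate = 0.1

for step in range(100):
    gradient = 2 * (x - target)    # derivative of (x - target)**2
    x -= learning_rate * gradient  # step toward the goal

print(f"x = {x:.4f}")   # converges to ~42.0, "purposefully"
```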
Again, wait. Sophisticated computers have exclusively the purposes that humans program into them in our own interests, as do smart ovens and self-driving cars. These objects have no intrinsic purpose.
Plants have their own intrinsic purposes, which humans did not create, of growing and producing seeds. Humans can use plants and even trick them into doing something that is not part of their intrinsic purpose (seedless grapes, for example). But the original purpose is theirs. So we can give plants, but not computers, credit for purpose in life.
Jalšenjak goes on to argue that AI can be alive even though it does not need to reproduce itself because it can, after all, just replace worn-out parts. But that fact alone is evidence that an AI entity is not alive. Life forms must reproduce themselves in a vast variety of ways because they are, generally, unitary beings, not a collection of swappable parts.
And what about self-improvement, which is regarded by some as part of a definition for life?
Today's machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to the data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that can no longer detect faces because people are wearing masks. These algorithms can now retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very small when compared to the broad capabilities of humans and higher-level animals, but it would be comparable to, say, trees that adapt by growing deeper roots when they can't find water at the surface of the ground.
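The adaptation Dickson describes is retraining on new data. Here is a minimal sketch of that pattern using scikit-learn's incremental SGDClassifier; the synthetic two-dimensional features and the particular drift are invented stand-ins for the "unmasked" versus "masked" shift:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def sample(center, label, n=200):
    """Draw n points around a class center with the given label."""
    X = rng.normal(center, 0.5, size=(n, 2))
    return X, np.full(n, label)

# Original world: class 0 near (0, 0), class 1 near (2, 2).
X0, y0 = sample([0.0, 0.0], 0)
X1, y1 = sample([2.0, 2.0], 1)
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(np.vstack([X0, X1]), np.concatenate([y0, y1]),
                classes=np.array([0, 1]))

# The world changes: class 1 drifts across the learned boundary.
X1_new, y1_new = sample([-2.0, -2.0], 1)
print("accuracy before retraining:", clf.score(X1_new, y1_new))

# Retune the parameters on the new data, as described above.
for _ in range(5):
    clf.partial_fit(X1_new, y1_new)
print("accuracy after retraining: ", clf.score(X1_new, y1_new))
```

Note what the sketch does not show: the system never notices that the world changed. A human decided that accuracy had dropped and fed it new data.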
Tree roots? Digging deeper for water is hardly their greatest accomplishment. They are very complex systems, used by the trees for, among other things, exchanging information with other trees:
Researchers are unearthing evidence that, far from being unresponsive and uncommunicative organisms, plants engage in regular conversation. In addition to warning neighbors of herbivore attacks, they alert each other to threatening pathogens and impending droughts, and even recognize kin, continually adapting to the information they receive from plants growing around them. Moreover, plants can talk in several different ways: via airborne chemicals, soluble compounds exchanged by roots and networks of threadlike fungi, and perhaps even ultrasonic sounds. Plants, it seems, have a social life that scientists are just beginning to understand.
Plants are not thought by botanists to be conscious but they do communicate extensively without a mind or brain. Nor, and this is the main point, do they need humans to program them or teach them anything. It all happens with or without our knowledge, let alone our involvement.
Jalšenjak seems undeterred. He challenges us, "Are characteristics described here regarding live beings enough for something to be considered alive or are they just necessary but not sufficient?"
And Dickson responds,
Having just read I Am a Strange Loop by philosopher and scientist Douglas Hofstadter, I can definitely say no. Identity, self-awareness, and consciousness are other concepts that discriminate living beings from one another. For instance, is a mindless paperclip-builder robot that is constantly improving its algorithms to turn the entire universe into paperclips alive and deserving of its own rights?
So Dickson doesn't seem convinced. Still, he offers,
But like many other scientists, Jalšenjak reminds us that the time to discuss these topics is today, not when it's too late. "These topics cannot be ignored because all that we know at the moment about the future seems to point out that human society faces unprecedented change," he writes.
Maybe. But then again, maybe not.
"The time to discuss this is now!" implies that the scenario described must happen, so we have no choice but to prepare. Perhaps the discussion we should have first is: How plausible are the arguments that whatever AI apocalypse is proposed must happen? In this case, Jalšenjak didn't succeed in convincing Dickson that super AI should be considered alive. Maybe we don't need to have the discussion now, except as Sci-Fi Saturday food for thought.
The whole field could probably benefit from a dose of common sense and skepticism.
You may also enjoy:
Which is smarter? Babies or AI? Not a trick question. Humans learn to generalize from the known to the unknown without prior programming and do not get stuck very often in endless feedback loops.
AI expert: Artificial intelligences are NOT electronic people. AI makes mistakes no human makes, so some experts are trying to adapt human cognitive psychology to machines. David Watson of the Alan Turing Institute fills us in on some of the limitations of AI and proposes fixes based on human thinking.
and
AI will fail, like everything else, eventually. The more powerful the AI, the more serious the consequences of failure: "Overall, we predict that AI failures and premeditated malevolent AI incidents will increase in frequency and severity proportionate to AI's capability."
Link:
Could Super Artificial Intelligence Be, in Some Sense ...