Will Machines Ever Outthink Us?

Posted: February 6, 2017 at 3:38 pm


As Artificial Intelligence (AI) evolves by becoming smarter and more creative, will machines ever outthink us? How this question is answered may determine whether society will ultimately accept the further evolution of AI, or demand that it be stopped, an outcome not dissimilar to the banning of human cloning. Many have argued for the inevitability of artificial superintelligence, suggesting that once a machine becomes capable of self-learning and of setting its own goals, it will inexorably surpass the capability of any human brain. If this argument is true then we ought to be fearful of AI ever reaching that level, for who knows what will happen after that. Perhaps superintelligent AIs will decide to exterminate humans. But should we trust this argument for AI superintelligence to be true?

George Zarkadakis

There is a fundamental philosophical assumption made by those who believe that artificial superintelligence is inevitable, and we need to examine it carefully. As I have argued in my book In Our Own Image: The History and Future of Artificial Intelligence, to believe that a machine can be intelligent - in the same way that a human is intelligent - means that you take for granted that intelligence is something independent of the physicality of the brain. That bits and atoms are two different worlds; and that intelligence is all about bits and not about atoms. This is an idea that has roots in the philosophy of Plato. Plato believed that physical forms (e.g. a brain) are projections of non-physical ideals, and that those non-physical, immaterial ideals are what constitute the ultimate truth. Applying this Platonic idea to Artificial Intelligence is what mainstream AI researchers do. They believe that it is possible to decode intelligence by studying the human brain, and then to transfer the decoded pattern (the ideal, the bits, the master algorithm) to any other physicality they wish, for instance the hardware of a computational machine. For them the decoded pattern of intelligence can be discovered, given enough intellect and investment, just as mathematics, according to Platonists, is discovered rather than invented because it pre-exists. This worldview of intelligence is the foundation of the so-called computational theory of mind. If this theory is true, then it is indeed possible to create superintelligent machines, or rather superintelligent programs that can run on any general-purpose computational architecture.

The challenge to the computational theory of mind comes from an Aristotelian, or empirical, view of the world. In this worldview the form is the physical; there is no external, ideal, non-material world. For Aristotelians, the question of whether mathematics is invented or discovered is answered emphatically: it is invented. Numbers do not pre-exist; it is the act of enumerating physical objects that requires the invention of numbers. If one takes the empiricist view, then intelligence is a biological phenomenon, not a mathematical one. It can be simulated in a computer but it cannot be replicated. To replicate intelligence in a computer would be like replicating, say, metabolism or reproduction, which are also biological phenomena.

As I argued in my book, I am resolutely siding with the Aristotelians, however much of a minority they may be in the debate over the future, and nature, of Artificial Intelligence. I do so not because of some deep-seated materialistic conviction, but because it seems to me that when we speak about intelligence we often miss, or purposely ignore, how inseparable this concept is from consciousness. By intelligence I mean how competent an organism is at finding a novel solution to a new problem; by consciousness I mean its level of self-awareness, or comprehension of its own actions and internal states. But let me explain further why I am an Aristotelian when it comes to AI.

When I look at the natural world I see intelligence and consciousness as one, manifesting in varying degrees across the wide spectrum of life that begins with unicellular organisms and ends with more complex creatures such as dolphins, whales, octopuses, and primates. What I see in that spectrum is how awareness emerges out of biological automation. The level of awareness seems to depend on the number and sophistication of feedback loops inside a biological organism. Nature allows evolution towards increasing levels of self-awareness because, for some species, higher levels of self-awareness provide significant survival advantages. We humans are not the only species with self-awareness, although we seem to be the species with the highest level of it. Perhaps this biological function is so developed in us because it is necessary for creating civilizations, which in turn allow more degrees of freedom for inventing strategies and technologies for survival.

If my side of the argument is true, then it is impossible to decode biological intelligence and transfer it to an artificial artefact. At best, one can only simulate some aspects of intelligence, never the whole thing. To have the whole thing, or something greater, you need biology and evolution. You need atoms and molecules. Maths and algorithms would never be enough. Nevertheless, as Turing demonstrated, you can have competence without comprehension. Intelligent machines will probably outthink us in nearly everything, except comprehension of what it is they are being competent at; for that, consciousness is a sine qua non.

Join me in debating "Will Machines Ever Outthink Us?" at Mathscon, Imperial College London, on February 11, 2017.
