The Singularity Could Be Closer Than You Think – Inverse

Posted: March 31, 2017 at 7:27 am

Sci-fi depictions of the future would have us think of the singularity, the moment when A.I. surpasses human intelligence, as either a technological messiah or the apocalypse. Experts in the field hold varying opinions about just how good or bad, gradual or sudden, it will be, but most agree that it will happen within the next hundred years, and probably sooner.

Futurists like Ray Kurzweil, a computer scientist and Google director of engineering, are proponents of a hard singularity, the kind that will happen at a particular date in time. Kurzweil has made 147 predictions since the 1990s about when the singularity will occur, his most recent being 2045. He'd previously predicted that artificial intelligence would achieve human levels of intelligence by 2029. Now he's set 2045 as the year when the singularity will happen, at which time he says "we will multiply our effective intelligence a billionfold by merging with the intelligence we have created."

Kurzweil upholds a positive take on the singularity, championing the event as a way to overcome age-old human problems and magnify human intelligence, as he details in his 2005 book The Singularity Is Near. "[The singularity] will result from the merger of the vast knowledge embedded in our own brains with the vastly greater capacity, speed, and knowledge-sharing ability of our technology," he writes. "The fifth epoch [when the singularity happens] will enable our human-machine civilization to transcend the human brain's limitations of a mere hundred trillion extremely slow connections."

Meanwhile, Masayoshi Son, CEO of SoftBank Robotics, argues that the singularity will happen by 2047, when the 10,000-point IQ of a single computer chip far surpasses the IQ of even the world's most intelligent human beings.

"For some people, the singularity is the moment in which we have an artificial superintelligence which has all the capabilities of the human mind, and is slightly better than the human mind," says Damien Scott, a graduate student at Stanford University in the schools of business and energy and earth sciences. "But I think that is a big ask." That argument predicts that not only will the singularity software or other platform be smarter than people, but that it will also have the ability to improve itself, he explains. "You get into territory where the system is smarter than the smartest human, and you can't comprehend what it will do or what its objective is."

"That's the more classical version of the singularity," Scott says: the hard take on a particular moment that may never actually occur all at once. But then there's another, softer take on the singularity, which seems to be more widely accepted.

"We'll start to see narrow artificial intelligence domains that keep getting better than the best human," Scott says. We already have calculators that can outperform any person, and within two to three years, the world's best radiologists will be computer systems, he says. Rather than an all-encompassing generalized intelligence that's better than humans at every single thing, Scott says the singularity will occur, and is occurring, piecemeal, across various fields of artificial intelligence.

Will it be self-aware or self-improving? Not necessarily, he says. "But that might be the kind of creep of the singularity across a whole bunch of different domains: all these things getting better and better, as an overall set of services that collectively surpass our collective human capabilities."

The gradual singularity is the kind Scott and others interviewed for this article predict will happen, rather than a momentous event on a random afternoon 28 years from now.

The singularity is already happening, according to Aaron Frank, principal faculty at Singularity University, a technology learning center. "It's a way of describing this rapid pace of change that the world is now experiencing," he says. "The term singularity that I subscribe to is a term borrowed from physics to describe an environment in which we no longer really understand much of what's happening around us. For example, the event horizon of a black hole is referred to as a singularity because the laws of physics don't apply there."

You'll see signatures of the singularity whenever you're surprised by certain technological events, Frank says. From image recognition to artificial chess or poker players outperforming humans, it's not always quite clear why technology has come to function better than those who created it, even though people mostly know what to expect from it. "But if you look inside these algorithms, they're already far more complex than any one human can truly understand," he says. "We used to think technology gave us total control and knowledge on what's happening, but now we no longer understand what's happening."

It's an exchange, Frank says: at the expense of total understanding, people are freed up to manage other tasks while AI takes care of the rest. "If we can turn over the diagnosing of cancer to a machine learning algorithm, that frees up doctors to do other things, like providing care to a patient," he says.

Echoing Frank's perspective on the singularity, Ed Hesse, CEO at GridSingularity, a blockchain technology and smart contract developer, points to intelligence like Ethereum, a decentralized platform run on a custom-made blockchain for smart contracts. It's a starting point, he says, another present-day manifestation of the singularity. "Once you have decentralized programs to facilitate all types of transactions and eradicate the middleman, the next level is AI and machine learning. Everything becomes one."

A platform like Ethereum can be used to find all kinds of patterns for specific purposes that people wouldn't have been able to see before, or on their own, Hesse explains. Open, yet secure, Ethereum elucidates generalized information that otherwise wouldn't have been made available.

"At the moment, [because] AI is very domain-specific, it's not generally smart in any way, but can learn to do something particularly well," says Dr. Mark Sagar, CEO and founder of Soul Machines, an artificial intelligence company that creates emotionally responsive avatars. "The way it holistically comes together into something that can actually control itself and be autonomous takes many different components. It makes sense that could happen by 2048."

So in the event of the singularity, be it sudden or gradual, will the hallmarks of human intelligence be preserved, in spite of AI that's smarter than us?

"A lot of philosophers describe the essence of humanity as being able to feel emotions," says Scott. There's a difference between experiencing emotions and understanding emotions. A machine might be able to understand emotions, based on indicators like facial recognition or language, but its inability to feel cognizant empathy would separate purely artificial intelligence from that which is human, he says.

"There is a lot that we don't know about how the brain works, for example, so there's a lot that still has to be discovered and determined as keys to realizing the proper AI," says Sagar. He foresees the development of a digital biology, or a hybrid of biological and digital systems. "Take Google, for example. It's got better knowledge than any person on the planet, general knowledge, but no practical knowledge about dealing with the situation in front of you that would emerge."

Hence, humanizing AI is one of the key components of a true singularity, Sagar suggests. Having computers that are capable of embodied cognition and social learning will be very important to the socialization of machines, he says. "Think about things like cooperation being the greatest force in human history. For us to cooperate with machines is one level; then for machines to cooperate with machines is another. It may well be that humans are always in there."

For machines to constructively cooperate with humans and other machines, humans would have to engineer an element of creativity into AI, Sagar says. And it's possible, he adds: "Where we start seeing machines create new ideas, the machines will be inventing new things."

One of the biggest obstacles will remain human suspicion of smart computers, drawn from decades of dystopian fantasies. Yet Sagar says we're already seeing breakthroughs in places like Japan, where home robots are part of people's quotidian routines. Those machines are often anthropomorphized and socially integrated, with transparent capabilities and intentions that people accept gladly. That's a hint of what the singularity can bring, he says: "I think it will be an unprecedented time of cooperation and creativity."


Madison is a New York/Los Angeles-based journalist, with a specialty covering science, religion, cannabis, and other drugs.
