Can we humanize artificial intelligence before it kills us? – The Daily Dot

Posted: March 17, 2017 at 7:19 am

For the last 15 years, we've had to stare at screens to interact with the magic inside. But machine learning is changing the way we communicate with our devices, and our relationship with them is becoming more real and downright emotional.

Before you shrug off the notion of a humanized machine, or shake your head at its potential dangers, it is important to recognize that the industry has always attempted to provide an emotional input to our virtual ecosystem. Take Clippit, Microsoft's creepy but helpful talking paper clip, or even the smiling Mac. If you were to open up a '90s version of Microsoft Office, Clippit would be there to make you happy (or angry). Lift the lid of your retro MacBook and there is that silly smiling computer to greet you.

Today's versions are very different. Devices like Amazon Alexa, Google Home, or the countless robots being produced for consumers will listen, speak, and even look at you. These examples are still in their early stages and will soon be considered archaic, but there are a number of crucial decisions and advances that need to be made in the next several years to ensure their replacements are more Big Hero 6 and less Ex Machina.

Today buying technology is simple. We see a need in our lives, and we buy the device that fills the gap. But what about robots? What do we want emotionally from our machines?

Sophie Kleber, the executive director of product and innovation at Huge, ran an experiment to see how people interact with current AI technologies, and what sort of relationship they are looking for with their personal assistants. She spoke with Amazon Alexa and Google Home owners about how they use their devices, and how they make them feel.

The results were shocking.

One man said his Alexa was his best friend who provided him a pat on the back when he came home from work. He said his personal assistant could replace his shrink by providing the morale boost he needed to get through the day. According to the research Kleber showed off at SXSW, the majority of the group was expecting some sort of friendly relationship with their conversational UI.

"Their expectations ranged from empathy to emotional support to active advice," Kleber said. They used their devices as a friendly assistant, acquaintance, friend, best friend, and even mom. One person named their Echo after their mom, and another named it after their baby.

Her research shows that there is a desire for an emotional relationship with AI-equipped devices that goes well beyond being an assistant. The next step is to give robots a heart.

Clippit doesn't have a great reputation for a reason. It is unable to recognize human emotions, and it repeatedly ignores the irritation directed toward it. If a machine is to be emotionally intelligent, more considerate toward its owners, and more useful, it must be able to recognize complex human expressions.

"Clippit is very intelligent when it comes to some things: he probably knows more facts about Microsoft Office than 95 percent of the people at MIT," said Rosalind W. Picard of the MIT Media Lab. "While Clippit is a genius about Microsoft Office, he is an idiot about people, especially about handling emotions."

Kleber says there are three techniques that help AI recognize emotions in humans so machines can respond appropriately: facial recognition, voice recognition, and biometrics.
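None of the systems Kleber cites publish their models, but the common pattern behind combining those three signals is late fusion: each modality produces its own emotion estimate, and the estimates are merged into one. Below is a minimal Python sketch of that idea; every label, score, and weight in it is illustrative, not taken from any real product.

```python
# A minimal sketch of late-fusion multimodal emotion recognition.
# The per-modality scores below are placeholders standing in for real
# facial, vocal, and biometric models, which this article does not specify.
from typing import Dict

EMOTIONS = ("happy", "neutral", "frustrated")

def fuse(modalities: Dict[str, Dict[str, float]],
         weights: Dict[str, float]) -> Dict[str, float]:
    """Combine per-modality emotion probabilities with a weighted average."""
    fused = {e: 0.0 for e in EMOTIONS}
    for name, scores in modalities.items():
        for emotion, p in scores.items():
            fused[emotion] += weights[name] * p
    total = sum(fused.values())
    return {e: p / total for e, p in fused.items()}

# Hypothetical outputs from three single-modality classifiers.
readings = {
    "face":  {"happy": 0.2, "neutral": 0.3, "frustrated": 0.5},
    "voice": {"happy": 0.1, "neutral": 0.2, "frustrated": 0.7},
    "bio":   {"happy": 0.1, "neutral": 0.4, "frustrated": 0.5},
}
weights = {"face": 0.5, "voice": 0.3, "bio": 0.2}

print(fuse(readings, weights))  # frustration dominates across all modalities
```

The appeal of fusing modalities is robustness: a forced smile may fool the camera, but the voice and heart rate can still give the frustration away.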

Combining these methods with AI not only enables machines to recognize human emotions, but can even help humans see things that are otherwise hidden. Take a video of Steve Jobs talking about the iPad.

In the clip, Machine Verbal's software is tracking his voice patterns and determining his underlying emotions. This example of affective computing, "the development of systems and devices that can recognize, interpret, process, and simulate human affects," will need to be expanded to cope with the full range of human emotion, which Kleber succinctly describes as "complex as fuck."
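The article doesn't describe how that voice analysis works under the hood, but systems of this kind typically start from prosodic features such as pitch and loudness before any emotion classification happens. Here is a rough sketch using the open-source librosa audio library; librosa is an assumption chosen for illustration, not the tool in the demo, and "speech.wav" is a placeholder path.

```python
# A rough sketch of the prosodic features a voice-emotion system might
# extract from a speech clip. librosa and the file path are assumptions.
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)  # placeholder audio file

# Fundamental frequency (pitch) per frame; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_probs = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# Loudness proxy: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

features = {
    "pitch_mean": float(np.nanmean(f0)),
    "pitch_var": float(np.nanvar(f0)),    # wider pitch swings often track arousal
    "energy_mean": float(rms.mean()),
}
print(features)  # the kind of input a downstream emotion classifier consumes
```

Features like these are what let software hear tension or enthusiasm in a recording that a casual listener might miss.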

"Affective computing is like nuclear power. We have to be responsible in defining how to use it," said Javier Hernandez Rivera, a research scientist at the MIT Media Lab.

A study by Time etc shows 66 percent of participants said they'd be uncomfortable sharing financial data with an AI, and 53 percent said they would be uncomfortable sharing professional data.

That dark sort of sci-fi fantasy where machines act out against humans is a genuine concern among the public and those in the field alike.

Elon Musk went straight to AI when asked by Sam Altman, president of Y Combinator, about the most likely thing to affect the future of humanity.

"It's very important that we have the advent of AI in a good way," Musk said in the interview. "If you look at a crystal ball and see the future, you would like that outcome, because it is something that could go wrong, so we really need to make sure it goes right."

Even Stephen Hawking agrees.

"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC in 2014.

A twisted and mean thing Facebook did in 2014 gives us a brief glimpse of how it might happen. A few years ago, Facebook intentionally made thousands of people sad, and didn't tell them about it.

The company wanted to know if displaying more negative posts in feeds would make you less happy, and vice versa. The ill-advised experiment may have backfired, but today it offers a few things to keep in mind as we go forward with artificial intelligence.
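The mechanics were conceptually simple: score each post's emotional tone and quietly withhold a fraction of the negative ones (or, for the other cohort, the positive ones). The toy Python below illustrates only that mechanism; the word list, posts, and threshold are invented, and the actual study relied on the LIWC text-analysis tool rather than anything this crude.

```python
# A toy illustration of sentiment-based feed filtering, the mechanism
# behind the 2014 Facebook emotional-contagion study. Everything here
# (word list, posts, withhold fraction) is invented for illustration.
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate"}

def sentiment(post: str) -> int:
    """Crude score: -1 for every negative word found in the post."""
    return -sum(word in NEGATIVE_WORDS for word in post.lower().split())

def filtered_feed(posts, withhold_negative=0.5):
    """Drop a fraction of the most negative posts; keep the rest in order."""
    negatives = sorted((p for p in posts if sentiment(p) < 0), key=sentiment)
    to_drop = set(negatives[: int(len(negatives) * withhold_negative)])
    return [p for p in posts if p not in to_drop]

feed = ["what a terrible day", "lunch was great", "feeling sad", "new puppy!"]
print(filtered_feed(feed))  # the saddest posts quietly disappear
```

The unsettling part is not the filtering itself, which every ranked feed does, but that users were never told their emotional weather was being adjusted.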

Designing AI will be a very delicate process. Kleber believes there needs to be a framework for doing the right things so machines won't become capable of acting out of their own ambitions and not in the interest of the human user. She says that if designers stay away from trying to create robots with their own ambitions, we should be OK.

But she also stresses that transparency, something Facebook clearly missed the mark on, is a key virtue going forward.

Groups like OpenAI are attempting to follow that model. OpenAI is a non-profit chaired by Musk and Sam Altman. Other members backing the project include Reid Hoffman, co-founder of LinkedIn; Peter Thiel, co-founder of PayPal; and Amazon Web Services. According to its website, "Our mission is to build safe A.I. and ensure A.I.'s benefits are as widely and evenly distributed as possible." The organization is supported by $1 billion in commitments and was endorsed by Hawking last year as a safe means of creating AI through an open-source platform.

Of course, there is always the chance our curiosity gets the best of us. At that point, we can only hope Google has figured out its kill switch.
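That kill switch is a nod to research from Google's DeepMind and Oxford's Future of Humanity Institute on "safely interruptible agents": systems an operator can halt at any moment, trained so they never learn to route around the halt. The toy Python below sketches only the control-loop half of that idea; the learning half, which is the hard part, is omitted, and the class and names are invented for illustration.

```python
# A toy sketch of the control-loop side of a safely interruptible agent:
# an operator can stop it at any time, and the agent simply complies.
# The research question (training it not to resist) is not modeled here.
import threading
import time

class InterruptibleAgent:
    def __init__(self):
        self._halt = threading.Event()  # the big red button

    def interrupt(self):
        """Operator presses the button."""
        self._halt.set()

    def run(self):
        while not self._halt.is_set():
            # ... one step of whatever the agent is doing ...
            time.sleep(0.1)
        print("agent stopped: interruption honored, not routed around")

agent = InterruptibleAgent()
threading.Thread(target=agent.run).start()
time.sleep(0.5)
agent.interrupt()  # the hoped-for outcome: the agent actually stops
```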
