Will the Evolution of Artificial Intelligence Harm Humans? Depends – The Mac Observer

We tend to speak about Artificial Intelligence (AI) in terms of the pinnacle of its potential evolution, and that's a problem.

This article I found showcases one current debate about the potential for AI doing evil. Elon Musk fires back at Mark Zuckerberg in a debate about the future: "His understanding of the subject is limited."

On Sunday afternoon, while smoking some meats in his back garden, Zuckerberg, the Facebook CEO, questioned why Musk, the CEO of Tesla, SpaceX, and OpenAI, was being so negative about AI.

What they're debating is the future potential for AIs that can, for all practical purposes, duplicate and then go far beyond the capabilities of the human mind. And, in addition, possess the ability to interact with human beings for good or evil.

Of course, AIs today are very limited. We discover those limitations when we realize that AI demos typically only address one or two specific tasks. Like playing chess. Or driving a car on the roadways in traffic. Our interactions with Siri confirm those limits every day.

So what's the debate really about? I think those who worry, like Elon Musk, ponder two things in particular.

First, just as Apple has built a sophisticated web browser, called Safari, that serves us well and tries to protect us, there's no way to perfectly protect the user when dedicated minds, the hackers, try to subvert the good uses of Safari for financial gain or other purposes.

Second, even though Apple has, for example, joined the Partnership on AI consortium, there's no guarantee that the knowledge or ethics developed there will be used only for good purposes, all over the planet Earth.

So then the question boils down to the limits of human capabilities. I don't think anyone doubts that we'll get smart enough to build an entity like Star Trek's Lt. Commander Data. See NASA's page on the science of Star Trek:

"At a conference on cybernetics several years ago, the president of the association was asked what is the ultimate goal of his field of technology. He replied, 'Lieutenant Commander Data.' Creating Star Trek's Mr. Data would be a historic feat of cybernetics, and it's very controversial in computer science whether it can be done."

So how long will this take? If it takes us another 100 years to build a Lt. Commander Data, unforeseen events, war, climate change, and cultural shifts could prevent that kind of evolution from ever happening. On the other hand, if we develop AI technology too fast, without adequate controls, we could end up where we did with nuclear weapons: a lot of power that we struggle to keep under control.

In the end, I think both Mr. Zuckerberg and Mr. Musk have equally good points. In Mr. Zuckerberg's favor, AI technology will do a lot to help us out in the short term, limited in scope as it is. However, in the long run, Mr. Musk has a great point. Namely, our species hasn't been able to control its worst instincts on the current-day internet. What will we have to do as a species to avoid the worst possible fate of massive AI evil inflicted on ourselves?

That's what we're in the process of finding out.

