The ethics of artificial intelligence

As the Internet and digital systems penetrate ever deeper into our daily lives, concerns about artificial intelligence (AI) are intensifying. It is difficult to get exercised about connections between the Internet of Things and AI when the most visible indications are Siri (Apple's digital assistant), Google Translate and smart houses, but a growing number of people, including many with a reputation for peering over the horizon, are worried.

These questions have been with us for a long time: Alan Turing asked in 1950 whether machines could think, and that same year the writer Isaac Asimov contemplated what might happen if they could in I, Robot. (In truth, thinking machines can be found in ancient cultures, including those of the Greeks and the Egyptians.) About 30 years ago, James Cameron served up one dystopia created by AI in The Terminator. Science fiction became fact in 1997 when IBM's chess-playing Deep Blue computer beat world champion Garry Kasparov.

None of the darker visions has deterred researchers and entrepreneurs from pursuing the field. Reality has lagged those grim imaginings: it is hard to fear AI when the simplest demonstrations are more humorous than hair-raising.

Recently, however, there has been a growing chorus of concern about the potential for AI. It began last year when inventor Elon Musk, a man who spends considerable time on the cutting edge of technology, warned that with AI "we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, he's sure he can control the demon. It doesn't work out." For him, AI is an existential threat to humanity, more dangerous than nuclear weapons.

A month later, the distinguished scientist Stephen Hawking told the BBC that he feared the development of full artificial intelligence could bring an end to the human race. Not today, of course, but over time machines could become both more intelligent and physically stronger than human beings. Last month, Microsoft co-founder Bill Gates joined the group, saying that he did not understand people who were not troubled by the prospect of AI escaping human control.

Researchers most deeply engaged in this work are more sanguine. The head of Microsoft Research dismissed Gates' concern, saying he does not think that humankind will lose control of certain kinds of intelligences. He is focused instead on ways that AI will increase human productivity and improve lives. The prevailing view among the software engineers writing the programs that make AI possible is that they remain in control of what they program. Today, scientists and researchers are solving engineering problems; machines that can truly learn remain a distant prospect.

Nevertheless, a debate about prospects and possibilities is worthwhile. It is critical that those on the front lines of research think about the implications of their work. And since their focus tends to be on narrowly defined problems, others who can address the larger issues should join the discussion. This process should occur for all such technologies, whether atomic energy or nanotechnology. We must not be blinded by science, nor held captive by unfounded or fantastic fears.

Even if true AI is a far-off prospect, ethical issues are already emerging. Major auto manufacturers are experimenting with driverless cars. Drones are insinuating themselves into daily life, as are robots. The possibilities created by big data are driving increasing automation, and in some cases AI, in the office environment. Militaries are among the most intense users of high technology, and the adoption of that equipment has transformed decision-making throughout the chain of command. Some ethicists are concerned about the removal of human beings from the act of killing and from war. Legal and administrative frameworks to deal with the proliferation of these technologies and AI have not kept pace with their application. Ethical questions are often not even part of the discussion.

Google has set up an ethics committee to examine the implications of AI and its potential uses. But we cannot leave such examinations to the whims of the marketplace or to the cost-benefit calculations of a given quarter. There must be cross-disciplinary assessments to guarantee that a range of views is included in the discussions. Most significantly, there must be a way to ensure that these conversations are not dominated by those who have a stake in the expansion of AI.

Many working in this field dismiss the critics as fearmongers, or as worrying about distant futures that may never materialize. That is no excuse for ignoring the risks or for failing to ensure that boundaries are set, not just for research but for the application of that work. As always, the scientific community must be alert to the dangers and work to instill a culture of safety and ethics. We need to be genuinely intelligent about how humankind anticipates artificial intelligence.
