A Guided Tour of AI and the Murky Ethical Issues It Raises – The Wire

Posted: November 30, 2019 at 10:08 am

As I read Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans, I found myself recalling John Updike's 1986 novel Roger's Version. One of its characters, Dale, is determined to use a computer to prove the existence of God. Dale's search leads him into a mind-bending labyrinth where religious-metaphysical questions overwhelm his beloved technology and leave the poor fellow discombobulated. I sometimes had a similar experience reading Artificial Intelligence. In Mitchell's telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn't for the faint of heart, and neither is this book for nonscientists.

To begin with, artificial intelligence ("machine thinking," as the author puts it) raises a pair of fundamental questions: What is thinking and what is intelligence? Since the end of World War II, scientists, philosophers, and scientist-philosophers (the two have often seemed to merge during the past 75-odd years) have been grappling with those very questions, offering up ideas that seem to engender further questions and profound moral issues. Mitchell, a computer science professor at Portland State University and the author of Complexity: A Guided Tour, doesn't resolve these questions and issues (she all but acknowledges that they are irresolvable at present) but provides readers with insightful, common-sense scrutiny of how these and related topics pervade the discipline of artificial intelligence.

Mitchell traces the origin of modern AI research to a 1956 Dartmouth College summer study group: its members included John McCarthy (who was the group's catalyst and coined the term "artificial intelligence"); Marvin Minsky, who would become a noted artificial intelligence theorist; cognitive scientists Herbert Simon and Allen Newell; and Claude Shannon (the inventor of information theory). Mitchell describes McCarthy, Minsky, Simon, and Newell as the "big four" pioneers of AI. The study group apparently generated more heat than light, but Mitchell points out that the subjects McCarthy and his colleagues wished to investigate (natural-language processing, neural networks, machine learning, abstract concepts and reasoning, and creativity) are still integral to AI research today.


Mitchell's goal is to give a thorough (and I mean thorough) account not only of the ethical issues artificial intelligence raises today (and tomorrow), but of how the various branches of AI that the Dartmouth group pursued actually work. She is a good writer with broad knowledge of the topic (unsurprising, since she has a Ph.D. in computer science) and a canny mindfulness of both the merits and problems of AI. But even so, nonscientists will find it grueling to follow some of her explanations of the technical workings of AI. All too often, I found myself baffled and exasperated when she delved into high-tech arcana.

Take, for instance, the author's discussion of deep learning, which she says is itself "one method among many in the field of machine learning, a subfield of AI in which machines learn from data or from their own experiences." So far, so good. However, from there matters become tenebrous: "Deep learning simply refers to methods for training deep neural networks, which in turn refers to neural networks with more than one hidden layer. Recall that hidden layers are those layers of a neural network between the input and the output. The depth of a network is its number of hidden layers."
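For readers who want Mitchell's definition made concrete, the idea can be sketched in a few lines of code. This is a minimal illustration, not anything from the book: a toy feed-forward network whose layer sizes (4, 8, 8, 1) are chosen arbitrarily. It is "deep" in exactly Mitchell's sense, because it has more than one hidden layer between input and output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: input, two hidden layers, output (arbitrary, for illustration).
layer_sizes = [4, 8, 8, 1]

# One weight matrix per pair of adjacent layers.
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass input x through each hidden layer (ReLU), then a linear output."""
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)   # hidden layers sit between input and output
    return x @ weights[-1]         # output layer

# "The depth of a network is its number of hidden layers."
depth = len(layer_sizes) - 2       # subtract the input and output layers
x = rng.standard_normal(4)
y = forward(x)
print(f"hidden layers (depth): {depth}, output shape: {y.shape}")
```

A network with zero or one hidden layer would be shallow by this definition; adding a second hidden layer, as above, is all that "deep" technically requires, though practical deep networks have many more.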

From there, she goes, well, deeper, for about eight more pages of text, diagrams, and photos that fail to fully clarify the subject for a general audience. This kind of abstruseness, alas, is fairly frequent, but still, I urge readers to soldier through the technology warrens, because we need to understand the systems that are frightening so many today, and the dedicated reader will come away with at least a modicum of understanding about how AI operates.

I also wish the book had examined the role AI is playing in military weaponry, and how quantum computers affect, or will affect, artificial intelligence, or vice versa. In a recent article in the New York Times, Dario Gil, the director of IBM Research, is quoted as saying, "The reality is, the future of computing will be a hybrid between [the] classical computer of bits, AI systems, and quantum computing coming together."


The book is exemplary, however, when discussing where AI is now and where it might be going, as well as the moral issues involved. "Should we be terrified about AI?" she writes. "Yes and no. Superintelligent, conscious machines are not on the horizon. The aspects of humanity that we most cherish are not going to be matched by a bag of tricks. At least I don't think so. However, there is a lot to worry about regarding the potential for dangerous and unethical uses of algorithms and data."

Mitchell's message is that AI-phobes can chill out, because we're not now, and probably won't ever be, facing a dystopic future controlled by machines. One of the book's themes is that while it's impressive that AI devices have defeated human experts at Jeopardy! and Go, no matter how remarkable such tours de force are, those were the only things those particular machines were programmed to do, and they required human input. And in such areas as recognizing objects, transcribing or translating language, and conversing with Homo sapiens, AI is, to use a word Mitchell favors, "brittle."

The hand of humanoid robot AILA (artificial intelligence lightweight android) operates a switchboard during a demonstration by the German research centre for artificial intelligence at the CeBit computer fair in Hanover, March 5, 2013. Credit: Reuters/Fabrizio Bensch

Which is to say that even though great strides have been made (and will be made) in AI, such technology is a long way from being omnipotent, because it is error prone when faced with tasks that are, to its way of thinking, perplexing (be cautious, she warns, of riding in self-driving cars). And AI machines are still vulnerable to manipulation by hackers who might work for foreign governments or simply be motivated to cause mayhem.

Near the end of the book, Mitchell asks, "How far are we from creating general human-level AI?" She quotes a computer scientist, Andrej Karpathy, who says, "We are really, really far away," and then she concurs: "That's my view too."

Above all, her take-home message is that we humans tend to overestimate AI advances and underestimate the complexity of our own intelligence. "These supposed limitations of humans are part and parcel of our general intelligence," she writes. "The cognitive limitations forced upon us by having bodies that work in the world, along with the emotions and irrational biases that evolved to allow us to function as a social group, and all the other qualities sometimes considered cognitive shortcomings, are in fact precisely what enables us to be generally intelligent."


It occurred to me while reading about the extraordinary scientists in Artificial Intelligence that as AI becomes more intricately innovative, the men and women working in the field are keeping pace, becoming more redoubtably intelligent themselves. So why worry? If there's ever an AI attempt to subjugate humanity, I have no doubt that Mitchell and others like her, or their successors, will protect our brittle species.

Howard Schneider is a New York City-based writer who reviews books on technology and science. His work has appeared in the Wall Street Journal, the Humanist, Art in America, the American Interest, and other publications.

This article was originally published on Undark.
