Guest Post: Malcolm MacIver on War with the Cylons | Cosmic Variance

We’re very happy to have a guest post from Malcolm MacIver. See if you can keep this straight: Malcolm is a professor in the departments of Mechanical Engineering and Biomedical Engineering at Northwestern, with undergraduate degrees in philosophy and computer science, and a Ph.D. in neuroscience. He’s also one of the few people I know who has a doctorate but no high school diploma.

With this varied background, Malcolm studies connections between biomechanics and neuroscience — how do brains and bodies interact? This unique expertise helped land him a gig as the science advisor on Caprica, the SyFy Channel’s prequel show to Battlestar Galactica. He also blogs at Northwestern’s Science and Society blog. It’s a pleasure to welcome him to Cosmic Variance, where he’ll tell us about robots, artificial intelligence, and war.

———————————————————

It’s a pleasure to guest blog for CV and Sean Carroll, a friend of some years now. In my last posting, back at Northwestern University’s Science and Society Blog, I introduced some issues at the intersection of robotics, artificial intelligence (AI), and morality. While I’ve long been interested in this nexus, the most immediate impetus for that post was meeting Peter Singer, author of the excellent book ‘Wired for War’ on the rise of unmanned warfare, while simultaneously working for the TV show Caprica and for a U.S. military research agency that funds some of the work in my laboratory on bio-inspired robotics. Caprica, for those who don’t know it, is a show about a time when humans invent sentient robotic warriors. It is a prequel to Battlestar Galactica, and as we know from that show, these warriors rise up against humans and nearly drive them to extinction.

Here, I’d like to push the idea that, as interesting as the technical challenges of making sentient robots like those on Caprica are, the moral challenges of making such machines are equally interesting. But “interesting” is too dispassionate: I believe that we need to begin the conversation on these moral challenges. Roboticist Ron Arkin has been making this point for some time, and has written a book on how we might integrate ethical decision making into autonomous robots.

Given that we are hardly at the threshold of building sentient robots, it may seem overly dramatic to characterize this as an urgent concern, but new developments in the way we wage war should make you think otherwise. I heard a telling sign of how things are changing when I recently tuned in to the live feed of WTOP, the most popular radio station in Washington, DC. The station ran commercial after commercial from iRobot (of Roomba fame), a leading builder of unmanned military robots, clearly targeting military listeners. These commercials reflect how the number of unmanned robots in the military has gone from close to zero in 2001 to over ten thousand now, with the pace of acquisition still accelerating. For more details, see Peter Singer’s ‘Wired for War’ or the March 23, 2010 congressional hearing on The Rise of the Drones.

While we are all aware of these trends to some extent, they have hardly become a significant public concern. We are comforted by the knowledge that the final kill decision is still made by a human. But is this comfort warranted? The weight of that decision changes as the way war is conducted, and the highly processed information the decision rests on, become mediated by unmanned military robots. Some of these trends have been helpful to our security. The drones have been effective against the Taliban and Al-Qaeda, for example, because they can carry out long-duration surveillance of, and strikes against, sparsely distributed non-state actors. In a military context, however, unmanned robots are clearly the gateway technology to autonomous robots: machines that may eventually be in a position to make decisions that carry moral weight.

“But wait!” many will say. “Isn’t this the business-as-usual, robotics-and-AI-are-just-around-the-corner argument we’ve heard for decades?” Robotics and AI have long been criticized for promising more than they could deliver. Are there signs that this could be changing? While an enormous amount could be said about the reasons for AI’s past difficulties, it is clear that some of them stem from having too narrow a conception of what constitutes intelligence, a topic I’ve touched on in the recent Cambridge Handbook of Situated Cognition. This narrow conception revolved around what might loosely be described as cognitive processing or reasoning. Newer approaches, such as embodied AI and probabilistic robotics, try to integrate aspects of what being more than a symbol processor involves: for example, sensing the outside world and dealing with the uncertainty in those signals in order to be highly responsive, and emotional processing. Advanced multi-sensory signal processing techniques such as Bayesian filtering were in fact integral to the success of Stanley, the autonomous robot that won DARPA’s Grand Challenge by driving, without human intervention, across a challenging desert course.
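To make the idea of Bayesian filtering a little more concrete, here is a minimal sketch of the discrete Bayes filter that probabilistic robotics builds on: a robot repeatedly predicts where it will be after moving, then updates its belief with a noisy sensor reading. The corridor-with-doors scenario and every probability below are illustrative assumptions of mine, not anything drawn from Stanley’s actual software.

```python
# A minimal sketch of a discrete Bayes filter for robot localization.
# The world layout, motion noise, and sensor accuracy are made-up numbers
# chosen purely for illustration.

import numpy as np

# World: 10 cells arranged in a ring; 1 marks a cell with a door, 0 a plain wall.
world = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])

# The belief over the robot's position starts out uniform (no idea where it is).
belief = np.full(len(world), 1.0 / len(world))

P_SENSE_CORRECT = 0.9   # probability the door sensor reads correctly (assumed)
P_MOVE_EXACT = 0.8      # probability a "move one cell" command lands exactly (assumed)
P_MOVE_SLIP = 0.1       # probability of not moving, or of overshooting by one cell

def update(belief, measurement):
    """Measurement update: reweight each cell by how well it explains the reading."""
    likelihood = np.where(world == measurement, P_SENSE_CORRECT, 1.0 - P_SENSE_CORRECT)
    posterior = likelihood * belief
    return posterior / posterior.sum()   # renormalize so the belief stays a distribution

def predict(belief):
    """Motion update: blur the belief with the noisy one-cell motion model."""
    exact = np.roll(belief, 1)           # moved exactly one cell forward
    undershoot = belief                  # failed to move
    overshoot = np.roll(belief, 2)       # moved two cells
    return P_MOVE_EXACT * exact + P_MOVE_SLIP * undershoot + P_MOVE_SLIP * overshoot

# The robot senses a door, moves, senses no door, moves, senses a door again.
for z in [1, 0, 1]:
    belief = update(belief, z)
    belief = predict(belief)

print(np.round(belief, 3))  # the belief sharpens around positions consistent with the readings
```

Stanley’s real pipeline used far more sophisticated filters over continuous state, fusing laser, camera, GPS, and inertial data, but the predict-then-update cycle above is the same basic idea: maintain a probability distribution over what the world might be, rather than a single brittle guess.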

As these prior technical problems are overcome, autonomous decision making will become more common, and eventually it will raise moral challenges. One challenge is how we should behave toward artifacts, be they virtual or robotic, endowed with a level of AI that makes our treatment of them a moral issue. The other is how they treat us, most especially in military or police contexts. What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent? Do we place responsibility on the designers of the decision-making systems, on the military strategists who put machines with known limitations into contexts they were not designed for, or on some other entity?

Both of these challenges are about morality and ethics. But it is not clear whether our current moral framework, a hodgepodge of religious values, moral philosophies, and secular humanist values, is up to the task of responding to them. It is for this reason that the future of AI and robotics will be as much a moral challenge as a technical one. Yet while we have many smart people working on the technical challenges, very few are working on the moral ones.

How do we meet the moral challenge? One possibility is to look toward science for guidance. In my next posting I’ll discuss some of the efforts in this direction, pushed most recently by a new activist form of atheism which holds that the idea that we need religion to ground morality is not only incorrect but dangerous. We can instead, its proponents claim, look to the new sciences of happiness, empathy, and cooperation to guide our value system.

