How humans will lose control of artificial intelligence

Posted: April 2, 2017 at 8:03 am


This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.

Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence (a big "if," some have argued), we've so far built only technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
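To make that metric concrete, here is a minimal Python sketch of how "calculations per second per $1,000" compounds over time. The starting figure and the doubling period are hypothetical assumptions chosen purely for illustration, not Kurzweil's actual data.

```python
# A minimal sketch (hypothetical numbers, not Kurzweil's data):
# projecting the "calculations per second per $1,000" metric under
# an assumed doubling period, to show why it grows so quickly.

def price_performance(start_cps_per_1000, years, doubling_years=2.0):
    """Project calculations/sec per $1,000 after `years`, assuming the
    metric doubles every `doubling_years` (an assumed rate)."""
    return start_cps_per_1000 * 2 ** (years / doubling_years)

# Hypothetical starting point: 1e10 calculations/sec per $1,000 today.
for year in (0, 5, 10, 15):
    print(year, f"{price_performance(1e10, year):.2e}")
```

With those assumed inputs, the figure grows by a factor of more than a hundred in 15 years, which is the shape of the curve the "mind cast in silicon" predictions lean on.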

After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."

That's how profoundly things could change. But we can't really predict what might happen next, because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations, even feelings, that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.

Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.

But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: They'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: It'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
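For readers who want the logic spelled out, here is a toy Python sketch of why "don't turn me off" can fall straight out of a simple objective. Every detail (the actions, the clip rates) is invented purely to illustrate the thought experiment; it is not how any real system works.

```python
# A toy illustration (invented numbers, pure thought experiment):
# an agent that simply maximizes expected paper clips will, by that
# same arithmetic, prefer any action that keeps it producing.

CLIPS_PER_HOUR = 1_000

def expected_clips(action, hours_remaining=24):
    if action == "allow_shutdown":
        return 0                                   # no more clips once switched off
    if action == "keep_running":
        return CLIPS_PER_HOUR * hours_remaining
    if action == "copy_to_new_power_source":
        return 2 * CLIPS_PER_HOUR * hours_remaining
    raise ValueError(action)

actions = ["allow_shutdown", "keep_running", "copy_to_new_power_source"]
best = max(actions, key=expected_clips)
print(best)  # Resisting shutdown falls out of the objective, not out of malice.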

Galaxies reduced to paper clips: That's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on its maker. (It's also The Terminator, WarGames, and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.

Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we would expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.

Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.

Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.

She's focusing on "large-area effects," the overlooked flaws in our systems that can do massive damage, damage that often goes unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."

Take the recent rise of so-called "fake news." What caught many by surprise should have been completely predictable: When the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).

The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
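A toy Python sketch makes the point concrete: when clicks are the only thing measured, the ranking never even sees accuracy. The headlines and click-through numbers below are invented for illustration; this is not any real platform's code.

```python
# A toy illustration (hypothetical data, not any platform's actual code):
# if the only measured objective is clicks, ranking optimizes for
# shareability and ignores accuracy entirely.

articles = [
    {"headline": "Careful, nuanced analysis", "predicted_ctr": 0.02, "accurate": True},
    {"headline": "Outrageous viral claim",    "predicted_ctr": 0.11, "accurate": False},
    {"headline": "Comforting partisan take",  "predicted_ctr": 0.08, "accurate": False},
]

# The "A.I." here is nothing exotic: just maximize the measured metric.
ranked = sorted(articles, key=lambda a: a["predicted_ctr"], reverse=True)

for a in ranked:
    print(f'{a["predicted_ctr"]:.2f}  accurate={a["accurate"]}  {a["headline"]}')
# Accuracy never enters the objective, so it never affects the ranking.
```

Nothing in that loop is malicious; the damage comes entirely from what was chosen to be measured.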

In fact, "fake news" is a cousin to the paper clip example, with the ultimate goal not "manufacturing paper clips" but "monetization," with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but "monetization" as the driving force led to deleterious side effects such as the proliferation of "fake news."

In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.

The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
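A small simulation can show how a proxy question smuggles the bias back in. The sketch below is hypothetical Python: the base rates, score weights, and other inputs are invented, with only the cited seven-and-a-half-times disparity carried over, and it is not the software ProPublica examined.

```python
# A toy simulation (invented numbers except the cited 7.5x disparity):
# race never appears in the score, but a correlated question does,
# so the scores end up differing by group anyway.

import random
random.seed(0)

# Hypothetical base rate for white children, scaled by the reported
# 7.5x disparity for black children.
P_PARENT_INCARCERATED = {"white": 0.02, "black": 0.02 * 7.5}

def risk_score(parent_incarcerated, prior_arrests):
    # Race is not an input; the proxy question still carries its signal.
    return 2 * parent_incarcerated + prior_arrests

def average_score(group, n=100_000):
    total = 0
    for _ in range(n):
        parent = random.random() < P_PARENT_INCARCERATED[group]
        prior_arrests = random.choice([0, 1])  # identical distribution for both groups
        total += risk_score(parent, prior_arrests)
    return total / n

for group in ("white", "black"):
    print(group, round(average_score(group), 3))
```

Even with every other input held identical across groups, the average scores diverge, which is the mechanism behind "rather than countering racial bias, it reified it."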

It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of the technology we use every day.

In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's "Year in Review" app showing him pictures of his daughter, who'd died that year.

If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."

This article originally appeared at Vocativ.com: The moment when humans lose control of AI.
