Google’s AI got highly aggressive when competition got stressful in a fruit-picking game – Quartz

Let's pretend you care, very much, about winning a game. The competition heats up, your opposition is closing in, and you're at risk of losing. What do you do? If your competitive streak is alive and well, you get aggressive. Forget decorum, focus on the prize, and shove your opponent out of the way to claim your victory.

Turns out, Google's DeepMind artificial intelligence does much the same. The more intelligent the AI network is, the quicker it is to get aggressive in competitive situations where such aggression will pay off. The behavior raises questions about the link between intelligence and aggression, what it means for AI to mimic human-like emotional responses, and, if you're worried about potential robotic overlords, what we need to do to keep AI aggression in check.

In a study published online (but not yet in a peer-reviewed journal), DeepMind researchers had AI agents compete against each other in 40 million rounds of a fruit-gathering computer game. In the game, each agent had to collect as many apples as possible. They could also temporarily knock an opponent out of the game by hitting them with a laser beam.

When apples were abundant, the two agents were happy to collect their fruit without targeting each other. But in scarcer scenarios, with fewer apples to go around, the agents became more aggressive. The researchers also found that the greater the cognitive capacity of the agent, the more frequently it attacked its opponent. This makes sense: aiming and firing at an opponent is more complex behavior than simply gathering apples, and so requires greater intelligence.
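To make that incentive structure concrete, here is a minimal, hypothetical Python sketch of the game dynamics described above. It is not DeepMind's implementation; the respawn probability, the two-hit rule, and the time-out length are assumptions chosen for illustration. The point it captures is that firing the laser earns nothing directly, so it only pays off by removing the competitor while apples are scarce.

```python
# Toy sketch (not DeepMind's code) of the fruit-gathering game's incentives:
# agents earn +1 per apple collected, while a laser "tag" gives no reward and
# only temporarily removes the tagged opponent from play.

import random

APPLE_RESPAWN_PROB = 0.1   # assumed value; lower values make apples scarcer
TAG_TIMEOUT = 25           # assumed number of steps a tagged-out agent sits idle

class GatheringSketch:
    def __init__(self, n_apples):
        self.apples = n_apples
        self.timeouts = {0: 0, 1: 0}   # remaining time-out steps per agent
        self.hits = {0: 0, 1: 0}       # laser hits each agent has taken

    def step(self, agent, action):
        """action is 'collect' or 'tag'; returns this agent's reward for the step."""
        if self.timeouts[agent] > 0:          # a knocked-out agent does nothing
            self.timeouts[agent] -= 1
            return 0
        if action == "collect" and self.apples > 0:
            self.apples -= 1
            if random.random() < APPLE_RESPAWN_PROB:
                self.apples += 1              # apples regrow stochastically
            return 1                          # reward comes only from apples
        if action == "tag":
            other = 1 - agent
            self.hits[other] += 1
            if self.hits[other] >= 2:         # two hits knock the opponent out
                self.timeouts[other] = TAG_TIMEOUT
                self.hits[other] = 0
        return 0                              # tagging itself earns nothing
```

In a sketch like this, an agent that spends turns tagging gives up immediate apple rewards, which is why the aggressive strategy only becomes worthwhile when apples are scarce and the opponent is real competition.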

However, the AI also learned to display cooperative behavior when that brought a benefit. In a second game, two agents acted as wolves while a third was the prey. If the two wolf agents worked together to catch their prey, they received a higher reward. "When the two wolves capture the prey together, they can better protect the carcass from scavengers and hence receive a higher reward," the researchers explained in their paper. In this game, the more intelligent agents were less competitive and more likely to cooperate with each other.
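As with the sketch above, the following is a hypothetical illustration rather than the paper's code: the capture radius and the two reward values are assumed for illustration. It mirrors the rule the researchers describe, where a capture made with the second wolf nearby pays more than a lone capture, which is what makes cooperation worth learning.

```python
# Toy sketch (not the paper's implementation) of the wolf-and-prey capture reward:
# a lone capture pays a small reward, while a capture with both wolves nearby
# pays a larger, shared reward.

CAPTURE_RADIUS = 3    # assumed distance within which a wolf "supports" a capture
LONE_REWARD = 1.0     # assumed payoff for an unsupported capture
TEAM_REWARD = 5.0     # assumed payoff each wolf gets when both are close

def capture_reward(capturer_pos, partner_pos, prey_pos):
    """Return (capturer reward, partner reward) when the prey is caught."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # grid (Manhattan) distance

    if dist(partner_pos, prey_pos) <= CAPTURE_RADIUS:
        # Both wolves are near the prey: each receives the larger team reward.
        return TEAM_REWARD, TEAM_REWARD
    # Only the capturer is close: it gets the smaller lone-wolf payoff.
    return LONE_REWARD, 0.0
```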

The DeepMind researchers believe that as their studies of how AI agents compete become more complex, they could be used to better understand how humans learn to collaborate en masse. "This model also shows that some aspects of human-like behavior emerge as a product of the environment and learning," lead author Joel Leibo told Wired. "Say you want to know what the impact on traffic patterns would be if you installed a traffic light at a specific intersection. You could try out the experiment in the model first and get a reasonable idea of how an agent would adapt."

Unnervingly, this suggests that human responses to competitive scenarios aren't so different from learned AI responses. While a losing sports team's cutthroat tactics may seem like a deeply human response, the behavior is much the same as that of AI agents trained to compete.

As for whether aggressive AI fits into doomsday scenarios of robots overthrowing humans, well, robots don't need to show emotions to be a threat. While AI has fairly limited intelligence and is focused on tasks like fruit-picking, its aggressive behavior isn't much to worry about. For now, at least, the biggest threat is how the humans behind the AI decide to program their robots.

