Simple Pictures That State-of-the-Art AI Still Can't Recognize

Look at these black and yellow bars and tell me what you see. Not much, right? Ask state-of-the-art artificial intelligence the same question, however, and it will tell you they're a school bus. It will be over 99 percent certain of this assessment. And it will be totally wrong.

Computers are getting truly, freakishly good at identifying what they're looking at. They can't look at this picture and tell you it's a chihuahua wearing a sombrero, but they can say that it's a dog wearing a hat with a wide brim. A new paper, however, directs our attention to one place these super-smart algorithms are totally stupid. It details how researchers were able to fool cutting-edge deep neural networks using simple, randomly generated imagery. Over and over, the algorithms looked at abstract jumbles of shapes and thought they were seeing parrots, ping pong paddles, bagels, and butterflies.

The findings force us to acknowledge a somewhat obvious but hugely important fact: Computer vision and human vision are nothing alike. And yet, since it increasingly relies on neural networks that teach themselves to see, we're not sure precisely how computer vision differs from our own. As Jeff Clune, one of the researchers who conducted the study, puts it, when it comes to AI, "we can get the results without knowing how we're getting those results."

One way to find out how these self-trained algorithms get their smarts is to find places where they are dumb. In this case, Clune, along with PhD students Anh Nguyen and Jason Yosinski, set out to see if leading image-recognizing neural networks were susceptible to false positives. We know that a computer brain can recognize a koala bear. But could you get it to call something else a koala bear?
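To make that question concrete: the "certainty" at stake here is just the probability a trained classifier assigns to its top guess. The snippet below is a hypothetical illustration, not the researchers' code, of how you would read that number out of an off-the-shelf ImageNet model, assuming torchvision's pretrained ResNet-50.

```python
# Hypothetical sketch (not the study's code): ask a pretrained ImageNet
# classifier for its top label and softmax confidence. We assume
# torchvision's ResNet-50; any off-the-shelf ImageNet model would do.
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V1
model = models.resnet50(weights=weights).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top_prediction(path):
    """Return (label, confidence) for the image file at `path`."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    conf, idx = probs.max(dim=1)
    return weights.meta["categories"][idx.item()], conf.item()
```

A "99 percent certain" result just means this probability exceeds 0.99 for some class, whether or not the image shows anything at all.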

To find out, the group generated random imagery using evolutionary algorithms. Essentially, they bred highly effective visual bait. A program would produce an image, and then mutate it slightly. Both the copy and the original were shown to an off-the-shelf neural network trained on ImageNet, a data set of 1.3 million images that has become a go-to resource for training computer vision AI. If the copy was recognized as something, anything, in the algorithm's repertoire with more certainty than the original, the researchers would keep it, and repeat the process. Otherwise, they'd go back a step and try again. "Instead of survival of the fittest, it's survival of the prettiest," says Clune. Or, more accurately, survival of the most recognizable to a computer as an African Gray Parrot.
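A minimal sketch of that loop appears below. It is a simplified, hypothetical reconstruction, not the paper's actual code: it hill-climbs on raw pixels toward a single target class, reusing the `model` from the snippet above. (In the paper, this kind of direct pixel encoding tended to yield the static-like images described further down, while a richer, pattern-generating encoding produced the more structured ones.)

```python
# Hypothetical, simplified reconstruction of the evolutionary loop the
# article describes: mutate an image, keep the mutant whenever the
# network is MORE confident it shows the target class. The actual study
# used more sophisticated evolutionary algorithms and image encodings.
import torch

def class_confidence(model, image, target_class):
    """Softmax probability the model assigns to target_class."""
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)
    return probs[0, target_class].item()

def evolve_fooling_image(model, target_class, steps=10000, noise=0.05):
    # Start from random noise in the shape the network expects
    # (3 x 224 x 224; input normalization omitted for brevity).
    parent = torch.rand(3, 224, 224)
    best = class_confidence(model, parent, target_class)
    for _ in range(steps):
        # Mutate: nudge every pixel a little, clamp to the valid range.
        child = (parent + noise * torch.randn_like(parent)).clamp(0, 1)
        score = class_confidence(model, child, target_class)
        if score > best:  # "survival of the most recognizable"
            parent, best = child, score
        if best > 0.99:   # the paper's 99 percent confidence threshold
            break
    return parent, best
```

Given enough steps, calling `evolve_fooling_image(model, target_class=i)` for an ImageNet index of your choice tends to drive the confidence up even though the image never comes to look like anything to a human.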

Eventually, this technique produced dozens of images that were recognized by the neural network with over 99 percent confidence. To you, they won't seem like much. A series of wavy blue and orange lines. A mandala of ovals. Those alternating stripes of yellow and black. But to the AI, they were obvious matches: Starfish. Remote control. School bus.

In some cases, you can start to understand how the AI was fooled. Squint your eyes, and a school bus can look like alternating bands of yellow and black. Similarly, you could see how the randomly generated image that triggered "monarch" would resemble butterfly wings, or how the one that was recognized as "ski mask" does look like an exaggerated human face.

But it gets more complicated. The researchers also found that the AI could routinely be fooled by images of pure static. Using a slightly different evolutionary technique, they generated another set of images. These all look exactly alike, which is to say, like nothing at all, save maybe a broken TV set. And yet, state-of-the-art neural networks pegged them, with upward of 99 percent certainty, as centipedes, cheetahs, and peacocks.

The fact that we're cooking up elaborate schemes to trick these algorithms points to a broader truth about artificial intelligence today: Even when it works, we don't always know how it works. "These models have become very big and very complicated and they're learning on their own," says Clune, who heads the Evolving Artificial Intelligence Laboratory at the University of Wyoming. "There's millions of neurons and they're all doing their own thing. And we don't have a lot of understanding about how they're accomplishing these amazing feats."

Studies like these are attempts to reverse engineer those models. They aim to find the contours of the artificial mind. "Within the last year or two, we've started to really shine increasing amounts of light into this black box," Clune explains. "It's still very opaque in there, but we're starting to get a glimpse of it."
