Google’s AI has learned how to draw by looking at your doodles – The Verge

Posted: April 13, 2017 at 11:49 pm

Remember last year when Google released an AI-powered web tool that played Pictionary with your doodles? Well, surprise! Those doodles you drew have now been used to teach Google's AI how to draw. The resulting program is called Sketch-RNN and, frankly, it draws about as well as a toddler. But like any new parents, Google's AI scientists are proud as punch.

To create Sketch-RNN, Google Brain researchers David Ha and Douglas Eck collected more than half a million user-drawn sketches from the Google tool Quick, Draw! Each time a user drew something on the app, it recorded not only the final image, but also the order and direction of every pen stroke used to make it. The resulting data gives a more complete picture (ho, ho, ho) of how we really draw.
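To make the stroke-based data concrete, here is a minimal sketch (in Python) of how a doodle recorded as pen strokes can be turned into a sequence of pen movements — offsets from the previous point plus a flag marking the end of each stroke. The function name and exact triple layout are illustrative assumptions, not Google's actual code, though the Sketch-RNN paper describes a similar offset-based format.

```python
def strokes_to_sequence(strokes):
    """Convert a list of strokes (each a list of (x, y) points) into a
    sequence of (dx, dy, pen_lifted) triples, where pen_lifted marks
    the last point of each stroke."""
    seq = []
    prev_x, prev_y = 0, 0
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            pen_lifted = 1 if i == len(stroke) - 1 else 0
            # Store the movement relative to the previous point,
            # not the absolute position.
            seq.append((x - prev_x, y - prev_y, pen_lifted))
            prev_x, prev_y = x, y
    return seq

# A doodle made of two strokes: a short line, then a dot elsewhere.
doodle = [[(0, 0), (10, 0), (10, 10)], [(20, 20)]]
print(strokes_to_sequence(doodle))
# → [(0, 0, 0), (10, 0, 0), (0, 10, 1), (10, 10, 1)]
```

Unlike a finished bitmap, this sequence preserves the order and direction of every pen movement, which is exactly the extra information the researchers wanted.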

All in all, Ha and Eck gathered 70,000 training doodles for 75 different categories, including cat, firetruck, garden, owl, pig, face, and mermaid. Their goal? To create a machine that can draw and generalize abstract concepts in a manner similar to humans. And it can! After studying this data, the model first learned to draw based on human input, as seen below:

Notice, as seen most clearly in the penultimate row, that the AI is not just copying the human doodle line for line. The input on the left-hand side shows a cat with three eyes, but the AI copies the concept, not the sketch itself, and it knows enough to know that three eyes is one too many.

Next, Sketch-RNN learned to draw the objects without copying a starting sketch. (For more on how deep neural networks process and imitate data, check out our AI explainer.)

But what's the benefit of getting neural networks to sketch things in the first place, when they're already pretty good at making photo-realistic images? Well, as Ha and Eck explain, although doodles look childish to us, they're also masterpieces of abstraction and data compression. Doodles, they say, tell us something about how people represent and reconstruct images of the world around them. In other words, they're more human. And once you've taught an AI to sketch, you can deploy it in all sorts of fun ways. Sketch-RNN can complete doodles started by someone else:

And it can combine different doodles. So, in the picture below, the neural network has been asked to draw some combination of the categories "cat" and "chair." The result? Weird cat-chair chimeras:

It can also create what are called latent space interpolations: taking any number of doodle subjects and combining them in different ratios to create new sketches with multiple characteristics. In the group of drawings on the left, below, the AI has combined four different doodles: the pig, rabbit, crab, and face.
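The blending step itself is simple arithmetic. Here is a hedged illustration: each doodle is encoded by the network as a vector of numbers (its "latent" representation), and new sketches come from decoding a weighted mix of those vectors. The encoder and decoder are omitted, and the toy vectors below are made up; only the mixing is shown.

```python
def interpolate(latents, weights):
    """Blend several latent vectors by the given weights
    (which should sum to 1)."""
    dim = len(latents[0])
    return [sum(w * z[i] for z, w in zip(latents, weights))
            for i in range(dim)]

# Toy 4-dimensional latent vectors for four doodle categories
# (invented for illustration; real latents are much larger).
pig    = [1.0, 0.0, 0.0, 0.0]
rabbit = [0.0, 1.0, 0.0, 0.0]
crab   = [0.0, 0.0, 1.0, 0.0]
face   = [0.0, 0.0, 0.0, 1.0]

# An equal blend of all four; decoding this vector would yield a
# sketch with characteristics of each.
blend = interpolate([pig, rabbit, crab, face], [0.25, 0.25, 0.25, 0.25])
print(blend)
# → [0.25, 0.25, 0.25, 0.25]
```

Varying the weights slides the output smoothly between categories, which is what produces the pig-rabbit-crab-face hybrids in the figure.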

These drawings are obviously quite basic, but the methods used to create them are interesting and potentially useful. In the future, AI programs like Sketch-RNN could be used as creative aids for designers, architects, and artists. If someone is struggling with a certain picture or design, they could get an AI to absorb their work and spit out a few more suggested variations. The images the computer produces might not be useful in themselves, but they could spark something in the human. Is this AI creativity? It's difficult to know what else to call it.
