AI is creating new types of art, and new types of artists – Seattle Times

Posted: August 16, 2017 at 6:18 pm

The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

MOUNTAIN VIEW, Calif. – In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls punk-influenced bluegrass: "Johnny Rotten crossed with Johnny Cash." But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. "My only goal in life was to mix AI and music," Eck said.

It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, "Gödel, Escher, Bach: An Eternal Golden Braid." Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.

But during the next two decades, working on the fringe of academia, Eck kept chasing the idea, and eventually, the AI caught up with his ambition.

Last spring, a few years after taking a research job at Google, Eck pitched the same idea he had pitched Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.

With its empire of smartphones, apps and internet services, Google is in the business of communication, and Eck sees Magenta as a natural extension of this work.

"It's about creating new ways for people to communicate," he said during a recent interview inside the small two-story building here that serves as headquarters for Google AI research.

The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behavior by analyzing vast amounts of data.

By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognize a bike. This is how Facebook identifies faces in online photos, how Android phones recognize spoken commands, and how Microsoft Skype translates one language into another. But these complex systems can also create art. By analyzing a set of songs, for instance, they can learn to build similar sounds.
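To make "learning from examples" concrete, here is a minimal sketch in PyTorch of the kind of training loop involved. The random tensors stand in for a labeled photo collection, and the tiny model and class labels are placeholders for illustration, not anything Google or Facebook actually uses:

```python
import torch
from torch import nn

# Toy stand-in for a labeled photo dataset: 64 tiny RGB images,
# each labeled 1 ("bicycle") or 0 ("not a bicycle").
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

# A deliberately small convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # two outputs: "bicycle" vs. "not a bicycle"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Learning" is just this loop: predict, measure the error,
# nudge the weights to reduce it, and repeat over many examples.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

At real scale the same loop runs over millions of photos and a far larger network, which is what lets the patterns it finds generalize to photos it has never seen.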

As Eck says, these systems are at least approaching the point, still many, many years away, when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.

But that end game, as much a way of undermining art as creating it, is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Eck and his team are combining them to form something that did not exist before, creating new ways that artists can work.

"We're making the next film camera," Eck said. "We're making the next electric guitar."

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other AI techniques.

"This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
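That "enhance and repeat" loop can be sketched in a few lines: instead of adjusting the network, DeepDream-style code adjusts the image itself so that whatever a chosen layer faintly detects gets amplified. The sketch below, a rough illustration rather than Google's actual tool, uses a small untrained convolutional stack as a placeholder where DeepDream uses layers of a large pretrained image-recognition network:

```python
import torch
from torch import nn

# Placeholder "feature detector". DeepDream uses a large pretrained
# vision network; an untrained stack is enough to show the mechanics.
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
)

# Start from any photo; here, random pixels stand in for one.
image = torch.rand(1, 3, 128, 128, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(50):
    optimizer.zero_grad()
    activations = features(image)
    # Gradient *ascent* on the image: strengthen whatever the layer
    # responds to, so faint accidental patterns (that eyeball-ish
    # bit of fur) get progressively exaggerated.
    loss = -activations.norm()
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)  # keep pixel values displayable
```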

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways.

In January, Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it onto the canvas beneath him, she said, these new computational techniques create a broader palette for artists.

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own kanji, the Chinese-derived characters used in written Japanese, which are not so much written as drawn.

Soon, Eck and other Googlers spotted the blog, and now Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw.

By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They do not copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like. Then you can ask them to, say, draw a pig with a cat's head, or to visually subtract a foot from a horse, or sketch a truck that looks like a dog, or build a boat from a few random squiggly lines.
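The "pig with a cat's head" trick works because each drawing gets compressed into a short vector of numbers, and those vectors can be added, subtracted and blended before being decoded back into an image. Here is a minimal, self-contained illustration of that idea using a toy autoencoder on placeholder data; it is not Google's SketchRNN model, which uses a recurrent network trained on real pen-stroke sketches:

```python
import torch
from torch import nn

# Toy autoencoder: compress a "drawing" (here just a flat vector of
# numbers standing in for sketch data) into an 8-number code.
encoder = nn.Linear(64, 8)
decoder = nn.Linear(8, 64)

# Placeholders for encoded drawings of three concepts.
pig = torch.randn(1, 64)
cat = torch.randn(1, 64)
horse = torch.randn(1, 64)

with torch.no_grad():
    z_pig, z_cat, z_horse = encoder(pig), encoder(cat), encoder(horse)

    # The arithmetic happens in the compressed "idea space":
    # blend two concepts, or push one concept away from another.
    # (The blending weights here are arbitrary, for illustration.)
    pig_with_cat_head = decoder(0.5 * z_pig + 0.5 * z_cat)
    horse_minus_cat = decoder(z_horse - 0.3 * z_cat)
```

In a trained model, directions in that compressed space line up with recognizable features, which is what makes the blends come out as plausible drawings rather than noise.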

Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Ha. AI is not just creating new kinds of art; it is creating new kinds of artists.
