Carnegie Mellon's robotic painter is a step toward AI that can learn art techniques by watching people – VentureBeat


Can a robot painter learn from observing a human artist's brushstrokes? That's the question Carnegie Mellon University researchers set out to answer in a study recently published on the preprint server arXiv.org. They report that 71% of people found the approach the paper proposes successfully captured characteristics of an original artist's style, including hand-brush motions, and that only 40% of that same group could discern the brushstrokes drawn by the robot.

AI art generation has been exhaustively explored. RobotArt, an annual international competition, tasks contestants with designing artistically inclined AI systems. Researchers at the University of Maryland and Adobe Research have described an algorithm called LPaintB that can reproduce hand-painted canvases in the style of Leonardo da Vinci, Vincent van Gogh, and Johannes Vermeer. Nvidia's GauGAN enables an artist to lay out a primitive sketch that's instantly transformed into a photorealistic landscape via a generative adversarial AI system. And artists including Cynthia Hua have tapped Google's DeepDream to generate surrealist artwork.

But the Carnegie Mellon researchers sought to develop a style learner model by focusing on brushstroke techniques as intrinsic elements of artistic style. "Our primary contribution is to develop a method to generate brushstrokes that mimic an artist's style," they wrote. "These brushstrokes can be combined with a stroke-based renderer to form a stylizing method for robotic painting processes."

The team's system comprises a robotic arm, a renderer that converts images into strokes, and a generative model that synthesizes brushstrokes based on inputs from an artist. The arm holds a brush that it dips into buckets of paint before putting the brush to canvas, cleaning off excess paint between strokes. The renderer uses reinforcement learning to generate a set of strokes from the canvas and a given image, while the generative model identifies the patterns of an artist's brushstrokes and creates new ones accordingly.
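The paper describes the renderer only at a high level. As a rough illustration of the stroke-based rendering idea, the toy sketch below replaces the reinforcement-learning policy with a greedy random search over stroke parameters on a tiny grayscale canvas; the names, the rectangular stroke model, and the search strategy are all assumptions for this example, not the researchers' implementation.

```python
# Toy stroke-based renderer: greedily pick strokes that reduce the
# difference between the canvas and a target image. Illustrative only;
# the CMU system trains its renderer with reinforcement learning.
import random

W = H = 16  # tiny grayscale canvas, values in [0, 1]

def blank():
    return [[0.0] * W for _ in range(H)]

def apply_stroke(canvas, x, y, length, thickness, intensity):
    """Paint a rectangular dab; a crude stand-in for a real brush model."""
    out = [row[:] for row in canvas]
    for dy in range(thickness):
        for dx in range(length):
            px, py = x + dx, y + dy
            if 0 <= px < W and 0 <= py < H:
                out[py][px] = intensity  # opaque paint overwrites
    return out

def loss(canvas, target):
    """Sum of squared per-pixel differences."""
    return sum((canvas[j][i] - target[j][i]) ** 2
               for j in range(H) for i in range(W))

def render(target, n_strokes=20, n_candidates=50, seed=0):
    """Greedy search over random stroke parameters (an RL stand-in)."""
    rng = random.Random(seed)
    canvas = blank()
    for _ in range(n_strokes):
        best, best_loss = canvas, loss(canvas, target)
        for _ in range(n_candidates):
            cand = apply_stroke(canvas,
                                rng.randrange(W), rng.randrange(H),
                                rng.randrange(1, 8), rng.randrange(1, 4),
                                rng.random())
            cand_loss = loss(cand, target)
            if cand_loss < best_loss:
                best, best_loss = cand, cand_loss
        canvas = best
    return canvas

# Target: a bright square on a dark background.
target = [[1.0 if 4 <= i < 12 and 4 <= j < 12 else 0.0
           for i in range(W)] for j in range(H)]
before = loss(blank(), target)
after = loss(render(target), target)
print(before, after)  # the greedy strokes should shrink the error
```

In the real system, the search loop above is replaced by a learned policy, and the strokes it emits are further conditioned by the generative model so they resemble the artist's own brush motions.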

To train the renderer and generative models, the researchers designed and 3D-printed a brush fixture equipped with reflective markers that could be tracked by a motion capture system. An artist used it to create 730 strokes of different lengths, thicknesses, and forms on paper, which were indexed in grid-like sheets and paired with motion capture data.

In an experiment, the researchers had their robot paint an image of the fictional reporter Misun Lean. They then tasked 112 respondents unaware of the image's authorship (54 from Amazon Mechanical Turk and 58 students at three universities) with determining whether a robot or a human painted it. According to the results, more than half of the participants couldn't distinguish the robotic painting from an abstract painting by a human.

In the next stage of their research, the team plans to improve the generative model by developing a stylizer model that directly generates brushstrokes in the style of artists. They also plan to design a pipeline to paint stylized brushstrokes using the robot and to enrich the learning dataset with the new samples. "We aim to investigate a potential artist's input vanishing phenomenon," the coauthors wrote. "If we keep feeding the system with generated motions without mixing them with the original human-generated motions, there would be a point that the human-style would vanish on behalf of a new generated-style. In a cascade of surrogacies, the influence of human agents vanishes gradually, and the affordances of machines may play a more influential role. Under this condition, we are interested in investigating to what extent the human agent's authorship remains in the process."
