This Robot Taught Itself to Walk in a Simulation, Then Went for a Stroll in Berkeley – Singularity Hub

Posted: April 13, 2021 at 6:30 am

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it'd fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot, which is basically just a pair of legs, was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It's the first time a machine learning approach known as reinforcement learning has been so successfully applied to two-legged robots.

This likely isn't the first robot video you've seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of iRobot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world's top makers of advanced robots.

But they still have to meticulously hand-program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They must regularly handle the unexpected, and no amount of choreography will do. That's where, it's hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet's DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it's modeled on the way we learn. Touch the stove, get burned, don't touch the damn thing again; say please, get a jelly bean, politely ask for another.
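The trial-and-error loop described above can be sketched in a few lines. This is a toy illustration only, not the Berkeley team's code: tabular Q-learning on a hypothetical two-action "stove" problem, where touching the stove is punished and asking politely is rewarded, so the agent learns to prefer the latter.

```python
import random

# Hypothetical reward table: action 0 ("touch the stove") burns (-1),
# action 1 ("ask politely") earns a jelly bean (+1).
REWARDS = {0: -1.0, 1: 1.0}
ALPHA = 0.5    # learning rate: how far each estimate moves per trial
EPSILON = 0.1  # exploration probability

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = {0: 0.0, 1: 0.0}  # estimated value of each action
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < EPSILON:
            action = rng.choice([0, 1])
        else:
            action = max(q, key=q.get)
        reward = REWARDS[action]
        # Nudge the estimate toward the observed reward.
        q[action] += ALPHA * (reward - q[action])
    return q

q = train()
```

After training, `max(q, key=q.get)` is action 1: repeated burns drive the "touch" estimate down, and the agent settles on the rewarded behavior. Real locomotion policies replace this lookup table with a neural network and the two-action world with a physics simulator, but the feedback loop is the same.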

In Cassie's case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It's not the first AI to learn to walk this way. But skills learned in simulation don't always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
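The two-stage pipeline can be sketched schematically. This is not the actual Berkeley setup (which used MuJoCo and Matlab SimMechanics); it's a made-up 1D control task in pure Python, showing the shape of the idea: fit a controller by trial and error in a coarse training simulator, then stress-test it in a second simulator whose dynamics are perturbed to stand in for real-world conditions.

```python
import random

def simulate(gain, friction):
    """Toy 1D balance task: drive velocity v toward zero with control -gain*v.
    The dynamics are simplified and invented for illustration."""
    v, cost = 1.0, 0.0
    for _ in range(50):
        v += -gain * v * friction
        cost += v * v  # accumulated squared error; lower is better
    return cost

def train_in_sim(candidate_gains, friction=1.0):
    """Stage 1: pick the best controller gain in the idealized simulator."""
    return min(candidate_gains, key=lambda g: simulate(g, friction))

def validate_in_second_sim(gain, seed=0, trials=20):
    """Stage 2: re-test the chosen gain under randomized friction,
    standing in for the higher-fidelity testing ground."""
    rng = random.Random(seed)
    costs = [simulate(gain, rng.uniform(0.8, 1.2)) for _ in range(trials)]
    return max(costs)  # worst case across perturbations

best = train_in_sim([0.1, 0.5, 0.9])
worst_case = validate_in_second_sim(best)
```

A policy that only scores well in stage 1 but blows up in stage 2 would be sent back for more training; only one that survives the perturbed simulator "graduates" to hardware.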

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn't need further polishing. Said another way, when it was born into the physical world, it already knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie's knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches, like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training, may also move the dial. It's early yet, however, and there's no telling when machine learning will surpass more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers who were not part of the Berkeley team think the approach is promising. Edward Johns, head of Imperial College London's Robot Learning Lab, told MIT Technology Review, "This is one of the most successful examples I have seen."

The Berkeley team hopes to build on that success by trying out more dynamic and agile behaviors. So, might a self-taught parkour-Cassie be headed our way? We'll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube
