Virtual limitations: Reinforcement learning has been used in the past to train robots to walk in simulations, but it is difficult to transfer this ability to the real world. “Most of the videos you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the physical laws simulated inside a virtual environment and the actual physical laws outside of it, such as how friction works between a robot’s feet and the ground, can result in big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose its balance and fall if its movements are even a little out of step.
Double simulation: But training a big robot through trial and error in the real world would be dangerous. To work around these problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. The trained model was then transferred to a second virtual environment, called SimMechanics, which mirrors real-world physics with a high degree of accuracy but runs slower than real time. Only once Cassie appeared to be walking well there was the learned walking pattern loaded onto the actual robot.
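The pipeline the team describes, train cheaply, validate in a slower but more faithful simulator, and only then deploy, can be illustrated with a toy sketch. Everything below is a hypothetical stand-in (the random-search "training," the one-line walker dynamics, and the friction perturbations are assumptions for illustration, not the Berkeley team's actual code or the SimMechanics API):

```python
import random

def walker_sim(step, friction=1.0):
    """Toy simulator: returns how far a walker drifts from upright
    over 100 steps (0.0 is perfect). Friction scales the effective step."""
    return sum(abs(step * friction - 0.5) for _ in range(100))

def train_in_cheap_sim(trials=2000, seed=0):
    """Stage 1: trial-and-error search (a crude stand-in for RL)
    in the fast, low-fidelity simulator with nominal friction."""
    rng = random.Random(seed)
    best_step, best_drift = 0.0, float("inf")
    for _ in range(trials):
        step = rng.uniform(0.0, 1.0)
        drift = walker_sim(step)
        if drift < best_drift:
            best_step, best_drift = step, drift
    return best_step

def validate_in_high_fidelity(step, frictions=(0.9, 1.0, 1.1), tol=15.0):
    """Stage 2: slower, more accurate check (a stand-in for SimMechanics):
    the learned gait must tolerate a range of ground frictions."""
    return all(walker_sim(step, friction=f) <= tol for f in frictions)

policy = train_in_cheap_sim()
if validate_in_high_fidelity(policy):
    print("deploy to robot")   # stage 3: load the pattern onto the real robot
else:
    print("back to training")
```

The design point the sketch captures is the gate between the two simulators: a gait that only works at nominal friction never reaches the hardware, which is how the second, slower environment protects the physical robot.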
The real Cassie was able to walk using the model learned in simulation without any additional adjustment. It could walk over rough, slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks it’s exciting work. Edward Johns, who runs the Robot Learning Lab at Imperial College London, agrees. “This is one of the most successful examples I’ve seen,” he says.
The Berkeley team hopes to use its approach to add to Cassie’s repertoire of moves. But don’t expect it to dance anytime soon.