We’ve all seen those adorable clips of newborn giraffes or foals first learning to walk on their shaky legs, stumbling around until they finally master the movements.
To understand how animals learn to walk and learn from their stumbling, researchers built a four-legged, dog-sized robot to simulate the process, according to a new study published in Nature Machine Intelligence.
They found that it took their robot and its virtual spinal cord just an hour to get its walking under control.
Getting up and going quickly is essential in the animal kingdom to avoid predators, but learning how to co-ordinate leg muscles and tendons takes time.
Initially, baby animals rely heavily on hard-wired spinal cord reflexes to co-ordinate muscle and tendon control, while motor control reflexes help them to avoid falling and hurting themselves during their first attempts.
More precise muscle control must be practised until the nervous system adapts to the muscles and tendons, and the young are then able to keep up with the adults.
“As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,” says first author Dr Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at the Max Planck Institute for Intelligent Systems (MPI-IS), Germany.
“If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.”
Building a virtual spinal cord to learn how to walk
The researchers designed a learning algorithm to function as the robot’s spinal cord and to work as what’s known as a Central Pattern Generator (CPG). In humans and other animals, CPGs are networks of neurons in the spinal cord that produce periodic muscle contractions without any input from the brain.
These are important for rhythmic tasks like breathing, blinking, digestion and walking.
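A CPG of this kind can be illustrated as a set of coupled phase oscillators, one per leg, that keep producing rhythmic output with no external input. The gait pattern, frequency, amplitude and leg names below are illustrative assumptions for a trot-like gait, not the study's actual model:

```python
import math

def cpg_leg_targets(t, frequency_hz=1.5, amplitude_rad=0.4):
    """Return a periodic joint-angle target for each of four legs.

    frequency_hz and amplitude_rad are made-up example parameters;
    a real controller would drive motors with these targets.
    """
    # Diagonal leg pairs share a phase; the two pairs are half a
    # cycle apart, giving a trot-like rhythm.
    phase_offsets = {"front_left": 0.0, "hind_right": 0.0,
                     "front_right": 0.5, "hind_left": 0.5}
    targets = {}
    for leg, offset in phase_offsets.items():
        phase = 2 * math.pi * (frequency_hz * t + offset)
        targets[leg] = amplitude_rad * math.sin(phase)
    return targets
```

Called repeatedly with increasing `t`, this produces the steady rhythmic pattern that, in the robot, is then shaped by learning.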
The CPG was simulated on a small, lightweight computer that controlled the motion of the robot’s legs. The computer was positioned on the robot where the head would be on a dog.
The robot – which the researchers named Morti – was designed with sensors on its feet to measure information about its movement.
Morti learnt to walk with no prior explicit “knowledge” of its leg design, motors or springs, by continuously comparing the expected data (modelled by the virtual spinal cord) against the sensor data as it attempted to walk.
“Our robot is practically ‘born’ knowing nothing about its leg anatomy or how they work,” Ruppert explains. “The CPG resembles a built-in automatic walking intelligence that nature provides and that we have transferred to the robot. The computer produces signals that control the legs’ motors and the robot initially walks and stumbles.
“Data flows back from the sensors to the virtual spinal cord where sensor and CPG data are compared. If the sensor data does not match the expected data, the learning algorithm changes the walking behaviour until the robot walks well and without stumbling.”
Sensor data from the robot’s feet are continuously compared with the expected touch-down data predicted by the robot’s CPG. If the robot stumbles, the learning algorithm changes how far the legs swing back and forth, how fast the legs swing, and how long a leg is on the ground.
“Changing the CPG output while keeping reflexes active and monitoring the robot stumbling is a core part of the learning process,” Ruppert says.
Within one hour, Morti can go from stumbling around like a newborn animal to walking, optimising its movement patterns faster than an animal and increasing its energy efficiency by 40%.
“We can’t easily research the spinal cord of a living animal. But we can model one in the robot,” says co-author Dr Alexander Badri-Spröwitz, head of the Dynamic Locomotion research group.
“We know that these CPGs exist in many animals. We know that reflexes are embedded; but how can we combine both so that animals learn movements with reflexes and CPGs?
“This is fundamental research at the intersection between robotics and biology. The robotic model gives us answers to questions that biology alone can’t answer.”