Everyone knows that, unless you’re extraordinarily gifted, you need to crawl
before you can walk. It turns out the same principle could also apply to robots.
In a first-of-its-kind experiment, University of Vermont (UVM) roboticist
Josh Bongard created both simulated and physical robots that, like tadpoles
becoming frogs, change their body forms as they learn how to walk. He found
that these evolving robots learned more rapidly than ones with fixed body
forms and that, in their final form, the changing robots had developed a more
robust gait.
So far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured
environments. This is why Bongard and other robotics experts have turned to computer programs to design robots and develop their behaviors, instead of
trying to program the robots’ behavior directly.
Using a sophisticated computer simulation, Bongard unleashed a series of
synthetic beasts that move about in a three-dimensional space. Generations of
these creatures are then evolved by a software routine called a genetic
algorithm, which experiments with various motions until each creature develops
a slither, shuffle, or walking gait – depending on its body plan – that allows
it to reach a light source without tipping over.
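For readers curious what such a search looks like in code, the short Python sketch below shows a genetic algorithm of this general kind. The genome encoding, fitness measure, and selection scheme are assumptions made purely for illustration, not Bongard’s actual code; in the real experiments each candidate gait was scored by running it in the full physics simulation.

    import random

    # Minimal genetic-algorithm sketch (illustrative assumptions only):
    # each genome holds oscillation parameters for the robot's joints, and
    # fitness rewards progress toward a light source while penalizing a fall.

    GENOME_LEN = 24      # e.g. amplitude and phase for each of 12 joints (assumed)
    POP_SIZE = 50
    GENERATIONS = 200
    MUTATION_STD = 0.1

    def random_genome():
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

    def evaluate(genome):
        """Stand-in for the physics simulation: distance travelled toward the
        light, minus a penalty if the simulated robot tips over."""
        distance = sum(genome) / GENOME_LEN          # placeholder measure
        tipped_over = max(genome) > 0.95             # placeholder failure test
        return distance - (1.0 if tipped_over else 0.0)

    def mutate(genome):
        return [g + random.gauss(0.0, MUTATION_STD) for g in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[: POP_SIZE // 2]            # keep the better half
        children = [mutate(random.choice(parents))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children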
“The robots have 12 moving parts,” Bongard says. “They look like the simplified skeleton of a mammal: it’s got a jointed spine and then you have four sticks –
the legs – sticking out.”
Some of the creatures begin flat to the ground, like tadpoles or snakes with
legs; others have splayed legs, a bit like a lizard; and others run the full set
of simulations with upright legs, like mammals. Bongard found that the
generations of robots that progressed from slithering to wide legs and, finally,
to upright legs ultimately performed better and discovered the desired behavior
faster than robots that started out in the upright position.
“The snake and reptilian robots are, in essence, training wheels,” says Bongard,
“they allow evolution to find motion patterns quicker, because those kinds of
robots can’t fall over. So evolution only has to solve the movement problem,
but not the balance problem, initially. Then gradually over time it’s able to tackle
the balance problem after already solving the movement problem.”
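In code, the “training wheels” can be thought of as a schedule that raises the legs over the course of the evolutionary run, so that balance only enters the problem once basic locomotion has been found. The sketch below is a simplified assumption about how such a schedule might look – a linear ramp with an arbitrary 75% cut-off – not the schedule actually used in the paper.

    def leg_angle_for_generation(generation, total_generations, max_angle_deg=90.0):
        """Return the leg angle (degrees from horizontal) used when evaluating
        controllers in a given generation: 0 means prone, 90 means upright."""
        progress = min(1.0, generation / (0.75 * total_generations))
        return max_angle_deg * progress

    # Example: a 200-generation run starts prone and is fully upright by generation 150.
    for generation in (0, 50, 100, 150, 200):
        print(generation, round(leg_angle_for_generation(generation, 200), 1))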
The process mimics the way a human infant first learns to roll, then crawl and,
finally, walk.
“Yes,” says Bongard, “We’re copying nature, we’re copying evolution, we’re
copying neural science when we’re building artificial brains into these robots.”
Smarter and more flexible
But the most surprising finding of Bongard’s experiments was that the changing
robots were not only faster in reaching the final goal; afterward, they were
also better able to deal with new kinds of challenges they hadn’t faced before,
such as attempts to tip them over.
“Realizing adaptive behavior in machines has to date focused on dynamic
controllers, but static morphologies,” Bongard writes in his paper that appears
in the Proceedings of the National Academy of Sciences. “This is an inheritance
from traditional artificial intelligence in which computer programs were developed
that had no body with which to affect, and be affected by, the world.”
“One thing that has been left out all this time is the obvious fact that in nature it’s
not that the animal’s body stays fixed and its brain gets better over time,” he
says. “In natural evolution, animals’ bodies and brains are evolving together all the time.”
Bongard says this hasn’t been done in robotics because it’s much easier to change
a robot’s programming than it is to change its body.
Testing it in the real world
After running 5,000 simulations, each taking 30 hours on the parallel processors
in UVM’s Vermont Advanced Computing Center, Bongard decided to try the
experiment with physical robots. He built a relatively simple four-legged robot
out of Lego Mindstorms kits. The physical robot was similar to those used in the
computer simulation, but had a brace on its front and back legs. As the
controller searched for successful movement patterns, the brace gradually tilted
the legs so that they went from horizontal to vertical.