
One Giant Leap for Machinekind?

TIMES SCIENCE WRITER

A duo of computer scientists has created something science fiction writers have thus far only imagined: self-evolving and self-generating machines. From start to finish, a computer system designs and builds the robot-like creations, described in today’s issue of the journal Nature.

“This is a long-awaited and necessary step toward the ultimate dream of self-evolving machines,” Rodney Brooks, a leading computer researcher and director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, wrote in a commentary accompanying the research.

The machines were created by Hod Lipson and Jordan Pollack of Brandeis University in Waltham, Mass.


Made of smooth, white plastic and fabricated as a single, preassembled unit, the robotic creatures are powered by motors and controlled by a neural network on a microchip. Some move by dragging themselves along the ground, or, as one visitor described it, “doing the breast stroke on the floor.”

Although the devices can’t do much more than crawl blindly about the lab, they are considered a huge step for the field of artificial life, which seeks to understand basic biological principles by replicating them synthetically.

The evolution of the machines is surely the stuff of a classic B movie: an idea passes from the human mind to a computer, which, through a process of selection, breeds ever-better generations of robot-like machines and then orders another machine to spit them out.


It started with a computer program that contained three building blocks--bars for structure, synthetic muscles and artificial nerve cells--and joined the components in various ways.

A “fitness test” in the computer gauged the creations’ movements. Any creatures that moved well in the “virtual tests” were copied multiple times and mutated further by the computer. Those that did not move well were, like virtual Edsels, replaced by more efficient ones. After hundreds of generations, only the best-moving creatures remained.
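In outline, the process is a standard evolutionary algorithm. The sketch below, written in Python, shows the kind of select-copy-mutate loop the article describes; the design representation, fitness score and parameter values are hypothetical stand-ins for illustration, not the researchers’ actual software.

```python
import random

# Hypothetical stand-in: a design is a list of numeric parameters describing
# how bars, synthetic muscles and artificial nerve cells are joined together.
def random_design(num_parts=20):
    return [random.uniform(-1, 1) for _ in range(num_parts)]

def mutate(design, rate=0.1):
    # Copy the design, nudging a few parameters to create a variant.
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in design]

def fitness(design):
    # Placeholder "fitness test": the real system simulated each creature
    # and measured how far it moved across a virtual floor, not this toy score.
    return sum(design)

def evolve(generations=300, population_size=200):
    population = [random_design() for _ in range(population_size)]
    for _ in range(generations):
        # Rank creatures by how well they did in the virtual movement test.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Good movers are copied and mutated; poor movers are discarded.
        offspring = [mutate(random.choice(survivors))
                     for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

best = evolve()
```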

The designs were then fed to an off-the-shelf 3-D printer, which essentially spits out drops of plastic in layers to create objects depicted on computer screens. These prototyping machines, which cost about $50,000, are used routinely by industrial designers in testing new designs for cell phones, for example.


Most of the creations contained about 20 components and were 8 to 12 inches long. They looked strikingly different from each other--one like an arrow, one like a crab, one like a snake and some like random geometric forms.

“It’s interesting to see all the different solutions for a simple task in a simple world,” said Lipson, a research scientist at the Volen Center for Complex Systems at Brandeis.

Many of the gadgets ended up, with no guidance from humans, being symmetrical--a useful form for moving in a straight line. “It was surprising to see established engineering ideas in many of our designs,” said Pollack, an associate professor who directs the Dynamical and Evolutionary Machine Organization Lab at Brandeis.

The project wasn’t completely hands-off. The researchers did have to snap motors on, but nothing more. “That’s the only thing we touch,” said Pollack. “It’s authentic, self-generated, self-organized design.”

A major advance in artificial life occurred in 1994 when former Massachusetts Institute of Technology computer scientist Karl Sims created evolving animated creatures that walked and swam through a simulated world where Newtonian rules of physics applied. Though inspiring, they were trapped within the computer that created them.

Other scientists have evolved robot brains, or control systems, within computers and then transferred them to robot bodies. Brooks calls this separation of brain and body a “glaring omission.” Pollack agrees, saying, “There is never a body without a brain in nature.”


Two years ago, Pollack and graduate student Pablo Funes tried to link the robot brain and body by building cranes and bridges that evolved inside computer programs. But the scientists had to build the structures using Lego blocks; the computer didn’t manufacture them.

Now, the lab has taken the next step--one that quite a few labs around the world have been hoping to achieve.

“They’ve finally bridged that gap between evolution simulation [in computers] and the physical world,” said Maja Mataric, who directs USC’s Robotics Research Lab. “To evolve the body and brain together in the real world is a first.”

Devices Are Not Sophisticated

Mataric said her only caveat is that she would not call the devices robots until they contain sensors for gathering information. But the ability to evolve sensors remains out of reach, at least for now.

Pollack and Lipson are the first to admit--and even to emphasize--the primitive nature of their creations. “These are not Lt. Data. They are not Terminator. They’re pretty dumb,” said Pollack, who equated their complexity with that of bacteria. Currently, he said, human-created robots are far superior.

A next important step is to develop robots that can design themselves for one task, and with the aid of sensors taking in information, be able to morph into new robots for additional tasks, said Pradeep Khosla, head of the departments of electrical and computer engineering at Carnegie Mellon University in Pittsburgh.


Although he called the new research extremely interesting, he also said any robots deployed in the real world would need to be more robust than those created by a 3-D printer.

Ultimately, the electronic creatures may represent a way to severely cut the high cost of robot production by removing pricier aspects of manufacturing--like human salaries.

“We can be in a situation where there is no cost of development, only the cost of the materials--plastic and motors,” said Pollack.

The work also offers the possibility of creating robots exquisitely designed to carry out one task in a specific environment very well--like the ultimate robot vacuum cleaner.

“We’d come to your house with a laser scanner and use it to create a virtual room,” mused Pollack. “We’d then use that environment to train the robotic vacuum to avoid your low-slung sofa and the fabric hanging too low because the staples fell out.”

The ultimate dream mentioned by Brooks is for machines that can evolve and improve themselves by learning about the world, with no human intervention. Although that is still some way off, such technological visions are inspiring some fear.


“I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil,” Sun Microsystems co-founder Bill Joy wrote in the widely circulated essay “Why the Future Doesn’t Need Us,” published in Wired magazine in April.

Joy’s fears center on the danger of intelligent machines “hugely amplified by the power of self-replication.”

Many of these fears are fueled not by science fiction and Terminator movies, but by the forecasts of veteran computer scientist Hans Moravec, who helped found Carnegie Mellon’s respected robotics program. In his book “Robot,” Moravec suggests that robots will match human intelligence within 50 years. Enslavement by robots, some fear, is the next logical step.

Although some roboticists welcome a philosophical debate on such issues, others say such fears are overblown.

“Let’s remember, these robots can’t even sense the world,” said Mataric. Added Khosla: “I’m more concerned about cloning.”

Creators Pollack and Lipson wittily acknowledge the debate in the name of their project. It’s GOLEM (for Genetically Organized Lifelike Electro Mechanics), named after the Jewish legend of a rabbi who created a being called a Golem out of clay to clean houses and keep order.


As it learned of the world, the Golem grew angry that it couldn’t be more like a person, having fun and eating good-tasting things. Finally, in one version, the Golem ran amok. “It’s a warning,” said Pollack, “about hubris.”

Movies of the robots are available at https://www.demo.cs.brandeis.edu/golem.

(BEGIN TEXT OF INFOBOX / INFOGRAPHIC)

Almost No Assembly Required

A computer designed a series of moving machines by itself, then instructed a prototyping machine to make them. Here are two examples. For an explanation of how the process works, see “How It Works” below.

What the computer designed

What the prototyping machine built


Source: Lipson and Pollack, Brandeis University

(BEGIN TEXT OF INFOBOX / INFOGRAPHIC)

How It Works

A computer simulator is given three basic parts to work with: bars for structure, synthetic muscles and artificial nerve cells.

Starting from nothing, the simulator chooses parts at random, configures them, then tests each design’s potential for movement.

A prototyping machine builds the winning design layer by layer. Instructions are downloaded into a computer chip, a motor is snapped on, and the finished robotic creature moves.

Source: Lipson and Pollack, Brandeis University
