When Robots Do the Killing

David L. Ulin is the author of "The Myth of Solid Ground: Earthquakes, Prediction, and the Fault Line Between Reason and Faith" (Viking, 2004).

Late last week, in a parking lot in New Jersey, the U.S. Army unveiled what may be the future of war: 3-foot-tall robotic “soldiers,” outfitted with tank tracks, night vision and mounted automatic weapons capable of firing more than 300 rounds in a single burst. Known as SWORDS (Special Weapons Observation Reconnaissance Detection Systems), these battle bots are on the leading edge of a new kind of warfare, in which -- or so the argument goes -- our troops will one day remain hidden (and, presumably, protected) while engaging the enemy by remote control. The Army intends to deploy 18 SWORDS units to Iraq in the spring, marking the first time robots have been used to fight and kill human beings one on one.

If, like me, you grew up on science fiction, the idea of robot soldiers strikes a chilling chord. Killer droids, after all, have long been speculative-universe staples, potent symbols of the dangers of technology, of what happens when machines go wrong. In Karel Capek’s 1920 play “R.U.R. (Rossum’s Universal Robots)” -- which introduced “robot” to the vernacular -- automatons rise up to wipe out the human race. In “Blade Runner,” renegade cyborgs stage a bloody mutiny and flee to Earth. Robotic armies rampage by the screenful in George Lucas’ “Star Wars” films.

And then, of course, there is the “Terminator” series, in which robots designed to look and smell like people infiltrate human encampments to execute rebel leaders without mercy or remorse. This is the cybernetic future at its most apocalyptic: a world in which our high-tech weapons turn on us, just as we always feared they would.

The fear resonates. Why else would SWORDS designers feel compelled to reassure us, as they did last week, that their robots are not autonomous terminators, but function only at the command of humans, who must identify targets via video before giving the electronic OK to shoot?
On a certain level, the developers of SWORDS make a valid argument: These are not smart weapons, but surrogates for soldiers in the field. It’s hard to quarrel with any tool that might make our soldiers safer, and if nothing else, a robot warrior will never have to worry about inadequate armor or supplies.

Yet something more disturbing is at work, a sense of willful dissociation, as if, with enough distance, we might remove ourselves from what war is. Here too the military mimics Hollywood. For “Star Wars,” it’s been reported, the storytellers relied on battle bots to take the blood out of the onscreen killing and render moral questions moot.

A similar logic fuels the ban on photos of flag-draped coffins -- if we don’t see them, they’re not there -- and it’s no stretch to suggest that SWORDS, and other high-tech weapons now being developed by the Pentagon’s Defense Advanced Research Projects Agency, will further sanitize our point of view.

What can’t be sanitized, however, is the robot’s deadly efficiency; remove the human from the weapon, and problems like recoil and breath control are eliminated, allowing the robot to hit a nickel-sized target at 328 yards. In one test, a SWORDS scored 70 out of 70 bull’s-eyes.

Thirty or so years ago, the composer John Cage proposed a different sort of battle strategy: Take the heads of warring nations, give each a 50-pound sack of horse manure, lock them in a room, and let them fight it out. It’s a quixotic notion, but at least it takes into account a human element, the idea that war cannot be waged without a price.

As for the SWORDS units, what does it say about us that this is how we use our creativity -- to invent robots that offer more efficient ways to kill? How can we be so disconnected that we refer to people as “targets,” whether they are enemies or civilians, too indistinct to identify through the garble of a video display? Surely we lose something by all this disengagement.

It’s easy to be ruthless from a distance; less so when you see the whites of someone’s eyes. If there’s no potential for human cost, how do we calculate our humanity, how do we show anything resembling restraint? And without restraint, are we even fully human anymore?