Opinion: Are killer robots a real possibility?

An unmanned U.S. Predator drone flies over Kandahar Airfield in Afghanistan in 2010.
(Kirsty Wigglesworth / Associated Press)

Drone warfare is about to get much more dangerous.

Within a few years, military drones (also known as unmanned aerial vehicles) are likely to be equipped with facial recognition technology capable of identifying people by comparing real-time images with massive commercial and governmental databases.
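
To see what that means in practice, consider a bare-bones sketch of how such matching typically works: a captured face is reduced to a numerical "embedding" and compared against every embedding in an enrolled database, with the closest score above a threshold treated as a match. The Python snippet below illustrates only that comparison step, with random numbers standing in for real embeddings and an arbitrary threshold; it is not the system the Air Force or RealNetworks is building.

```python
# Bare-bones sketch of one-to-many face matching, for illustration only.
# Random vectors stand in for the embeddings a trained face model would produce.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical watch list: 10,000 enrolled identities, each a 512-dim embedding.
gallery = rng.normal(size=(10_000, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Embedding of a face captured in a live video frame (also a stand-in).
probe = rng.normal(size=512)
probe /= np.linalg.norm(probe)

# Cosine similarity of the probe against every enrolled identity.
scores = gallery @ probe
best = int(np.argmax(scores))

THRESHOLD = 0.6  # arbitrary cutoff; real systems tune this against measured error rates
if scores[best] >= THRESHOLD:
    print(f"Match: identity {best} (score {scores[best]:.2f})")
else:
    print("No match above threshold")
```

Everything downstream, including any "real-time autonomous response," hinges on that single threshold comparison being right.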

If all goes according to plan, these aircraft will also be fully autonomous, meaning they will conduct missions using artificial intelligence, with no human operators even on the ground.

This may sound far-fetched, but it’s not. The U.S. Air Force recently awarded a contract to RealNetworks, a Seattle-based tech firm specializing in AI and digital media, to adapt its proprietary facial recognition platform for deployment on a non-piloted drone “for special ops,” which it said would “open the opportunity for real-time autonomous response by the robot.”

The Air Force awarded the firm a similar contract to incorporate facial recognition technology into an autonomous, quadrupedal robot “for security and expeditionary” use. Imagine a mechanized mastiff with a mind of its own.

As Big Tech and Big Defense join forces, science fiction is on the verge of becoming science fact.

Although the recent Air Force contracts don’t specify that the autonomous drones and robots with facial recognition technology are being built for combat missions, it’s likely this will eventually happen. Once autonomous robots incorporate the technology to identify and spy on suspected enemies, the Pentagon and American intelligence agencies will undoubtedly be tempted to use the software to assassinate them.

But there are a few big problems. For one thing, facial recognition programs are notoriously inaccurate.

In 2019, a sweeping study conducted by the U.S. National Institute of Standards and Technology revealed alarming discrepancies in how facial recognition software identified certain groups of people.

It exposed deep flaws, including significantly higher rates of false positive matches for darker-skinned people than for lighter-skinned people. These algorithmic biases stem from bad data: Because darker-skinned people are often underrepresented in the data sets used to train and test facial recognition systems, errors fall disproportionately on them. Garbage in, garbage out.
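
To make the statistic concrete, here is a toy calculation, with invented numbers, of the kind of measurement the NIST researchers made: the false positive match rate, computed separately for two demographic groups.

```python
# Toy illustration of a demographic comparison like the one in the NIST study:
# the false positive match rate, computed separately for two groups.
# All numbers below are invented for illustration, not taken from the study.

def false_positive_rate(outcomes):
    """outcomes: (system_said_match, truly_same_person) pairs for face comparisons."""
    impostor_trials = [said_match for said_match, same_person in outcomes if not same_person]
    return sum(impostor_trials) / len(impostor_trials)

# Hypothetical results for 1,000 impostor comparisons per group.
group_a = [(True, False)] * 2 + [(False, False)] * 998   # 2 false matches
group_b = [(True, False)] * 20 + [(False, False)] * 980  # 20 false matches

print(f"Group A false positive rate: {false_positive_rate(group_a):.2%}")  # 0.20%
print(f"Group B false positive rate: {false_positive_rate(group_b):.2%}")  # 2.00%
# A tenfold gap of this sort is the kind of disparity the study documented.
```

Scaled to the thousands of faces a drone camera might scan, even small differences in these rates translate into many more people wrongly flagged in one group than in the other.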

Further problems arise when you start to use fully autonomous drones without human operators.

Sure, they’re very appealing to the Pentagon’s top brass. As counter-drone systems such as control signal jammers become more sophisticated, military leaders are eager to have drones that are less reliant on remote control and more self-sufficient. These machines are likely to use AI-based navigation systems such as simultaneous localization and mapping (SLAM), lidar (light detection and ranging) technology, and celestial navigation.

Fully autonomous drones are also desirable from the government’s point of view because of the psychological impact of remote-controlled warfare on drone pilots, many of whom suffer from serious mental illnesses such as post-traumatic stress disorder after killing their targets.

To some observers, autonomous drones seem to offer a way of eliminating the psychological trauma of remote killing.

For nearly 20 years, researchers have documented the psychological effects of remote-controlled drone warfare, which simultaneously stretches and compresses the battlefield. It does so by increasing the geographic distance between the targeter and the targeted, even as drone operators develop a close, intimate picture of the daily lives of those they eventually kill from thousands of miles away. It’s more like long-distance hunting than warfare.

But autonomous drones raise a host of ethical concerns precisely because they might one day absolve humans of responsibility for life-and-death decisions. That’s an alluring, even seductive prospect. But the question is: Who will be held accountable when an autonomous robot outfitted with facial recognition software kills civilian noncombatants?

When autonomous hunter-killer drones are able to select and engage targets on their own, the human conscience will effectively have been taken out of the process. What restraints will be left?

No one has an answer to these ethical dilemmas. Meanwhile, the technological developments in drone warfare are unfolding in a broader context.

Ukraine has become a testing ground for a vast array of unmanned aerial vehicles, including strike drones, weaponized DIY hobbyist drones and loitering munitions that stay airborne for some time and attack only when the target has been identified. But there’s also been an acceleration in the development of advanced autonomous weapons as the U.S., China, Russia, Iran, Israel, the European Union and others compete to build new warfighting technologies.

As autonomous-weapons research and development lunges forward, the possibility of a full-blown robot war looms on the horizon. Scientists, scholars and citizens of conscience should act to ban these weapons now, before that day arrives. Organizations such as the International Committee for Robot Arms Control, Human Rights Watch and the Future of Life Institute still have a chance of convincing the United Nations to ban autonomous weapons — if enough of us take a stand against killer robots.

Roberto J. González is chair of the anthropology department at San José State University. His most recent book is “War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future.”
