
Column: We don’t know how Israel’s military is using AI in Gaza, but we should

Smoke and flames rise after Israeli air forces target a shopping center in the Gaza Strip on Oct. 7.
(Anadolu Agency / Getty Images)

The fog of war has thickened in Gaza, a ground invasion is gathering steam, and aerial bombardments continue at a furious pace. On Tuesday, Israeli missiles struck the Jabaliya refugee camp, killing dozens of civilians; the Israel Defense Forces said a senior Hamas leader was stationed there.

Debate over the crisis rages online and off, yet for all the discourse, there’s one lingering question I haven’t seen widely considered: To what extent is Israel relying on artificial intelligence and automated weapons systems to select and strike targets?

In the first week of its assault alone, the Israeli air force said it had dropped 6,000 bombs across Gaza, a territory of 140 square miles, roughly one-tenth the size of Rhode Island, the smallest U.S. state, and among the most densely populated places in the world. There have been many thousands more explosions since then.


Israel commands the most powerful and highest-tech military in the Middle East. Months before the horrific Hamas attacks on Oct. 7, the IDF announced that it was embedding AI into lethal operations. As Bloomberg reported on July 15, the IDF had begun “using artificial intelligence to select targets for air strikes and organize wartime logistics.”

Israeli officials said at the time that the IDF employed an AI recommendation system to choose targets for aerial bombardment, and another model that would then be used to quickly organize ensuing raids. The IDF calls this second system Fire Factory, and, according to Bloomberg, it “uses data about military-approved targets to calculate munition loads, prioritize and assign thousands of targets to aircraft and drones, and propose a schedule.”

In response to a request for comment, an IDF spokesperson declined to discuss the country’s military use of AI.

In a year when AI has dominated the headlines around the globe, this element of the conflict has gone curiously under-examined. Given the myriad practical and ethical questions that continue to surround the technology, Israel should be pressed on how it’s deploying AI.


“AI systems are notoriously unreliable and brittle, particularly when placed in situations that are different from their training data,” said Paul Scharre, vice president of the Center for a New American Security and author of “Four Battlegrounds: Power in the Age of Artificial Intelligence.” Scharre said he was not familiar with the details of the specific system the IDF may be using, but that AI and automation assisting in targeting cycles would probably be used in scenarios like Israel’s hunt for Hamas personnel and materiel in Gaza. The use of AI on the battlefield is advancing quickly, he said, but carries significant risks.

“Any AI that’s involved in targeting decisions, a major risk is that you strike the wrong target,” Scharre said. “It could be causing civilian casualties or striking friendly targets and causing fratricide.”


One reason it’s somewhat surprising that we haven’t seen more discussion of Israel’s use of military AI is that the IDF has been touting its investment in and embrace of AI for years.

In 2017, the IDF’s editorial arm proclaimed that “The IDF Sees Artificial Intelligence as the Key to Modern-Day Survival.” In 2018, the IDF boasted that its “machines are outsmarting humans.” In that article, Lt. Col. Nurit Cohen Inger, then the head of Sigma, the IDF branch dedicated to researching, developing and implementing AI, wrote: “Every camera, every tank, and every soldier produces information on a regular basis, seven days a week, 24 hours a day.”

“We understand that there are capabilities a machine can acquire that a man can’t,” Inger continued. “We are slowly introducing artificial intelligence into all areas of the IDF — from logistics and manpower to intelligence.”

The IDF went so far as to call its last conflict with Hamas in Gaza, in 2021, the “first artificial intelligence war,” with IDF leadership touting the advantages its technology conferred in combating Hamas. “For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy,” an IDF Intelligence Corps senior officer told the Jerusalem Post. A commander of the IDF’s data science and AI unit said that AI systems had helped the military target and eliminate two Hamas leaders in 2021, according to the Post.

The IDF says AI systems have officially been embedded in lethal operations since the beginning of this year. It says that the systems allow the military to process data and locate targets faster and with greater accuracy, and that every target is reviewed by a human operator.

Yet international law scholars in Israel have raised concerns about the legality of using such tools, and analysts warn that they represent a creep toward more fully autonomous weapons, carrying all the risks inherent in turning targeting over to AI.



After all, many AI systems are increasingly black boxes whose algorithms are poorly understood and shielded from public view. In an article about the IDF’s embrace of AI for the Lieber Institute, Hebrew University law scholars Tal Mimran and Lior Weinstein emphasize the risks of relying on opaque automated systems in decisions that can end human lives. (When Mimran served in the IDF, he reviewed targets to ensure they complied with international law.)

“So long as AI tools are not explainable,” Mimran and Weinstein wrote, “in the sense that we cannot fully understand why they reached a certain conclusion, how can we justify to ourselves whether to trust the AI decision when human lives are at stake? ... If one of the attacks produced by the AI tool leads to significant harm of uninvolved civilians, who should bear responsibility for the decision?”

Again, the IDF would not elaborate to me on precisely how it is using AI. Israeli officials did tell Bloomberg that a human reviews the system’s output, but that the review takes only a matter of minutes. (“What used to take hours now takes minutes, with a few more minutes for human review,” the head of the army’s digital transformation said.)

There are a number of concerns here, given what we know about the current state of the art in AI, and that’s why it’s worth pushing the IDF to reveal more about how it is wielding these systems.

For one thing, AI systems remain encoded with biases, and, while they are often good at parsing large amounts of data, they routinely produce error-prone output when asked to extrapolate from that data.

“A really fundamental difference between AI and a human analyst given the exact same task,” Scharre said, “is that the humans do a very good job of generalizing from a small number of examples to novel situations, and AI systems very much struggle to generalize to novel situations.”



One example: Even supposedly cutting-edge facial recognition technology of the sort used by American police departments has been shown time and again to be less accurate at identifying people of color, with the systems misidentifying innocent citizens and leading to wrongful arrests.

Furthermore, any AI system that seeks to automate, and accelerate, the selection of targets increases the chance that errors made in the process will be harder to discern. And if militaries keep the workings of their AI systems secret, there is no way to assess the kinds of mistakes they’re making. “I do think militaries should be more transparent in how they’re assessing or approaching AI,” Scharre said. “One of the things we’ve seen in the last few years in Libya or Ukraine is a gray zone. There will be accusations that AI is being used, but the algorithms or training data is difficult to uncover, and that makes it very challenging to assess what militaries are doing.”

Even with such errors embedded in the kill code, AI could lend a veneer of credibility to targets that might not otherwise be acceptable to rank-and-file operators.

Finally, AI systems can create a false sense of confidence, which was perhaps evident when, despite having a best-in-class AI-augmented surveillance apparatus in place in Gaza, Israel failed to detect the planning for the brutal, highly coordinated massacre on Oct. 7.

As Reuters’ Peter Apps noted, “On Sept. 27, barely a week before Hamas fighters launched the largest surprise attack on Israel since the 1973 Yom Kippur war, Israeli officials took the chair of NATO’s military committee to the Gaza border to demonstrate their use of artificial intelligence and high-tech surveillance. … From drones overhead utilizing face recognition software to border checkpoints and electronic eavesdropping on communications, Israeli surveillance of Gaza is widely viewed as among the most intense and sophisticated efforts anywhere.”

Yet none of that helped stop Hamas.

“The mistake has been, in the last two weeks, saying this was an intelligence failure. It wasn’t, it was a political failure,” said Antony Loewenstein, an independent journalist and author of “The Palestine Laboratory” who was based in East Jerusalem from 2016 to 2020. “Israel’s focus had been on the West Bank, believing they had Gaza surrounded. They believed wrongly that the most sophisticated technologies alone would succeed in keeping the Palestinian population controlled and occupied.”


That may be one reason that Israel has been reluctant to discuss its AI programs. Another may be that a key selling point of the technology over the years, that AI will help choose targets more accurately and reduce civilian casualties, does not seem credible. “The AI claim has been around targeting people more successfully,” Loewenstein said. “But it has not been pinpoint-targeted at all; there are huge numbers of civilians dying. One third of homes in Gaza have been destroyed. That’s not precise targeting.”

And that’s a fear here — that AI could be used to accelerate or enable the destructive capacity of a nation convulsing with rage, with potentially deadly errors in its algorithms being obscured by the fog of war.
