The Israeli publications +972 and Local Call have exposed how the Israeli military used an artificial intelligence program known as Lavender to develop a “kill list” in Gaza that includes as many as 37,000 Palestinians who were targeted for assassination with little human oversight. A second AI system known as “Where’s Daddy?” tracked Palestinians on the kill list and was purposely designed to help Israel target individuals when they were at home at night with their families. The targeting systems, combined with an “extremely permissive” bombing policy in the Israeli military, led to “entire Palestinian families being wiped out inside their houses,” says Yuval Abraham, an Israeli journalist who broke the story after speaking with members of the Israeli military who were “shocked by committing atrocities.” Abraham previously exposed Israel for using an AI system called “The Gospel” to intentionally destroy civilian infrastructure in Gaza, including apartment complexes, universities and banks, in an effort to exert “civil pressure” on Hamas. These artificial intelligence military systems are “a danger to humanity,” says Abraham. “AI-based warfare allows people to escape accountability.”
Transcript
This is a rush transcript. Copy may not be in its final form.
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.
The Israeli publications +972 Magazine and Local Call have exposed how the Israeli military used an artificial intelligence program known as “Lavender” to develop a “kill list” in Gaza that includes as many as 37,000 Palestinians who were targeted for assassination with little human oversight. The report is based in part on interviews with six Israeli intelligence officers who had firsthand involvement with the AI system.
+972 reports, quote, “Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine ‘as if it were a human decision.’”
A second AI system known as “Where’s Daddy?” tracked Palestinian men on the kill list. It was purposely designed to help Israel target individuals when they were at home at night with their families. One intelligence officer told the publications, quote, “We were not interested in killing operatives only when they were in a military building or engaged in a military activity. On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations,” they said.
Today we spend the hour with the Israeli investigative journalist Yuval Abraham, who broke this story for +972 and Local Call. It’s headlined “’Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” I spoke with Yuval Abraham yesterday and began by asking him to lay out what he found.
YUVAL ABRAHAM: Yeah. Thank you for having me again, Amy.
It is a very long piece. It’s 8,000 words. And we divided it into six different steps. And each step represents a process in the highly automated way in which the military marks targets since October. And the first finding is Lavender. So, Lavender was designed by the military. Its purpose was, when it was being designed, to mark the low-ranking operatives in the Hamas and Islamic Jihad military wings. That was the intention, because, you know, Israel estimates that there are between 30,000 and 40,000 Hamas operatives, and it’s a very, very large number. And they understood that the only way for them to mark these people is by relying on artificial intelligence. And that was the intention.
Now, what sources told me is that after October 7th, the military basically made a decision that all of these tens of thousands of people are now people that could potentially be bombed inside their houses, meaning not only killing them but everybody who’s in the building — the children, the families. And they understood that in order to attempt to do that, they are going to have to rely on this AI machine called Lavender with very minimal human supervision. I mean, one source said that he felt he was acting as a rubber stamp on the machine’s decisions.
Now, what Lavender does is it scans information on probably 90% of the population of Gaza. So we’re talking about, you know, more than a million people. And it gives each individual a rating from one to 100, a rating that is an expression of the likelihood that the machine thinks, based on a list of small features — and we can get to that later — that that individual is a member of the Hamas or Islamic Jihad military wings. Sources told me that the military knew — because they checked, they took a random sampling and checked one by one — that approximately 10% of the people that the machine was marking to be killed were not Hamas militants. Some of them had a loose connection to Hamas. Others had no connection to Hamas at all. I mean, one source described how the machine would bring up people who had the exact same name and nickname as a Hamas operative, or people who had similar communication profiles. Like, these could be civil defense workers, police officers in Gaza. And they implemented, again, minimal supervision on the machine. One source said that he spent 20 seconds per target before authorizing the bombing of the alleged low-ranking Hamas militant — often it also could have been a civilian — killing those people inside their houses.
And I think this, the reliance on artificial intelligence here to mark those targets, and basically the deadly way in which the officers spoke about how they were using the machine, could very well be part of the reason why in the first, you know, six weeks after October 7th, like one of the main characteristics of the policies that were in place was entire Palestinian families being wiped out inside their houses. I mean, if you look at U.N. statistics, more than 50% of the casualties, more than 6,000 people at that time, came from a smaller group of families. It’s an expression of, you know, the family unit being destroyed. And I think that machine and the way it was used led to that.
AMY GOODMAN: You talk about the choosing of targets, and you talk about the so-called high-value targets, Hamas commanders, and then the lower-level fighters. And as you said, many of them, in the end, it wasn’t either. But explain the buildings that were targeted and the bombs that were used to target them.
YUVAL ABRAHAM: Yeah, yeah. It’s a good question. So, what sources told me is that during those first weeks after October, for the low-ranking militants in Hamas, many of whom were marked by Lavender, so we can say “alleged militants” that were marked by the machine, they had a predetermined, what they call, “collateral damage degree.” And this means that the military’s international law departments told these intelligence officers that for each low-ranking target that Lavender marks, when bombing that target, they are allowed to kill — one source said the number was up to 20 civilians, again, for any Hamas operative, regardless of rank, regardless of importance, regardless of age. One source said that there were also minors being marked — not many of them, but he said that was a possibility, that there was no age limit. Another source said that the limit was up to 15 civilians for the low-ranking militants. The sources said that for senior commanders of Hamas — so it could be, you know, commanders of brigades or divisions or battalions — the numbers were, for the first time in the IDF’s history, in the triple digits, according to sources.
So, for example, Ayman Nofal, who was the Hamas commander of the Central Brigade — a source who took part in the strike against that person said that the military authorized killing 300 Palestinian civilians alongside that person. And we’ve spoken at +972 and Local Call with Palestinians who were witnesses of that strike, and they speak about, you know, four quite large residential buildings being bombed on that day, you know, entire apartments filled with families being bombed and killed. And that source told me that this is not, you know, some mistake, like the number of civilians, these 300 civilians, was known beforehand to the Israeli military. And sources described that to me, and they said that — I mean, one source said that during those weeks at the beginning, effectively, the principle of proportionality, as they call it under international law, quote, “did not exist.”
AMY GOODMAN: So, there’s two programs. There’s Lavender, and there’s Where’s Daddy? How did they even know where these men were, innocent or not?
YUVAL ABRAHAM: Yeah, so, the way the system was designed is, there is this concept, in general, in systems of mass surveillance called linking. When you want to automate these systems, you want to be able to very quickly — you know, you get, for example, an ID of a person, and you want to have a computer be very quickly able to link that ID to other stuff. And what sources told me is that since everybody in Gaza has a home, has a house — or at least that was the case in the past — the system was designed to be able to automatically link between individuals and houses. And in the majority of cases, these households that are linked to the individuals that Lavender is marking as low-ranking militants are not places where there is active military action taking place, according to sources. Yet the system was designed this way, and programs like Where’s Daddy? were designed to search for these low-ranking militants when they enter houses — specifically, to send an alert to the intelligence officers when these AI-marked suspects enter their houses. The system was designed in a way that allowed the Israeli military to carry out massive strikes against Palestinians, sometimes militants, sometimes alleged militants, we don’t know, when they were in these spaces, in these houses.
And the sources said — you know, CNN reported in December that 45% of the munitions, according to U.S. intelligence assessments, that Israel dropped on Gaza were unguided, so-called dumb bombs, which cause, you know, larger damage to civilians. They destroy the entire structure. And sources said that for these low-ranking operatives in Hamas, they were only using the dumb munitions, meaning they were collapsing the houses on everybody inside. And when you ask intelligence officers why, one explanation they give is that these people were, quote, “unimportant.” They were not important enough, from a military perspective, that the Israeli army would, one source said, waste expensive munitions, meaning more precise, guided bombs that could have maybe taken out just a particular floor in the building.
And to me, that was very striking, because, you know, you’re dropping a bomb on a house and killing entire families, yet the target that you are aiming to assassinate by doing so is not considered important enough to, quote, “waste” an expensive bomb on. And I think it’s a very rare reflection of sort of the way — you know, the way the Israeli military measures the value of Palestinian lives in relation to expected military gain, which is the principle of proportionality. And I think one thing that was very, very clear from all the sources that I spoke with is that, you know, this was — they said it was psychologically shocking even for them, you know, like it was — yeah.
So, that’s the combination between Lavender and Where’s Daddy? The Lavender lists are fed into Where’s Daddy? And these systems track the suspects and wait for the moments that they enter houses, usually family houses or households where no military action takes place, according to several sources who did this, who spoke to me about this. And these houses are bombed using unguided missiles. This was a main characteristic of the Israeli policy in Gaza, at least for the first weeks.
AMY GOODMAN: You write that they said they didn’t have as many smart bombs. They were more expensive, so they didn’t want to waste them, so they used the dumb bombs —
YUVAL ABRAHAM: Yeah.
AMY GOODMAN: — which kill so many more.
YUVAL ABRAHAM: Yeah, exactly. Exactly, that’s what they said. But then I say, if the person, you know, is not important enough for you to waste ammunition on, but you’re willing to kill 15 civilians, a family?
AMY GOODMAN: Yuval Abraham, I wanted to read from the Israeli military statement, the IDF statement, in response to your report.
YUVAL ABRAHAM: Please.
AMY GOODMAN: They say, quote, “The process of identifying military targets in the IDF consists of various types of tools and methods, including information management tools, which are used in order to help the intelligence analysts to gather and optimally analyze the intelligence, obtained from a variety of sources. Contrary to claims, the IDF did not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist. Information systems are merely tools for analysts in the target identification process.” Again, that’s the IDF response, Yuval Abraham, to your report. Your response?
YUVAL ABRAHAM: I read this response to some of the sources, and they said that they’re lying, that it’s not true. And I was surprised that they were — you know, usually they’re not so, you know, blatant in saying something that is false.
I think this can very easily be disproven, because, you know, a senior-ranking Israeli military official, the head of the 8200 unit’s AI center, gave a public lecture last year, in 2023, at Tel Aviv University — you can google it, anybody who’s listening to us — where he spoke about, quote — I’m quoting him from that lecture — “an AI system that the Israeli military used in 2021 to find terrorists.” That’s what he said. So, to have that on record, to have — I have the presentation slides showing how the system is rating the people — and then to get a comment from the IDF spokesperson saying, “We do not have a system that uses AI to…” I really don’t know. Like, I almost thought, “Do I put this in the piece or not?” Because, you know, like, I know — in the end, you know, I gave them the space in the piece to make those claims, like I think I tried to be as dry as possible in the way that I was reporting. But, really, like, I am very, very confident in those findings. They are verified from numerous sources that I’ve spoken with.
And I think that people who read the full investigation, read the depth of it — the commander of the 8200 unit wrote a book in 2021, titled The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World. And in the book, he’s talking about how militaries should rely on artificial intelligence to, quote, “solve the problem of the human bottleneck” in creating new targets and in the decision-making to approve new targets. And he wrote in that book — and this is another quote from him — that “no matter how many intelligence officers you have tasked with producing targets during the war, they still will not be able to produce enough targets per day.” And he gives a guide in that book as to how to build these AI systems. Now, I want to emphasize, you know, he writes in the book very, very clearly that these systems are not supposed to replace human judgment. He calls it, you know, a mutual learning between humans and artificial intelligence. And he says — and the IDF still maintains this — they say it is intelligence officers who look at the results and make a decision.
From what I heard from numerous sources, after October 7th, that stopped being the case, at least in some parts of the IDF, where, again, Amy, as I said before, sources were told that if they check that the target is a man, they can accept Lavender’s recommendations without thoroughly looking at them, without checking why the machine made the decision that it made.
And I think, you know, when speaking with sources, like, just to describe — like, many of these sources, you know, they were drafted to the military after October 7th. Many of them were shocked by the atrocities that happened on October 7th to their families, their friends. Some of them did not think they would be drafted to the military again. They said, “OK, we have to go now.” There was this sense that — and gradually, when they realized what they were being asked to do, the things that they are involved in — I wouldn’t say that all six are like this, but at least some of them felt, again, shocked by committing atrocities and by being involved in things and killing families, and they felt it’s unjustifiable. And they felt a responsibility, I think. And I felt this also in the previous piece that I wrote, “’A mass assassination factory,’” which spoke about another AI machine called The Gospel. They felt a need to share this information with the world, out of a sense that people are not getting it. You know, they’re hearing the military spokesperson and all of these narratives that we’ve been hearing for the past six months, and they do not reflect the reality on the ground.
And I really believe, I really believe — you know, there’s a looming attack now on Rafah — these systems could be used there again to kill Palestinians in massive numbers, and, you know, these attacks are placing the Israeli hostages, who are still unjustifiably held in Gaza and need to be released, in danger. There needs to be a ceasefire. It cannot go on. And I hope that this investigation, which exposes things so clearly, will help more people all around the world call for a ceasefire, call to release the hostages and end the occupation and move towards a political solution there. For me, there is no — there is no other way forward.
AMY GOODMAN: I wanted to ask if the U.S. military, if U.S. technology, is playing a role in Israeli AI, artificial intelligence.
YUVAL ABRAHAM: So, I don’t know. And there is some information that I cannot fully share, like, at this moment. I’m investigating, like, you know, who is involved in developing these systems.
What I can tell you is, you know, based on previous experience of the 2014 war and the 2021 war, when the wars end, these systems are then sold to militaries all over the world. And I think, regardless of, you know, the horrific results and consequences of these systems in Gaza, alongside that, I really think there is a danger to humanity. Like, this AI-based warfare allows people to escape accountability. It allows you to generate targets, really, on a massive scale — you know, thousands, 37,000 people marked for potential assassination. And it allows you to do that while maintaining a sort of aesthetic of international law, because you have a machine that makes you a target file with, you know, a commander or, like, a target, collateral damage, but it loses all meaning.
I mean, take the principle of distinction under international law. When you design a system that marks 37,000 people, and you check, and you know that 10% of them are actually not militants — right? — they’re loosely related to Hamas or they’re not related at all — and you still authorize the use of that system without any meaningful supervision for weeks, I mean, isn’t that a breach of that principle? When you authorize killing, you know, up to 15 or up to 20 civilians for targets that you consider, from a military point of view, not especially important, isn’t that a clear breach of the principle of proportionality? You know, and I don’t know, like, I think international law really is in a crisis right now. And I think these AI-based systems are making that crisis even worse. They are draining all of these terms of meaning.
AMY GOODMAN: Let me play for you a clip of National Security Council spokesperson John Kirby being questioned on Tuesday about Israel’s killing of seven aid workers in three cars from chef José Andrés’s World Central Kitchen. This is Kirby.
NIALL STANAGE: Is firing a missile at people delivering food and killing them not a violation of international humanitarian law?
JOHN KIRBY: Well, the Israelis have already admitted that this was a mistake that they made. They’re doing an investigation. They’ll get to the bottom of this. Let’s not get ahead of that. … The State Department has a process in place. And to date, as you and I are speaking, they have not found any incidents where the Israelis have violated international humanitarian law.
AMY GOODMAN: So, that’s the top U.S. spokesperson, John Kirby, saying that, so far, Israel has not been found to have broken international humanitarian law since October 7th. And again, this is in response to a question about the killing of the seven aid workers, one Palestinian and six international aid workers. Can you talk about your response to this attack, three different missiles hitting all three cars, and then what Kirby said?
YUVAL ABRAHAM: Yeah. Wow! It’s quite shocking, in my mind, I mean, what he said, you know, based on the evidence that exists. The first thought that popped up in my mind when he was talking about, you know, Israel investigating it, since I know the statistics — so, if you take the 2014 bombing and war in Gaza, you know, 512 Palestinian children were killed. Israel said that it would investigate. There were like hundreds of claims for war crimes. Only one file led the Israeli military to actually prosecute a soldier, and it was about looting of like 1,000 shekels. Everything else was closed. This happens again, you know, in 2018, 2019: 230 Palestinians are shot dead at the border. Again, out of tens of files — one file prosecuted. Like, to claim that because Israel is having an investigation, it somehow means that they are getting to the bottom of this and changing something, it’s just mocking our intelligence, I think.
The second thing that I would say is that it’s true that the state of Israel has apologized for it. But if you actually look at the track record of people being killed around aid trucks, this has happened over and over again for Palestinians. I mean, in the beginning of March, 112 Palestinians were killed around the flour aid truck. The Guardian reported at the time that 14 such cases happened, like in February and January. So it’s clear to me that the Israeli military is apologizing not because of the crime, but because of the identity of the people who were killed in the crime. And I think that’s really hypocrisy.
And to answer the question about my findings, I mean, I don’t know if artificial intelligence was involved in that strike. I don’t want to say something that I’m not, you know, 100% sure of. But what I have learned from Israeli intelligence officers makes me not surprised that this strike took place, because the firing policy is completely permissive. And we’re seeing it. I mean, we’re seeing, you know, unarmed civilians being bombed to death. We saw that video of four people walking and being bombed to death. We have doctors, you know, talking about how in hospitals they’re seeing young children, like, with bullet holes, like in The Guardian investigation, which spoke to nine doctors who described that. So, this extreme permissiveness is not surprising to me.
AMY GOODMAN: Your piece doesn’t talk about drones, but, Yuval, can you talk about how the AI systems interact with unmanned attack drones?
YUVAL ABRAHAM: Yeah, so, you know, I said this last time, Amy. Like, I can’t speak about everything, also because we are sort of — always have to think of the military censor in Israel. As Israeli journalists, we’re very much, you know, bound by that.
But the systems interact. And, you know, if somebody is marked to be killed by Lavender, then that person could be killed by a warplane, they could be killed by a drone, and they could be killed by a tank that’s on the ground. Like, there is like a sort of policy of sharing intelligence between different — yeah, different units and different weapon operators.
Like, I wouldn’t be surprised if — because Israel said, you know, there was a target, like somebody that we suspected. Of course, the aid workers and — like, they completely rejected that. But, like, I wouldn’t be surprised if the flagging, you know, that the Israeli system received was somehow related to a faulty automated mechanism that is, you know, mass surveilling the area and picked up on something and had, you know, not the highest precision rate. Again, from what I’m hearing from sources, this is the atmosphere. This is the case.
AMY GOODMAN: Investigative reporter Yuval Abraham on his latest piece for +972 Magazine and Local Call headlined “’Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” He’s speaking to us from Jerusalem.