This article was written by Safa for the series ‘Digitized Divides’ and originally published on tacticaltech.org. An edited version was republished by Global Voices on September 22, 2025, under a partnership agreement.
Technology can be used to help people or to harm them, and it isn’t necessarily an either/or situation: it can benefit one person or group while simultaneously harming another.
While some may ask whether the benefits of using personal data to implement widespread policies and actions outweigh the harms, weighing benefits against harms in this binary, two-sided way is a misguided approach to critical assessment, especially when the harms include violence against civilians. After all, human suffering is never justified, and there is no way to sugarcoat negative repercussions in good faith. Technological bothsidesism attempts to tally the “goodness” or “brownie points” of technology, which is a distraction, because technology itself is neither good nor bad: what matters is the humans behind it, the owners and operators of the machines. Depending on their intentions and aims, technology can be used for a wide variety of purposes.
Israel uses data collected from Palestinians to train AI-powered automated tools that have been deployed in Gaza and across the West Bank, including tools co-produced with international firms, such as the collaboration between Israel’s Elbit Systems and India’s Adani Defence and Aerospace. Israeli AI-supercharged surveillance tools and spyware, including Pegasus, Paragon, QuaDream, Candiru, and Cellebrite, as well as AI weaponry, including the Smart Shooter and Lavender, are world-famous and exported to many places, including South Sudan and the United States.
The US is also looking into ways to use homemade and imported facial recognition technologies at the US–Mexico border to track the identities of migrant children, collecting data that can be used over time. Eileen Guo of MIT Technology Review wrote: “That this technology would target people who are offered fewer privacy protections than would be afforded to US citizens is just part of the wider trend of using people from the developing world, whether they are migrants coming to the border or civilians in war zones, to help improve new technologies.” In addition to facial recognition, the United States is also collecting DNA samples from immigrants for a mass registry maintained by the FBI.
In 2021, US-headquartered companies Google and Amazon jointly signed an exclusive billion-dollar contract with the Israeli government to develop “Project Nimbus,” which was meant to advance technologies in facial detection, automated image categorization, object tracking, and sentiment analysis for military use — a move that was condemned by hundreds of Google and Amazon employees in a coalition called No Tech for Apartheid.
The Israeli army also has ties with Microsoft for machine learning tools and cloud storage. These examples illustrate the imbalance of power within the greater systems of oppression at play. These tools and corporate ties are not accessible to all potential beneficiaries; it would be inconceivable for Google, Amazon, and Microsoft to sign the same contracts with, say, the Islamic Resistance Movement (Hamas).
Former US President Barack Obama is credited with normalizing the use of armed drones in non-battlefield settings. The Obama administration described drone strikes as “surgical” and “precise,” at times even claiming that the use of armed drones did not result in “a single collateral death,” when that was patently false. After Obama took office in 2009, drone strikes became commonplace in US operations abroad, in both battlefield and non-battlefield settings, and expanded further under subsequent administrations.
Critics say the use of drones in warfare gives governments the power to “act as judge, jury, and executioner from thousands of miles away,” and that civilians “disproportionately suffer” from what amounts to “an urgent threat to the right to life.” In one example, the BBC described Russian drones as “hunting” Ukrainian civilians.
In 2009, Human Rights Watch reported on Israel’s use of armed drones in Gaza. In 2021, Israel began deploying “drone swarms” in Gaza to locate and monitor targets. In 2022, Omri Dor, commander of Palmachim Airbase, said, “The whole of Gaza is ‘covered’ with UAVs that collect intelligence 24 hours a day.” In Gaza, drone technology has played a major role in increasing both the damage inflicted and the range of targets, including hybrid drones such as “The Rooster” and “Robodogs” that can fly, hover, roll, and climb uneven terrain. Machine-gun rovers have been used to replace on-the-ground troops.
The AI-powered Smart Shooter, whose slogan is “one-shot, one-hit,” boasts a high degree of accuracy. The Smart Shooter was installed during its pilot stage in 2022 at a Hebron checkpoint, where it remains active to this day. Israel also employs “smart” missiles, like the SPICE 2000, which was used in October 2024 to bomb a Beirut high-rise apartment building.
The Israeli military is considered one of the 20 most powerful military forces in the world. Israel claims that it conducts “precision strikes” and does not target civilians, but civilian harm expert Larry Lewis has said Israel’s civilian harm mitigation strategies have been insufficient, with its campaigns seemingly designed in ways that create risk to civilians. The aforementioned technologies have helped the Israeli military use disproportionate force to kill Palestinians in Gaza en masse. As an IDF spokesperson put it: “We’re focused on what causes maximum damage.”
While AI-powered technologies reduce boots on the ground, and therefore potential injuries and deaths among the militaries that deploy them, they greatly increase casualties among those being targeted. The Israeli military claims its AI-powered systems “have minimized collateral damage and raised the accuracy of the human-led process,” but the documented results tell a different story.
Documentation reveals that at least 13,319 of the Palestinians killed were babies and children between 0 and 12 years of age. Researchers say the UN’s reports of Palestinian casualties are conservative, estimating the true death toll to be double or even more than triple the reported figures. According to one report: “So-called ‘smart systems’ may determine the target, but the bombing is carried out with unguided and imprecise ‘dumb’ ammunition because the army doesn’t want to use expensive bombs on what one intelligence officer described as ‘garbage targets.’” Furthermore, 92 percent of housing units and 88 percent of school buildings in Gaza have been destroyed, and 69 percent of all structures across Gaza have been destroyed or damaged.
In 2024, UN experts deplored Israel’s use of AI to commit crimes against humanity in Gaza. Despite all of the above, that same year, Israel signed a global treaty on AI, developed by the Council of Europe, for safeguarding human rights. That Israel has killed so many Palestinians using AI-powered tools, some connected to technologies used in daily life such as WhatsApp, is seen by some as a warning of what could one day befall them, and by others as a blueprint for efficiently systematizing supremacy and control.
This piece argues that the issue is not just the lack of human oversight of data and AI tools; who collects, owns, controls, and interprets the data, and what their biases are (whether implicit or explicit), is key to understanding the actual and potential harm and abuse. Furthermore, focusing exclusively on technology in Israel’s committing of genocide in Gaza, or in any war for that matter, risks a major mistake: absolving the perpetrators of responsibility for crimes they commit using technology. When the tools are over-emphasized, it becomes all too easy to redefine intentional abuses as machine-made mistakes.
When looking at technology’s use in geopolitics and warfare, understanding the power structures is key to gaining a clear overview. Finding the “goodness” in ultra-specific uses of technology does little to offset the “bad.”
For the human beings whose lives have been made more difficult and whose conditions have been made dire by the use of technology for domination, warfare, and systems of supremacy, there is little that can be rationalized for the better. The same can be said of other entities that use their advantages (geopolitical, technological, or otherwise) to assert control over others in relatively more disadvantaged and vulnerable positions. To divorce the helpful applications of technology from the harmful ones is to lose sight of the bigger picture: not only how tech could be used one day, but how it is actually being used right now.