The New Face of Conflict
From the drone-filled skies over Ukraine, where algorithms guide drones through electronic jamming, to the data centers processing intelligence for the war in Gaza, a new combatant has entered the theater of war: artificial intelligence. This is not the stuff of science fiction, but the evolving reality of 21st-century conflict. While the specific applications may differ, these fronts represent a single, unfolding revolution: the deep integration of AI into the core functions of surveillance, analysis, and lethal action. As nations and armed groups increasingly delegate critical tasks to algorithms, we are forced to confront a profound question: are we crossing an ethical and legal Rubicon by automating aspects of lethal decision-making? The introduction of AI into warfare is more than a technological evolution; it is a fundamental challenge to the established laws of war, the nature of human accountability, and the very stability of the global security order.
I. Defining the Revolution: What is AI in Warfare?
The phrase “AI in war” often conjures images of sentient “killer robots,” but the reality is both broader and more nuanced. Artificial intelligence is not a single entity but a wide spectrum of technologies fundamentally reshaping military strategies, tactics, and decision-making processes.
Beyond “Killer Robots”: The Broad Spectrum of Military AI
At its core, the current military AI revolution is about data. The primary role of AI is to achieve “information dominance” by sifting through massive, disparate datasets (from satellite imagery and drone surveillance feeds to intercepted communications) to identify patterns, correlations, and actionable insights at a scale and speed far beyond human capabilities. This capability extends across multiple domains:
- Intelligence, Surveillance, and Reconnaissance (ISR): Machine learning algorithms analyze vast quantities of data to identify potential targets, track enemy movements, and provide predictive analysis of adversary actions.
- Cyber Warfare: AI is a dual-use tool in the digital domain. Defensively, it powers intrusion detection systems that identify and respond to network threats in real time. Offensively, it can be used to develop sophisticated malware that adapts to its environment or to automate the discovery of vulnerabilities in enemy systems.
- Decision Support: By fusing data from disparate sources like radar, sonar, and human intelligence, AI creates a unified operational picture. This enhances situational awareness and allows commanders to make faster, more informed decisions in complex, rapidly changing environments. (A minimal sketch of this kind of sensor fusion follows this list.)
- Psychological Operations (Psyops): AI-driven profiling can create detailed audience segments, enabling specialists to craft and disseminate messages tailored to the psychological triggers of specific demographics to influence public opinion or enemy morale.
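To make the decision-support idea concrete, here is a minimal, purely illustrative sketch of sensor fusion: several independent, noisy estimates of the same quantity are combined by inverse-variance weighting, the basic principle underlying Kalman-style data fusion. The sensor names and numbers are invented for illustration and do not describe any real military system.

```python
# Toy multi-sensor fusion: each sensor reports (value, variance) for the same
# quantity (here, a target's east-west position in metres). The fused estimate
# weights each reading by the inverse of its variance, so more reliable sensors
# count for more, and the fused estimate is more certain than any single input.

def fuse(estimates):
    """estimates: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_variance = 1.0 / sum(weights)  # always smaller than the best single variance
    return fused_value, fused_variance

readings = [
    (1042.0, 25.0),   # radar track: accurate but intermittent
    (1050.0, 100.0),  # acoustic bearing converted to position: noisy
    (1030.0, 400.0),  # human report: coarse
]
value, variance = fuse(readings)
print(f"fused position ~{value:.1f} m, variance {variance:.1f}")
```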
The Critical Concept: Levels of Autonomy
The central ethical and legal debate surrounding military AI hinges not on the technology itself, but on the degree of human involvement in its operation. Autonomy is best understood as a system’s ability to accomplish its goals independently or with minimal human supervision, particularly in complex and unpredictable environments. The objective for many militaries is to advance from systems requiring constant human oversight to those with higher levels of autonomy enabled by machine learning.
To clarify this spectrum, the U.S. Department of Defense (DoD) Directive 3000.09 provides a useful framework that categorizes systems based on the role of the human operator rather than on their technological sophistication. This human-centric definition is crucial to understanding the political and legal debates. The main categories are listed below, followed by a minimal code sketch of how the three modes differ:
- Human-in-the-Loop (Semi-Autonomous): The system can only engage individual targets or specific groups of targets that have been explicitly selected by a human operator. A classic example is a “fire-and-forget” missile, which autonomously guides itself to a human-designated target.
- Human-on-the-Loop (Human-Supervised): A human operator monitors the system’s actions and has the ability to intervene and halt an engagement if necessary. The machine may select the target, but a human retains final veto power.
- Human-out-of-the-Loop (Fully Autonomous): Once activated, the system can independently search for, identify, select, and engage targets without any further intervention by a human operator. These are what are formally known as Lethal Autonomous Weapon Systems (LAWS).
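The practical difference between these categories comes down to where, if anywhere, human input gates the decision to engage. The sketch below is an illustrative model of the three control modes, not a description of any real weapon system; the mode names, flags, and the decision gate itself are invented for illustration.

```python
from enum import Enum, auto

class ControlMode(Enum):
    """Illustrative labels for the three DoD Directive 3000.09 categories."""
    HUMAN_IN_THE_LOOP = auto()      # semi-autonomous: human selects each target
    HUMAN_ON_THE_LOOP = auto()      # human-supervised: human retains veto power
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous: no further human input

def may_engage(mode: ControlMode, human_designated: bool, human_veto: bool) -> bool:
    """Toy decision gate showing where human input enters each mode.

    human_designated -- a human explicitly selected this target
    human_veto       -- a supervising human has intervened to halt the engagement
    """
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Engagement only against targets a human has explicitly chosen.
        return human_designated
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # The machine may propose the target, but a human can still stop it.
        return not human_veto
    # Fully autonomous: once activated, no further human input is consulted.
    return True

# The gate makes the policy difference visible:
print(may_engage(ControlMode.HUMAN_IN_THE_LOOP, human_designated=False, human_veto=False))   # False
print(may_engage(ControlMode.HUMAN_ON_THE_LOOP, human_designated=False, human_veto=True))    # False
print(may_engage(ControlMode.HUMAN_OUT_OF_THE_LOOP, human_designated=False, human_veto=False))  # True
```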
The absence of an internationally agreed-upon definition of LAWS is not a simple diplomatic oversight; it is a central feature of the political stalemate surrounding their regulation. Nations at the forefront of AI development, such as the United States and Russia, have resisted narrow, technology-specific definitions that could quickly become obsolete or inadvertently prohibit systems they view as advantageous. Instead, U.S. policy focuses on ensuring “appropriate levels of human judgment” over the use of force, a deliberately flexible and subjective standard. This approach allows these states to argue that even highly advanced systems remain under a form of broad human control (e.g., by setting the rules of engagement or the operational area), thereby technically complying with their interpretation of the law. This definitional ambiguity is a political tool, creating a significant barrier to any potential regulatory treaty by allowing powerful states to maintain maximum freedom for development while participating in diplomatic talks.
II. The Promise and the Peril: The Dual-Use Dilemma of Military AI
The rapid integration of AI into military arsenals is driven by a compelling set of perceived advantages. Proponents argue that these technologies can make warfare not only more effective but also more humane. However, critics warn that these same technologies pose existential risks to legal norms and human moral agency.
The Case for Automation: A More “Effective and Humane” Warfare?
The arguments in favor of military AI center on overcoming the limitations, both physical and psychological, of human soldiers.
- Operational Superiority: AI-enabled systems can process information and execute decisions at superhuman speeds, providing a decisive advantage in the compressed timelines of modern combat. Furthermore, fully autonomous systems are envisioned to operate effectively in environments where communications are jammed or denied, situations in which remotely piloted systems would be rendered useless.
- Enhanced Precision and Force Protection: Proponents contend that autonomous weapons can attack targets with greater precision than human-directed systems, potentially reducing unintended collateral damage. A primary driver for their development is force protection; deploying unmanned systems for “dull, dirty, or dangerous” missions removes human soldiers from harm’s way, thereby saving military lives and resources.
- Overcoming Human Frailty: A key argument is that machines are immune to the human factors that can lead to battlefield errors and atrocities. An AI system is not influenced by emotions like fear, anger, exhaustion, or a desire for revenge, nor does it suffer from the stress and fatigue that degrade human judgment. The theory is that an autonomous system, dispassionately applying pre-programmed rules of engagement, could behave more “humanely” than a human soldier acting under extreme duress.
The Inherent Peril: The Case Against Algorithmic Warfare
Conversely, a global coalition of critics, including non-governmental organizations, academics, and a growing number of states, raises fundamental objections to the automation of warfare.
- Erosion of Human Control and Moral Agency: The primary ethical risk is the delegation of life-and-death decisions to machines. This trend reduces meaningful human control and devalues the unique human skills and judgment essential for the lawful and ethical application of force.
- The “Black Box” Problem and Unpredictability: Many advanced AI systems, particularly those based on machine learning, are inherently opaque. Their internal decision-making processes can be unexplainable even to their own designers. As these systems learn and adapt based on new data from the battlefield, their behavior can become unpredictable, posing an unacceptable risk when lethal force is involved.
- Dilution of Responsibility: The distributed nature of AI development, involving programmers, manufacturers, data providers, and military users, combined with the unpredictability of the machine’s actions, creates a profound “accountability gap.” When an autonomous system makes an unlawful kill, it becomes incredibly difficult to assign legal or moral responsibility to any single human actor.
The central justification for military AI, that it can be more ethical by removing flawed human emotions, contains a deep philosophical contradiction. This argument presents human emotion as a pure liability, citing battlefield atrocities driven by fear or revenge as problems that automation can solve. However, this perspective fails to recognize that the very same human psyche that produces cruelty also produces compassion. As organizations like the International Committee of the Red Cross (ICRC) have argued, emotions are “indispensable for effective and flexible moral evaluation, reasoning, intuition, empathy, [and] self-regulation”. The deep, visceral inhibition a human soldier might feel before killing a non-combatant, and the capacity for mercy toward a surrendering enemy, are moral safeguards that arguably can never be replicated in an algorithm. In stripping away the potential for human error, we risk stripping away the capacity for human conscience, leading to a colder, more ruthlessly efficient, but ultimately less moral form of warfare.
III. The Law of War in the Age of the Algorithm
Any weapon or tactic, whether a spear or a sophisticated AI, is governed by the same body of law: International Humanitarian Law (IHL), also known as the law of armed conflict. This legal framework was designed by humans, for humans, and its core principles present profound challenges for autonomous systems.
The Bedrock of Restraint: Core Principles of International Humanitarian Law (IHL)
IHL seeks to impose limits on the brutality of war by balancing military necessity with humanitarian concerns. Its foundational principles include:
- Humanity: Forbids the infliction of suffering, injury, or destruction not strictly necessary for the accomplishment of legitimate military purposes.
- Military Necessity: Justifies only that degree and kind of force required to achieve the enemy’s submission, and which is not otherwise prohibited by IHL.
- Distinction: This is the cornerstone of IHL. Parties to a conflict must at all times distinguish between combatants, who may be lawfully targeted, and civilians, who are protected from direct attack.
- Proportionality: An attack is prohibited if the expected incidental loss of civilian life, injury to civilians, or damage to civilian property would be “excessive in relation to the concrete and direct military advantage anticipated”.
- Precaution: In all military operations, constant care must be taken to spare the civilian population. All feasible precautions must be taken to verify targets and to avoid or minimize incidental civilian harm.
AI on Trial: The Challenge to IHL Principles
Autonomous weapon systems face immense difficulty in complying with these nuanced, context-dependent legal requirements.
- The Challenge to Distinction: In contemporary conflicts, where combatants often do not wear uniforms and operate among civilian populations, distinguishing a threat is not just a matter of visual identification but of assessing intent, a uniquely human cognitive task. An AI system relying on pattern matching could easily misidentify a civilian carrying a tool as a combatant carrying a weapon, with fatal consequences.
- The Challenge to Proportionality: The proportionality test is not a simple mathematical equation; it is a complex, value-laden judgment. An AI might be able to quantify potential casualties or physical damage, but it cannot qualitatively assess the “military advantage” of a strike, which is a strategic and context-dependent human judgment. This is especially true for cyber operations, where the cascading, reverberating effects on interconnected civilian infrastructure are incredibly difficult to predict and weigh.
- The Challenge to Precaution: The obligation to take “feasible precautions” implies a dynamic assessment of a changing situation and the ability to cancel or suspend an attack if it becomes apparent that it may violate the rule of proportionality. It is questionable whether a pre-programmed system, however sophisticated, can truly adapt its behavior to spare civilian life in unforeseen circumstances, or whether it will rigidly execute its mission parameters regardless of new information.
The attempt to make AI systems fully compliant with principles like distinction and proportionality reveals a fundamental category error. These legal tenets do not demand mere calculation; they require judgment. IHL was crafted to guide the conscience and reasoning of human commanders. An AI, particularly one based on machine learning, does not “reason” or “understand” in a human sense; it performs statistical correlation and pattern matching based on the data it was trained on. To ask an algorithm to make a proportionality assessment is akin to asking a calculator to appreciate a work of art. It can process inputs and produce an output based on a pre-set formula, but it cannot perform the qualitative, ethical balancing act that the law demands of a human. This suggests that the challenge is not simply about building “smarter” AI. It implies that for an action to be truly compliant with the spirit of IHL, these specific, value-laden judgments must always remain with a human, limiting AI to a supportive, data-processing role.
Finally, the Martens Clause, a foundational provision of IHL, serves as a legal and ethical backstop. It stipulates that even in cases not covered by specific treaty law, all conduct in war remains subject to “the principles of humanity and the dictates of public conscience”. This raises the ultimate question: does delegating the final decision to kill a human being to a machine, regardless of its supposed legality or efficiency, inherently violate the dictates of public conscience?
IV. The Unblinking Eye: AI on the Frontlines in Ukraine and the Middle East
The ethical debates surrounding military AI are not theoretical. These systems are being developed, tested, and deployed in active combat zones, providing two distinct case studies of how this technology is changing the character of modern war.
Case Study: The Russia-Ukraine War – A Race for Tactical Adaptation
In the near-peer conflict in Ukraine, AI has become a critical tool in a high-tech war of attrition, primarily focused on gaining tactical advantage on a fluid battlefield.
- The Drone and Counter-Drone Struggle: Both Russia and Ukraine are locked in a rapid technological race to develop and deploy drones with AI and machine learning (ML) capabilities. A primary goal is to automate targeting and create systems that can operate despite the pervasive electronic warfare (EW) that jams communication and navigation signals.
- Machine Vision as a Key Adaptation: A key innovation has been the integration of “machine vision” into drones. This allows a drone to be shown an image of a target (e.g., a tank) and then use its onboard processing to lock onto and home in on that target, even if its connection to the human operator is severed by jamming. This represents a significant step toward autonomy, though it still relies on a human to make the initial target identification. (A toy sketch of this kind of lock-on loop follows this list.)
- Emerging Capabilities and Battlefield Management: Both sides are testing more advanced concepts, such as AI-managed drone swarms and “mothership” drones that can autonomously fly deep into enemy territory to deliver smaller attack drones. Ukraine, in particular, has developed sophisticated battlefield management systems like “Delta,” a cloud-based platform that fuses intelligence from drones, satellites, and frontline reports into a common operational picture. This system not only aids current decision-making but also creates a rich data environment for training future AI models.
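The lock-on behaviour described above can be illustrated with a textbook computer-vision sketch: the operator designates a target image before launch, and an onboard loop then re-locates that template in each camera frame by normalized cross-correlation, needing no live data link, which is why jamming the link does not break this kind of guidance. This is a toy OpenCV example built on that assumption, not any fielded guidance system; the file paths are placeholders.

```python
# Toy template-matching "lock-on" loop. The operator supplies a reference image
# of the target before launch; the onboard loop then finds that template in
# each new camera frame. Nothing in the loop depends on a working operator link.
import cv2

template = cv2.imread("designated_target.png", cv2.IMREAD_GRAYSCALE)  # chosen by the operator
th, tw = template.shape

def locate(frame_bgr, min_confidence=0.6):
    """Return the target's bounding box in this frame, or None if lock is lost."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, (x, y) = cv2.minMaxLoc(scores)
    if best < min_confidence:
        return None                    # lost lock: a real system would coast or abort
    return (x, y, tw, th)

cap = cv2.VideoCapture("onboard_camera.mp4")  # stands in for the live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = locate(frame)
    if box is not None:
        # In a real airframe this offset would feed the flight controller;
        # here we only report where the template was found.
        print("target at", box)
```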
Case Study: The Israel-Hamas Conflict – A Revolution in Targeting at Scale
The war in Gaza offers a different model of AI warfare, one characterized by the use of AI in an asymmetric conflict to process vast amounts of data and generate targets at an industrial scale.
- The AI-Powered “Target Factory”: Israel has reportedly deployed a suite of AI systems to radically accelerate its targeting cycle. These classified systems are known by names like:
  - “Lavender”: An AI-powered database that, at one point, had identified as many as 37,000 Palestinian men as potential low-ranking militants based on analysis of their associations and communication patterns. The system is reported to have an error rate of approximately 10% (see the rough arithmetic after this list).
  - “Gospel”: An AI system that analyzes surveillance data to recommend the bombing of infrastructure targets, such as buildings and other facilities alleged to be used by militants.
  - “Where’s Daddy?”: An algorithm designed to track targeted individuals and alert the military when they enter their private residences, enabling strikes on them while at home with their families.
- The “Dehumanization of ISR”: According to reports from former and current Israeli intelligence officials, the use of these systems has led to a “dehumanization” of the targeting process. Human analysts reportedly act as “rubber stamps,” sometimes spending as little as 20 seconds to review and approve an AI-generated target before authorizing an airstrike. This has been accompanied by a reported loosening of the rules of engagement, permitting hundreds of civilian casualties in strikes aimed at a single senior militant commander.
- The Role of Big Tech: This new form of warfare is enabled by commercial technology. U.S. tech giants, including Microsoft (through its investment in OpenAI), Google, and Amazon, provide the cloud computing and AI services that power these military systems, in some cases under a $1.2 billion contract known as “Project Nimbus”. This marks a pivotal moment where commercial AI models, not originally designed for warfare, are being directly used to help decide who lives and who dies, sparking ethical debates and protests within the tech companies themselves.
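Taking the reported figures for “Lavender” at face value, a back-of-the-envelope calculation shows what a 10% error rate means at that scale:

$$37{,}000 \times 0.10 \approx 3{,}700 \text{ people potentially misidentified}$$

Both numbers come from press reporting and are contested, but the arithmetic illustrates why an error rate that sounds small becomes consequential once targeting is industrialized.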
The conflicts in Ukraine and Gaza are not just different applications of the same technology; they represent two distinct emerging doctrines of AI-enabled warfare. Ukraine showcases an “adaptive model,” where AI is a tool for tactical advantage in a symmetric conflict, driven by the need to overcome specific enemy capabilities like EW. Here, the human operator often remains closely involved out of tactical necessity. The goal is to make each strike more effective. In contrast, Gaza showcases an “industrial model,” employed in an asymmetric conflict where one side possesses overwhelming technological superiority. Here, AI is a tool for achieving scale and efficiency, processing data to generate targets at a rate far beyond human capacity. This model, by its very design, pushes toward the accountability gap and loss of human control that critics fear most. It reframes targeting not as a series of discrete, carefully considered moral acts, but as an industrial process to be optimized, a fundamental and deeply troubling shift in the ethics of war.
V. The Core Ethical Quagmires
Beyond the battlefield, the use of AI in war creates a series of profound ethical and legal challenges that strike at the heart of our systems of justice and moral responsibility.
The Accountability Gap: Who is Responsible When a Machine Kills Unlawfully?
This is arguably the most significant and intractable obstacle to the lawful deployment of fully autonomous weapons. When an autonomous system makes a mistake and kills unlawfully, holding someone responsible is fraught with difficulty.
- The Problem of Mens Rea (Criminal Intent): Under international law, a war crime requires both an unlawful act (actus reus) and a guilty mind, or criminal intent (mens rea). A machine, as an inanimate object, cannot possess intent, moral agency, or consciousness. Therefore, it cannot be held criminally liable for its actions. This creates a novel accountability gap where the entity that actually selected and engaged the target cannot be held legally responsible for the crime.
- The Chain of Human Responsibility Breaks Down: Tracing responsibility back to a human becomes exceedingly difficult. A programmer or manufacturer could not have foreseen the specific, unpredictable action the AI would take in a dynamic combat environment, especially if the system is designed to learn and adapt on its own. A military commander who deploys the system may not have intended the unlawful outcome and may have been physically unable to predict or prevent the AI’s action due to its speed and autonomy, breaking the legal doctrine of command responsibility.
- An Exacerbation of an Existing Flaw: This technological problem magnifies a pre-existing weakness in IHL. The law already permits a significant amount of “awful but lawful” incidental and accidental civilian harm for which there is no international accountability mechanism. AI, by enabling actions at greater speed and scale and introducing new forms of error (e.g., software bugs, automation bias), dramatically expands the potential for such harm, making the lack of accountability a more acute and pervasive problem.
The Search for “Meaningful Human Control” (MHC)
In response to the accountability gap, the international community has focused on the concept of “meaningful human control” as a potential solution, but the term itself is a point of contention.
- What is “Meaningful” Control? There is no universally agreed-upon definition. Proponents of strong regulation argue that MHC requires more than just having a human in the loop to press a button. It demands predictable, reliable, and transparent technology; that the human user has accurate information about the system and its context; and that there is a genuine opportunity for timely human judgment and intervention. The goal is to preserve human agency and moral responsibility over the use of force.
- The U.S. Alternative: “Appropriate Levels of Human Judgment”: In contrast, the official policy of the U.S. DoD deliberately avoids the term MHC, opting instead for the more flexible phrase “appropriate levels of human judgment”. This standard is context-dependent and does not require direct manual control of every engagement. It allows for a broader interpretation of control, such as a human setting the system’s mission parameters and rules of engagement, which grants maximum operational flexibility.
This semantic debate between “Meaningful Human Control” and “Appropriate Human Judgment” is not merely about wording; it is a proxy war for the fundamental political conflict over the future of autonomous weapons. “Meaningful Human Control” is the banner of the coalition of states and NGOs pushing for a legally binding treaty with clear, hard limits on autonomy. “Appropriate Human Judgment” is the preferred language of nations like the U.S. that are developing these technologies and wish to maintain flexibility and resist binding constraints. The battle over which term becomes the international standard is therefore a battle over the very nature of any future regulation.
Bias in the Code: The Risk of Automated Discrimination
AI systems are not inherently objective; they are reflections of the data they are trained on and the assumptions of their creators. This introduces the risk of algorithmic bias.
- Definition and Sources: Algorithmic bias refers to systematically skewed performance by an AI system that leads to unjustifiably discriminatory outcomes based on social characteristics like race, gender, or ethnicity. This bias can enter a system through incomplete or prejudiced training data (e.g., if surveillance data disproportionately covers one population group) or through the flawed assumptions of its developers. (A minimal simulation of this mechanism follows this list.)
- Life-or-Death Consequences: In a military context, the stakes of bias are absolute. A biased facial recognition algorithm at a checkpoint or a threat-assessment algorithm used for targeting could lead to the wrongful detention, injury, or death of individuals from a specific ethnic or social group, effectively embedding discrimination into the machinery of war.
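As a minimal illustration of how skewed training data alone can produce discriminatory outcomes, the simulation below tunes a detection threshold on data drawn almost entirely from one group and then applies it to another. Every number and label is invented; the point is only the mechanism, not any real system or dataset.

```python
# Minimal simulation of bias from unrepresentative training data. The "threat
# score" stands in for whatever signal a real classifier would compute.
import numpy as np

rng = np.random.default_rng(0)

# Benign civilians in the two groups have different baseline scores, e.g. because
# the surveillance signal correlates with ordinary behaviour that differs between
# communities. All distributions are invented for illustration.
benign_a = rng.normal(0.0, 1.0, 100_000)   # group A: well represented in training data
benign_b = rng.normal(1.0, 1.0, 100_000)   # group B: barely represented in training data

# The decision threshold is tuned for a 5% false-positive rate, but only against
# the data the developers actually had: group A.
threshold = np.quantile(benign_a, 0.95)

fpr_a = np.mean(benign_a > threshold)      # ~5% by construction
fpr_b = np.mean(benign_b > threshold)      # roughly 25%: the bias is structural

print(f"threshold tuned on group A: {threshold:.2f}")
print(f"false-positive rate, group A: {fpr_a:.1%}")
print(f"false-positive rate, group B: {fpr_b:.1%}")
```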
VI. The New Geopolitics: An AI Arms Race and the Struggle for Regulation
The development of military AI is not happening in a vacuum. It is being driven by intense geopolitical competition and, in turn, is reshaping global power dynamics.
The Inevitable Arms Race
The competition between the United States and China, with Russia as another significant player, is widely described as an AI arms race, or even a new “AI Cold War”. This race is fueled by the powerful belief, articulated by leaders like Russian President Vladimir Putin, that whichever nation masters AI will achieve global dominance. This competitive pressure creates perverse incentives to cut corners on safety protocols and ethical considerations in a rush to be the first to deploy a new capability, increasing the risks of catastrophic accidents and unintended escalation.
The Proliferation Problem: When “Killer Robots” Go Global
As AI technology becomes cheaper, more powerful, and more accessible, there is a grave risk of proliferation beyond the major military powers. Autonomous systems could fall into the hands of smaller states or non-state actors, including terrorist organizations and militias, who would operate without any regard for IHL or ethical norms. Furthermore, some analysts fear that the availability of autonomous systems, which reduce the risk of casualties among a state’s own soldiers, could lower the political threshold for using force, making armed conflict a more frequent and easily chosen policy option.
The Uphill Battle for a Treaty: Diplomacy in the Shadow of Development
For over a decade, the United Nations has been the primary venue for discussions on regulating LAWS, but progress has been painfully slow.
- The UN CCW Group of Governmental Experts (GGE): This has been the main forum for talks within the Convention on Certain Conventional Weapons (CCW) since 2014. While the GGE has produced a set of 11 non-binding Guiding Principles, it has been unable to agree on a legally binding instrument. Its consensus-based model means that a few powerful states can effectively veto any meaningful progress.
- A Divided World: A large and growing majority of states now support the negotiation of a new international treaty with clear prohibitions on the most dangerous systems and strict regulations on all others. However, the key military powers developing these weapons continue to block progress. Frustration with this deadlock recently led to a UN General Assembly resolution calling for talks to be held outside the CCW framework, a sign that the international community is seeking a new path forward. The UN Secretary-General and the ICRC have called for the conclusion of a legally binding treaty by 2026, framing it as an urgent humanitarian imperative.
The core of this diplomatic stalemate lies in the deeply divergent positions of the world’s major military powers.
| Country | Official Stance on a LAWS Ban | Key Policy/Doctrine | Diplomatic Posture in the UN CCW |
| --- | --- | --- | --- |
| United States | Opposes a pre-emptive ban. Argues LAWS can be more precise and humane. | DoD Directive 3000.09 emphasizes “appropriate levels of human judgment” over lethal force, a flexible standard that does not require direct control. | Resists a legally binding treaty. Advocates for a non-binding framework of best practices and argues that existing IHL is sufficient to govern LAWS. |
| China | Supports a ban on the use of fully autonomous systems, but strategically avoids mention of development, production, or export. | Pursues “intelligentized warfare” as a core military modernization goal. Engages in strategic ambiguity, publicly supporting some limits while aggressively developing its own capabilities. | Proposes a new CCW protocol but with a narrow scope limited to use. Accused of using the consensus process to delay meaningful regulation while it catches up technologically. |
| Russian Federation | Strongly opposes any legally binding instrument, moratorium, or other prohibitions on LAWS. | Views AI as essential for future military superiority and argues LAWS can be more effective than human soldiers. | Actively blocks consensus on a new treaty within the CCW, which it calls the “only optimal forum.” Argues existing IHL is “fully applicable and sufficient” and needs no modernization. |
Conclusion: A Choice for Humanity
The integration of artificial intelligence into warfare represents a paradigm shift, not an incremental change. It has brought to the fore a series of irreconcilable tensions: the promise of military efficiency versus the peril of dehumanization; the human-centric framework of international law versus the cold logic of the algorithm; and the urgent need for global governance versus the powerful momentum of a geopolitical arms race. The abstract ethical and legal debates are no longer theoretical. The “accountability gap” is measured in the rubble of apartment buildings in Gaza. The challenge of “distinction” is a life-or-death reality for drone operators and their targets in Ukraine. The consequences of this technological revolution are being written in real time, in human lives.
The choice before the international community is not whether to use AI, but how to govern its use. This requires urgent, principled action on multiple fronts. It demands greater transparency from states and the tech companies that enable them, shedding light on the capabilities and rules governing these powerful systems. It requires a global consensus on a high standard for robust human control, ensuring that the ultimate moral and legal responsibility for the use of lethal force remains firmly with human beings. And it necessitates renewed and intensified efforts to negotiate a legally binding international instrument that establishes clear prohibitions on the most dangerous types of autonomous weapons and strict regulations on all others. The technologies we build for war are ultimately a reflection of our values. In the code we write, the policies we enact, and the treaties we forge, we are making a choice about the future of armed conflict and, ultimately, about the place of humanity within it.