By Kian Biglarbeigi, Ph.D. candidate in International Law at the University of Tehran focusing on international humanitarian law, cyber operations, and the protection of human rights in armed conflicts, and Maedeh Safari, LL.M. in International Law from the University of Tehran

In today’s world, the integration of artificial intelligence (AI) is transforming modern warfare, shifting the landscape from traditional, human-centric command to conflicts employing AI-powered systems. These technologies, including autonomous weapons, offer military advantages such as precision targeting and the ability to deploy non-human entities like robots, potentially reducing military casualties. A pivotal example is the “Lavender” system. While initial reports depicted it as an autonomous platform, subsequent clarifications have described it as a “smart database” that identifies targets for human review. The distinction between Lavender and other AI-based weaponry lies in its comprehensive autonomy throughout the operational process, allowing it to make final decisions with minimal to no human intervention (For more details see here and here).

This autonomy raises critical legal questions under International Humanitarian Law (IHL). The international community has recognized the urgency of these challenges, as reflected in the United Nations (UN) General Assembly resolution A/RES/78/241 on “Lethal autonomous weapons systems”, expressing concern over their global security implications. The challenge is that AI, which operates on prediction, lacks the clear human judgment essential for lawful decision-making in conflict.

The Absence of a Comprehensive Legal Framework for AI-based Weapons

International law may struggle to adapt to and regulate the use of AI-based and automated weapons: no comprehensive international regulatory framework exists to address the concerns surrounding AI-based autonomous and semi-autonomous systems. To illustrate this regulatory void, at the convention level, the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons was established to regulate specific types of physical, tangible weaponry, such as incendiary weapons and blinding laser weapons, and does not extend to algorithmic warfare (Articles 1 and 2). Subsequently, when the international community turned to address modern threats through the recent Draft United Nations Convention against Cybercrime, the resulting instrument is not only not yet in force, but its substantive scope is fundamentally directed at combating cybercrime, not at regulating the use of digital means, let alone weaponized AI, in armed conflict.

This regulatory gap extends to the level of customary international law. For a rule governing the use of AI-based targeting systems to crystallize into custom, there would need to be both widespread and consistent state practice (usus) and a belief that such practice is legally required (opinio juris), as derived from the foundational formulation of customary international law in Article 38(1)(b) of the ICJ Statute. Currently, state practice is fragmented: a handful of states are developing and potentially deploying such systems, including Israel with systems like Lavender and Gospel (See here), while the vast majority are not. More critically, diplomatic positions reveal profound divisions that prevent the formation of a common legal conviction. While many states, like those in the Non-Aligned Movement, call for a legally binding instrument, other major Global South powers actively resist it (See here).

At the level of soft law, numerous contributions have culminated in documents and reports that are significant in this context. The most recent of these are the “Pact for the Future and its annexes: the Global Digital Compact and Declaration on Future Generations”, adopted by world leaders at the Summit of the Future, and the International Committee of the Red Cross (ICRC)’s 2024 report on International Humanitarian Law and the challenges of contemporary armed conflicts. However, these instruments provide no specific rules for AI in cyber warfare, mandating only that any technology regulation must conform to human rights and IHL, with a priority on civilian protection.

In parallel, the lack of any specific rule addressing these weapons leads us to other general instruments related to cyber warfare, including the Tallinn Manual 2.0, drafted by the NATO Cooperative Cyber Defence Centre of Excellence. Although it serves only as a non-binding study (see here and here), it is widely cited and has become a key reference in the field. Within its rules, however, none directly governs the scope and legitimacy of using AI-based systems like Lavender, aside from general rules that point back to IHL. For instance, Rule 80 merely confirms that cyber operations executed in the context of an armed conflict are subject to the law of armed conflict.

While the examined sources, from binding conventions to soft law, are silent on the specific regulation of AI targeting, their collective emphasis on coordination, cooperation, and technical assistance, particularly technical assistance and technology transfer to developing countries, could reshape the evaluation of AI-based autonomous systems in modern conflicts. Many developing nations lack the necessary infrastructure and expertise to effectively utilize advanced AI, creating a destabilizing asymmetry in military capabilities. This asymmetry, in turn, fuels a competitive dynamic that risks precipitating an unregulated AI arms race, a race the international community has explicitly sought to prevent because of its threat to global stability (See here).

Applying Core Principles of IHL to AI Systems

From this, we can infer that no existing provisions currently establish a legal framework for the use of AI-based weapons in armed conflicts. In such a scenario, the only remaining option is to refer back to the existing provisions that have historically established the general legal principles governing conflicts. In light of the Martens Clause, in cases not covered by specific legal provisions, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, principles of humanity, and the dictates of public conscience.

To this end, the primary point of reference is the body of IHL rooted in the Geneva Conventions and their Additional Protocol I (“AP I”). Article 36 of AP I specifically mandates that states assess the legality of any new weapon, means, or method of warfare before its development or deployment, ensuring that it complies with the existing IHL established by the Geneva Conventions and their protocols. Although Article 36 does not explicitly address cyber warfare or cybercrime, its principles highlight the necessity for states to evaluate how emerging forms of conflict align with humanitarian principles such as distinction, proportionality, military necessity, and the prohibition of unnecessary suffering.

Firstly, the principle of distinction, addressed in Article 48 of AP I, mandates that combatants have the capacity to make rational judgments about the circumstances of an attack in order to distinguish between combatants and civilians, as well as between military and non-military objectives, thereby minimizing harm to civilians. Article 13(2) and (3) and Article 4(2) of AP II further reinforce this principle, as do Article 3(7) and (8) of Amended Protocol II to the CCW, Article 2(1) of Protocol III to the CCW, the preamble of the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction (Ottawa Convention), Rule 90 of the Tallinn Manual 2.0, and Rule 1 of the ICRC Customary International Humanitarian Law study. For AI systems like Lavender to adhere to the principle of distinction, they must be equipped with advanced image analysis capabilities, the ability to analyze data from other sensors, and the capacity to integrate those inputs. However, reports indicate that such systems can fail to make these critical legal distinctions. For instance, when the parameters for identifying a “combatant” were broadened, the system reportedly began classifying individuals in civil-service roles, such as police officers and civil defense personnel in non-combat functions, as legitimate military targets (See here and here). This flaw is exacerbated by reports that the human review of each target allegedly took as little as 20 seconds and often amounted to no more than confirming the target’s gender (See here).
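To make this failure mode concrete, the following deliberately simplified Python sketch (all names, roles, scores, and thresholds are invented for illustration and bear no relation to how Lavender is actually engineered) shows how broadening the parameters that define a “combatant” mechanically expands the set of people an automated system flags, sweeping in civil-service roles that IHL treats as civilian:

# Hypothetical illustration only; not a description of any real targeting system.
profiles = [
    {"name": "A", "role": "armed_group_operative", "affiliation_score": 0.93},
    {"name": "B", "role": "police_officer", "affiliation_score": 0.58},
    {"name": "C", "role": "civil_defense_worker", "affiliation_score": 0.55},
    {"name": "D", "role": "teacher", "affiliation_score": 0.10},
]

def flag_targets(profiles, combatant_roles, score_threshold):
    """Flag a profile when its role is listed as 'combatant' or its score clears the threshold."""
    return [
        p["name"] for p in profiles
        if p["role"] in combatant_roles or p["affiliation_score"] >= score_threshold
    ]

# Narrow parameters: only direct participants in hostilities are flagged.
print(flag_targets(profiles, {"armed_group_operative"}, score_threshold=0.90))  # ['A']

# Broadened parameters: widening the role list and lowering the threshold pulls
# police and civil-defense personnel into the target set, which is the
# distinction failure the cited reports describe.
print(flag_targets(
    profiles,
    {"armed_group_operative", "police_officer", "civil_defense_worker"},
    score_threshold=0.50,
))  # ['A', 'B', 'C']

The point of the sketch is not the code but the legal consequence: the same software flags lawful or unlawful targets depending entirely on parameters that are set, and must be verified, by humans.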

Next, the principle of proportionality, addressed in Article 51(5)(b) and Article 57(2)(a)(iii) of AP I, Rule 14 of the ICRC Customary International Humanitarian Law study, and Rule 113 of the Tallinn Manual 2.0, means that parties to a conflict must be able to weigh the damage and injuries caused by attacks on military objectives against the harm inflicted on civilians. If an AI-based system cannot distinguish between military and non-military objects, it will undoubtedly fail to meet this principle. An AI system like Lavender, when programmed to analyze the potential collateral damage of a strike, would be attempting to adhere to this principle and, by extension, the broader concept of military necessity, which justifies only those acts necessary to defeat the enemy. As defined in international law, military necessity permits the application of force to compel the enemy’s submission, but it is always subject to the laws of war (For further analysis see here). Therefore, for Lavender’s actions to be lawful, its proportionality calculation must be an assessment of a “concrete and direct military advantage,” not a pretext for excessive force. If the system’s foundational data is flawed or its algorithms cannot accurately predict complex second-order effects, its proportionality assessment will be fundamentally unreliable and any resulting attack unlawful. Moreover, when an AI weapon is capable of collecting data and updating its own algorithms, it becomes highly vulnerable to cyber-attacks by malicious state or non-state actors (See here).
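The dependence of this balancing test on the quality of the underlying estimates can likewise be illustrated with a deliberately crude Python sketch (all figures are hypothetical, and the reduction of the legal test to a numeric comparison is a simplification; the actual assessment is qualitative and context-dependent):

# Hypothetical illustration only; not a description of any real targeting system.
def proportionality_check(military_advantage, expected_civilian_harm):
    """Crude stand-in for the Article 51(5)(b) balancing test: flag an attack as
    excessive when expected incidental civilian harm outweighs the anticipated
    concrete and direct military advantage (both reduced here to invented scores)."""
    if military_advantage <= 0:
        return "no concrete and direct military advantage - refrain"
    if expected_civilian_harm > military_advantage:
        return "expected harm excessive - refrain"
    return "not excessive on this estimate"

# With a sound estimate of second-order effects, the strike is flagged as excessive.
print(proportionality_check(military_advantage=2, expected_civilian_harm=15))

# If flawed data understates those effects, the very same strike is waved through,
# which is why unreliable inputs render the whole assessment legally worthless.
print(proportionality_check(military_advantage=2, expected_civilian_harm=1))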

Lastly, Article 35(2) of AP I prohibits the employment of means or methods of warfare of a nature to cause superfluous injury or unnecessary suffering to both the civilian population and combatants. This principle, reaffirmed in Rule 104 of the Tallinn Manual 2.0, means that any AI-based system used in warfare must be capable of understanding the context and circumstances surrounding its targets. A key aspect of this rule is the safeguard of persons hors de combat, as defined in Article 41 of AP I, which prohibits attacking anyone who is surrendering, is incapacitated by wounds, or is otherwise in the power of an adverse party. As clarified in the ICRC Commentary on Article 41 of AP I, paragraph 1630, surrender is effected when a person expresses a clear and unambiguous intention to submit, such as by laying down arms. An AI system like Lavender, which operates on data patterns and algorithms, lacks the capacity to perceive and interpret the complex, contextual human behaviors that signify surrender, such as a raised hand or a clearly expressed intention to yield (See here). Attacking such individuals constitutes a grave violation and inflicts unnecessary suffering.

Conclusion

In conclusion, while existing international instruments contain no specific rules for AI-based weapons, IHL principles remain applicable under the general provisions of international law. As analyzed, AI-based weapons like Lavender face challenges in complying with these principles: the principle of distinction, due to an inability to make context-dependent judgments between combatants and civilians; the principle of proportionality, owing to a lack of capacity for the nuanced, ethical balancing of military advantage against incidental civilian harm; and the prohibition of unnecessary suffering, given the incapacity to perceive complex human signals, such as surrender. Therefore, for systems like Lavender to operate within the bounds of IHL, human intervention and oversight are indispensable to mitigate their inherent risks. This requirement stems not only from technical necessity but from fundamental legal obligations: Common Article 1 of the Geneva Conventions establishes the duty to “respect and ensure respect” for IHL. Furthermore, the Martens Clause reaffirms that in the absence of specific treaties, civilians and combatants remain under the protection of principles of humanity and the dictates of public conscience.

Ultimately, this legal gap underscores the urgent need for states to actively shape the legal landscape. Beyond binding conventions, the crystallization of customary law through consistent state practice and accepted obligation is paramount. This process of international norm-building can be significantly advanced through soft law declarations and codes of conduct, which guide state behavior and create the foundation for future legally binding instruments.

Image credit: Photo by Michael Dziedzic on Unsplash