Nuno Marques is a Doctoral Researcher in Law and Cybersecurity at Leeds Law School, Leeds Beckett University, and Assistant Professor in Cybersecurity at Oslo Metropolitan University. He also serves as Deputy Mayor of Notodden, Norway, and is an International Election Observer with the Norwegian Refugee Council.

Artificial intelligence (AI) is rapidly transforming the operational environment of cyberspace. States increasingly deploy AI-enabled systems to detect vulnerabilities, analyse large volumes of data and support cyber operations. These technologies promise faster decision-making, improved threat detection and greater operational precision. Yet they also introduce new legal challenges. Machine-learning systems frequently operate through complex models whose internal processes are difficult even for their designers to interpret. Their behaviour may evolve over time, interact unpredictably with digital environments and produce outcomes that cannot easily be traced to a specific human decision.

AI-enabled systems are also increasingly embedded upstream of the use of force. Military decision-support platforms now assist with intelligence analysis, target identification and the modelling of operational consequences. As recent analysis by Yéelen Marie Geairon for the International Committee of the Red Cross highlights, such systems can structure what decision-makers are able to anticipate, compare and justify before an attack. By modelling cascading effects on interconnected systems such as electricity, water supply or telecommunications, AI tools reshape how foreseeability is operationalised in military decision-making. At the same time, the growing speed of algorithmically assisted targeting compresses operational timelines. Recent reporting on the use of AI-assisted targeting systems during strikes against Iran suggests that automated decision-support tools may significantly shorten military planning cycles, raising concerns that human legal review may occur under severe time pressure.

These developments raise an important question for international law: How should responsibility be assessed when harmful cyber effects arise from partially autonomous systems whose behaviour cannot easily be explained through human intention?

The answer may lie in a gradual doctrinal shift toward objective standards of responsibility based on effects, causation and reasonable foreseeability. Rather than attempting to reconstruct the subjective intentions of decision-makers, international law increasingly evaluates cyber operations by reference to their consequences. AI reinforces this trend because the opacity and autonomy of algorithmic systems make subjective mental states difficult and sometimes impossible to determine.

From Intention to Effects in the Jus ad Bellum

Historically, international legal doctrine has often relied on subjective concepts such as intention, knowledge or mistake when assessing violations of primary rules. States accused of wrongful conduct have frequently attempted to justify their actions by arguing that harm resulted from misunderstanding, misidentification or operational error rather than deliberate intent. In disputes concerning the use of force, such arguments implicitly rely on the idea that responsibility depends on the mental state of the acting state.

International jurisprudence, however, has long emphasised that the legal assessment of force ultimately turns on objective facts rather than subjective motivations. The International Court of Justice (ICJ) has famously held that whether an operation constitutes an “armed attack” must be evaluated by reference to its “scale and effects” rather than the type of weapon used or the intentions behind the operation (see Nicaragua v. United States of America (1986), para 195).

This effects-based logic increasingly shapes how states interpret the prohibition on the use of force in cyberspace. A growing number of governments have published official statements explaining how international law applies to cyber operations. Several states, including Australia, Estonia, France and the United States, assess whether a cyber operation amounts to a use of force primarily by examining the consequences it produces. Cyber activities may qualify as a use of force when their effects resemble those produced by conventional military attacks, such as causing death, injury, physical destruction or severe disruption of critical infrastructure (see, for example, the United States’ position on international law in cyberspace).

Importantly, these formulations rarely require proof that the harmful consequences were specifically intended. Instead, the legal question focuses on whether the operation produced or could reasonably be expected to produce effects comparable to kinetic force. The same reasoning appears in the widely cited Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, which treats the “directness” of the link between a cyber operation and its consequences (Rule 69, definition of use of force) as a key factor in determining whether a cyber activity constitutes a use of force.

AI and the Erosion of Subjective Fault

AI strengthens this doctrinal trend because AI-enabled cyber operations challenge the assumption that harmful conduct can always be traced to identifiable human intentions. Machine-learning systems frequently operate through complex probabilistic models that adapt to data inputs and environmental conditions. Their internal decision-making processes may therefore be opaque to operators and developers alike.

This opacity creates significant difficulties for legal frameworks that rely on reconstructing subjective mental states. If an autonomous cyber capability produces unintended harmful effects, determining whether those effects were intended, foreseeable or accidental may become extremely difficult. The behaviour of the system may emerge from interactions between algorithms, training data and network conditions that cannot easily be reconstructed after the fact.
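To make this opacity concrete, consider the following minimal Python sketch. It is purely illustrative: the synthetic data, the random-forest model and the “threat classifier” framing are assumptions introduced for the example, not a description of any deployed military system.

```python
# Illustrative sketch only: a toy "threat classifier" showing why a
# machine-learning verdict is hard to trace to one human decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for network-event features (an assumption of this
# example, not real operational data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

event = X[:1]  # a single observed event
print("verdict:", model.predict(event)[0])
print("probabilities:", model.predict_proba(event)[0])

# The verdict aggregates hundreds of independently grown decision trees,
# each splitting on machine-chosen thresholds. Explaining why the event
# was flagged requires inspecting every tree's path after the fact:
votes = [tree.predict(event)[0] for tree in model.estimators_]
print(f"{int(sum(votes))} of {len(votes)} trees voted for class 1")
# No single rule, and no single human choice, produced the outcome.
```

Even in this toy setting, the “reason” for the output is distributed across hundreds of learned rules; scaled up to adaptive operational systems, the evidentiary problem for intent-based legal analysis grows correspondingly.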

Scholars examining the governance implications of AI, such as Russell Buchan and Matthijs Maas, have therefore highlighted the growing importance of objective standards in technologically complex environments. The opacity of algorithmic systems can make it difficult to determine exactly why a particular decision was made, raising concerns about accountability and responsibility in automated decision-making contexts.

These challenges are particularly acute in cyberspace. Cyber operations already occur at speeds that strain meaningful human supervision. Automated vulnerability discovery, exploitation tools and self-propagating malware can operate across networks within seconds. Once activated, such capabilities may interact with digital systems in ways that exceed the ability of operators to monitor or control them in real time. When AI becomes integrated into these processes, the diffusion of agency becomes even more pronounced.

In such environments, attempts to determine legality based on subjective intention risk becoming dependent on information that may simply not exist. International law therefore increasingly relies on objective criteria such as effects, causation and foreseeability to maintain accountability.

Causation and Foreseeability in AI-Enabled Cyber Operations

Causation plays a central role in this emerging framework. International law traditionally distinguishes between factual causation and legal causation. Factual causation asks whether harm would have occurred “but for” the relevant conduct. Legal causation, by contrast, evaluates whether the connection between conduct and harm is sufficiently direct, foreseeable and normatively significant to justify responsibility.

Although Article 2(4) of the UN Charter does not explicitly refer to causation, the prohibition on the use of force presupposes a connection between state conduct and the harmful consequences produced. International jurisprudence has therefore evaluated armed attacks primarily by reference to their scale and effects, rather than the weapon used or the subjective intention of the acting state (see Nicaragua v. United States of America (1986)). Contemporary discussions of cyber operations adopt a similar logic, assessing whether the consequences of a cyber activity are comparable to those produced by conventional military force.

AI complicates this causal analysis, but it does not eliminate the requirement to establish a legally relevant connection between state conduct and harmful consequences. Autonomous cyber capabilities may behave unpredictably, propagate beyond intended targets or interact with digital environments in ways that operators did not anticipate. Machine-learning systems may also be vulnerable to adversarial manipulation techniques such as data poisoning, evasion attacks or model exploitation. These vulnerabilities can introduce additional layers of complexity into the causal chain between a cyber operation and its consequences.
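The mechanics of one such technique, data poisoning, can be sketched in a few lines. The example below is a deliberately crude label-flipping attack on a toy classifier (all names and numbers are assumptions of the illustration); real-world poisoning of deployed systems is subtler, but the causal structure is the same: the adversary’s interference surfaces only later, through the model’s degraded behaviour.

```python
# Illustrative label-flipping "data poisoning" sketch (toy example only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_train, y_train = X[:2000], y[:2000].copy()
X_test, y_test = X[2000:], y[2000:]

# Model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean test accuracy:   ", clean.score(X_test, y_test))

# An adversary with access to the training pipeline flips 30% of the
# labels before the next (re)training run.
poisoned_idx = rng.choice(len(y_train), size=600, replace=False)
y_train[poisoned_idx] = 1 - y_train[poisoned_idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("poisoned test accuracy:", poisoned.score(X_test, y_test))
# The harm originates in the adversary's interference but manifests only
# through the system's later outputs, adding a link to the causal chain.
```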

Yet complexity does not negate responsibility. Instead, it reinforces the importance of reasonable foreseeability as a legal criterion. When states deploy AI-enabled cyber capabilities, they knowingly introduce systems into an interconnected digital ecosystem where unintended interactions are possible. Certain risks therefore become objectively foreseeable.

The Stuxnet operation illustrates this dynamic. Although the malware was reportedly designed to target specific industrial control systems, it ultimately spread beyond its original operational environment. Such propagation risks are inherent in certain forms of cyber capability and therefore fall within the realm of foreseeable consequences.

From a legal perspective, this means that responsibility cannot be limited solely to intended outcomes. If a state deploys an AI-enabled cyber capability without adequate testing, safeguards or supervision, harmful consequences arising from predictable system behaviour should be treated as foreseeable effects of that deployment.

The same reasoning applies when adversarial actors exploit known vulnerabilities in machine-learning systems. Techniques such as data poisoning or evasion attacks are well documented within the cybersecurity community. Failure to anticipate and mitigate these risks cannot easily be characterised as an unforeseeable intervening act. Only highly sophisticated or unprecedented forms of manipulation – outside the range of reasonable anticipation – might interrupt the causal chain.

Accountability and the Persistence of State Responsibility

AI also raises broader concerns about accountability and sovereignty in cyberspace. AI-enabled cyber capabilities are often developed by distributed networks of engineers, researchers and contractors. This complexity can make it difficult to identify individual actors responsible for specific technical decisions within the system.

However, the diffusion of technical responsibility does not eliminate state responsibility under international law. Under the 2001 Draft Articles on Responsibility of States for Internationally Wrongful Acts, the conduct of state organs, including the armed forces and other governmental entities, is attributable to the state even when those actors exceed their authority or act contrary to instructions (see, for instance, Article 7 on excess of authority or contravention of instructions). When a state chooses to develop, acquire or deploy AI-enabled cyber capabilities, it assumes responsibility for the reasonably foreseeable consequences of that decision.

Viewed in this light, AI does not create a legal vacuum. Instead, it highlights the importance of objective standards that ensure accountability even when technological systems become increasingly complex. International law has long relied on consequences-based reasoning in areas such as the prohibition on the use of force and the law of state responsibility. AI simply exposes the limitations of relying on subjective mental states in technologically opaque environments.

The practical implication is that states must exercise heightened caution when deploying AI-enabled cyber capabilities. For instance, adequate testing, robust cybersecurity design and meaningful human oversight become essential safeguards. By focusing on whether harmful outcomes were reasonably foreseeable at the time of deployment, international law can maintain its protective function without requiring unrealistic insight into the internal workings of algorithmic systems.

AI will undoubtedly continue to reshape cyber conflict. Yet the core principle remains unchanged: states bear responsibility for the consequences of their actions in cyberspace. Rather than undermining international law, the rise of AI reinforces the importance of objective standards – effects, causation and foreseeability – that ensure accountability in an increasingly automated world.

Photo credit: Wikipedia