Shreya Shankar is a PhD researcher at Hamburg University studying the intersection of law and psychology to investigate compliance-related decisions during armed conflict.


Introduction

The Office of the Prosecutor (OTP) at the International Criminal Court (ICC) has embraced rapidly developing technological trends by announcing that all Article 15 communications will be mediated through OTPLink, a platform that processes the data received using artificial intelligence (AI) and machine learning. Article 15 of the Rome Statute allows individuals, non-governmental and intergovernmental organisations, and states to submit evidence for the OTP’s consideration to begin a proprio motu investigation. It is therefore instrumental in allowing the voices of less powerful actors, such as individual victims, to be reflected in the selection of cases at the ICC. How Article 15 communications are processed consequently has a direct impact on the relative power that stakeholders hold in the functioning of the OTP and, by extension, the ICC.

This blog post will argue that processing Article 15 communications using AI programmes causes algorithmic anxiety among the individuals, non-governmental and intergovernmental organisations, and states initiating these communications. It will first argue that algorithmic anxiety begets ‘folk beliefs’ about what type of communication gets the attention of the OTP, which in turn shapes the kinds of communications submitted, both in content and in presentation style. Second, it will argue that this anxiety leads to a form of epistemic injustice, whereby stakeholders feel compelled to alter both whether they make a communication at all and what that communication contains, owing to a lack of trust in the algorithm. Together, these outcomes skew the power dynamic between the OTP and other stakeholders.

Impact of the use of AI in Assessing Article 15 Communications

Before highlighting the impact of the use of artificial intelligence within the OTP, it is useful to consider the duties of the OTP under Article 15 and the considerations it is mandated to weigh when making decisions regarding referrals. Article 15(1) of the Rome Statute allows the OTP to initiate investigations of its own accord on the basis of ‘information received’, subject to the jurisdictional requirements of Article 12.

Article 15 delineates the broad contours of the Prosecutor’s duties in relation to the communications received. These include the responsibilities to ‘analyse the seriousness of the information’, to decide whether a ‘reasonable basis’ exists, and to request authorisation from the Pre-Trial Chamber to proceed with an investigation. Article 15 uses the word ‘shall’ in relation to the analysis to be performed on the information received, which, according to the Case Matrix commentary on the Rome Statute, implies an obligation to ‘analyse’ all communications received. This duty to ‘analyse’ is further consolidated within the OTP regulations. Furthermore, the OTP ‘may’ publish the results of its investigations if there is no prior confidentiality requirement.

In 2023, the OTP began using AI and machine learning to conduct the aforementioned process of assessing evidence. Several scholars have pointed out issues that may arise from using AI to deal with Article 15 communications, including the perpetuation of biases and the lack of appropriate consideration during decision-making. This blog post, however, analyses a previously unidentified problem arising from the use of AI at the ICC, namely algorithmic anxiety.

Anxiety generally refers to a heightened emotional state that is often not directed at a specific source. In other words, unlike fear or worry, anxiety may not always be linked to a definable cause. Algorithmic anxiety refers to this feeling of stress when interacting with an algorithm. It stems from the fact that an algorithm’s functioning is, to its users, necessarily unknowable, as exemplified by the black box problem: it is impossible to fully comprehend exactly how an AI system prioritises certain variables over others, or to trace its decision-making process. This blog post focuses on the experience of the creators of the information that is fed into the algorithm.

An Unforgiving Master: Algorithmic Anxiety and Changing Beliefs and Behaviour

According to Chayka, algorithmic anxiety originates in an individual perceiving the algorithm as a ‘boss’ or manager. This perception may then result in changes in behaviour based on (potentially inaccurate) expectations of what the algorithm will prioritise. Algorithmic anxiety has been studied specifically in the context of social media content creators, artists and Airbnb hosts; in each of these cases, the individual experiencing it depends on the algorithm for success or income. The insights generated by this research may nonetheless inform an understanding of how Article 15 communicators perceive their interaction with the ICC: by shaping the quality of the cases presented to the court, algorithmic anxiety may ultimately affect its perceived legitimacy.

Developing ‘Folk Beliefs’, Changing Behaviours

The first observable behavioural change arising from algorithmic anxiety was noted by Jhaver et al. in relation to Airbnb hosts: the development of ‘folk beliefs’. Folk beliefs are unfounded beliefs that certain, often seemingly illogical, interactions with the AI will lead it to prioritise a particular creator’s input or otherwise turn the tide of its decision-making. One example is the belief among some Airbnb hosts that the algorithm prioritises listings whose creators refresh or open their booking page multiple times; this led hosts to open their listings repeatedly throughout the day in order to encourage the algorithm to rank them higher. While this behaviour itself appears relatively harmless (though it may take a significant toll on the host’s mental health), not all such behavioural changes are as benign. For instance, several Airbnb hosts listed their spaces as child-safe when they were not, in the erroneous belief that the algorithm prioritised child-safe places. Such a change may leave children staying in these properties without the safety their guardians expect.

This change in behaviour, while problematic in itself, becomes a more pressing issue in the context of communications to the ICC. Individuals, non-state actors and states may feel compelled to change the framing of their communications based on erroneous assumptions about what the AI might prioritise. Firstly, communicators may foreground quantitative information at the expense of qualitatively relevant parts of the communication. Secondly, they may overuse supposed ‘keywords’ drawn from the Rome Statute and other judgements of the ICC, much as a job applicant might pad a cover letter with terms from the vacancy notice; such overuse can render the communication inauthentic and sometimes even untruthful. Lastly, individual communicators may hesitate to submit a communication unless it is backed by a reputable NGO or other organised actor. These possibilities have not yet been subject to quantitative research; they nonetheless represent plausible behavioural and cognitive changes.

Being compelled to grapple with algorithmic anxiety while submitting a particular piece of evidence results in a form of self-censorship: the knowledge an entity possesses is truncated to fit the perceived requirements of an algorithm whose actual requirements are never published. This censorship affects the information available to the OTP and ultimately has the potential to change the kind of evidence the ICC deals with.

Black Box at the OTP: Algorithmic Anxiety and Epistemic Injustice 

In assessing Article 15 communications, the OTP necessarily makes several subjective judgements: on the seriousness of the evidence, on whether the ‘interests of justice’ are fulfilled, and on whether the evidence comes from a reliable source and is credible. This subjectivity is particularly problematic when exercised wholly or partly through an AI tool. Since it is impossible to define precisely how the AI will assess the data derived from the communications, it is impossible to apply ‘standard methodological practices’, as required by the OTP Policy Paper and Article 54 of the Rome Statute. It follows that the method used to assess Article 15 communications cannot be made public, since it cannot even be completely defined by the OTP.

Given that the OTP is, in most cases, vastly more powerful than the entities that submit communications to it, at least with respect to triggering a case at the ICC, the guiding principles governing the OTP’s discretionary powers must be adhered to closely. Yet, on the above argument, the use of AI-based algorithms may violate the principles of objectivity and transparency. Combined with the algorithmic anxiety inherent in interactions with algorithmic systems, this lack of safeguards significantly widens the power differential between communicators and the OTP in presenting narratives before the ICC. This matters all the more because the communicators are, in this instance, often providing evidence of their own stories. Using AI to mediate this interaction may disenfranchise them from their own narratives, owing to the compulsion to reshape those narratives to make them palatable to the AI.

This leads to two distinct harms. Firstly, it produces a form of epistemic injustice in which information is withheld from the individuals who need it to decide how to interact with these institutions. Secondly, as noted above, the algorithmic anxiety stemming from this lack of information may lead to behavioural changes: altering the kinds of communications submitted as well as the words used to describe the evidence. This self-censorship may itself be considered a further form of epistemic injustice.

Conclusion

It is easy to see AI as a panacea for issues such as delays in proceedings, the cost of analysing large data sets, and inconsistent decision-making. Indeed, the use of AI may produce real improvements in the efficiency of data processing at the OTP. At the same time, it is important to consider the leviathan influence that algorithms exert over the way people and groups interact when that interaction is mediated by AI. If social media is any indication, entities readily learn to present data in a way that fits the perceived needs of the algorithm while being entirely inauthentic. This inauthenticity and anxiety can have a wide range of consequences in the context of international criminal law, from the generation of false information to dissatisfaction with the process of obtaining justice. Algorithmic anxiety, in particular, may detract from the perceived legitimacy of proceedings, since it often leads to the self-censorship of evidence and information. The use of AI within these court systems must therefore be tempered with caution regarding its effects on the human psyche and behaviour in order to build trust in the system.