The Explainability of Artificial Intelligence in International Law

Nov 17, 2021 | Alumni, International Law

Unintelligible – if there is one word that could describe the hesitation surrounding the use of artificial intelligence (AI) in states’ affairs, it is this. The hesitation stems from a fear of the unknown, as AI seems inexplicable: either it cannot explain how it arrived at its conclusions, or a layperson cannot understand the explanations it gives.¹

Automatisation of Justice

The problem has even entered the legal world, as some imagine a future with robot judges hastening the administration of justice.² Studies using Natural Language Processing and Machine Learning have been conducted on hundreds of European Court of Human Rights cases to predict their results, and they found that certain topics are predictive.³ This suggests that AI can anticipate the outcome of court proceedings when certain topics are involved, and that it might therefore be able to take over the role of human judges in these matters.
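For readers unfamiliar with how such predictive studies work, the sketch below is a minimal illustration only, not the pipeline of the studies cited: it trains a simple text classifier on short case descriptions labelled with their outcomes and then predicts the outcome of a new case. The toy data, labels, and the choice of an n-gram/support-vector classifier are assumptions made for illustration.

```python
# Minimal, illustrative sketch of outcome prediction from case text.
# The toy cases, labels, and model choice are assumptions for illustration;
# they do not reproduce the cited studies' data or methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy "facts of the case" summaries paired with outcomes (1 = violation found).
cases = [
    "applicant detained without judicial review for several months",
    "prison conditions described as overcrowded and unsanitary",
    "domestic courts heard the applicant and gave reasoned judgments",
    "applicant received a fair hearing before an independent tribunal",
]
outcomes = [1, 1, 0, 0]

# Word n-gram features feeding a linear support vector classifier,
# broadly the family of models used in early judgment-prediction research.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, outcomes)

new_case = "applicant held in custody with no access to a judge"
print(model.predict([new_case]))  # e.g. [1] -> violation predicted
```

Such a classifier learns only word patterns correlated with past outcomes; it offers no legal reasoning of its own.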

It is doubted, however, whether this can meet the requirements of the law.⁴ Critics point out that the mere presence of a few words can affect the judgment.⁵ Considering that the studies found the ‘facts of a case’ to be the ‘most important predictive factor’,⁶ one might also wonder what pattern of reasoning AI technologies would resort to when faced with unprecedented circumstances in a novel case.

Recommendation on the Ethics of AI

Member states of the United Nations Educational, Scientific and Cultural Organization (UNESCO) are set to deal with this conundrum in November 2021.⁷ On the agenda of the General Conference is the issuance of a Recommendation on the ethics of AI.⁸ The draft introduces a principle of explainability⁹ that is described as one of the ‘essential preconditions to ensure the respect, protection, and promotion of human rights, fundamental freedoms, and ethical principles’.¹⁰ Unlike some other principles included in the document (e.g., proportionality), this is a concept that has not been established in international law.

Principle of Explainability

Dissecting its proposed definition, the principle deals with two aspects:

  1. The process of understanding results produced through the AI system (‘making intelligible and providing insight into the outcome of AI systems’); and
  2. The quality of relevant components of the AI system (‘the understandability of the input, output, and the functioning of each algorithmic building block and how it contributes to the outcome of the systems’).¹¹

The former pertains to actions required of relevant actors, while the latter refers to a product expected from AI developers. This last part is reminiscent of a basic systems analysis and design concept, the Input-Process-Output (IPO) model. As the name implies, the model looks at three issues:

  1. What inputs should be given to the system?
  2. What processes should these inputs go through in the system?
  3. What outputs should the system produce from these inputs and processes?

The understandability of these components is addressed in the proposed definition. This is a crucial bridge between the legal discipline and computer science: it narrows down the components which experts from both fields need to examine to determine whether an AI system is sufficiently explainable.
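To make that bridge concrete, the hypothetical sketch below decomposes a small decision-support system along the IPO model, with each stage documented so that its contribution to the outcome can be inspected. Every name, feature, and scoring rule here is invented for illustration; it is not drawn from the Draft Recommendation or from any real system.

```python
# Hypothetical IPO decomposition of a small decision-support system.
# All names and rules are invented for illustration; the point is only that
# each stage can be separately documented and inspected for understandability.

def gather_input(raw_case: dict) -> dict:
    """INPUT: which facts enter the system, and in what form."""
    return {
        "detention_days": raw_case.get("detention_days", 0),
        "judicial_review": raw_case.get("judicial_review", False),
    }

def process(features: dict) -> float:
    """PROCESS: how the inputs are combined into a score (a toy rule)."""
    score = 0.0
    if features["detention_days"] > 90:
        score += 0.6          # long detention weighs towards a violation
    if not features["judicial_review"]:
        score += 0.4          # lack of review weighs towards a violation
    return score

def produce_output(score: float) -> dict:
    """OUTPUT: what the system returns and how it should be read."""
    return {"violation_likely": score >= 0.5, "score": round(score, 2)}

case = {"detention_days": 120, "judicial_review": False}
print(produce_output(process(gather_input(case))))
# {'violation_likely': True, 'score': 1.0}
```

Laid out this way, a lawyer and an engineer can ask the same three questions of the same three stages, which is precisely the narrowing effect the proposed definition aims at.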

Identification of Non-Negotiables

Moreover, by highlighting the issues raised in the IPO model, the proposal could encourage the creation of default system designs for specific kinds of AI systems to ensure their compliance with ethical standards. It has been said before that it is ideal to begin with designs before coding.¹² The proposal takes a step in that direction. It marks a noticeable shift in the discussion away from the usual fear of robots replacing humans, or of probabilities and statistics substituting for reason. Instead, it draws attention to the need to agree on non-negotiables. For example, in the context of the automatisation of justice, this could be an agreement on what constitutes the ‘fundamental features of legal decision-making’.¹³

Nevertheless, legal practitioners should be circumspect in assessing the formulation of this part of the Recommendation. After all, it is meant to shape the practice of present and future stakeholders (e.g., governments, private sector companies).¹⁴ Once settled, even more technical issues¹⁵ could then be faced.

Marie Anne Cyra H. Uy, LL.M.’21
Assistant to Mr. Nguyễn Hồng Thao, International Law Commission
Alumna of the LL.M. in International Law 2021, Graduate Institute Geneva
marie.uy@graduateinstitute.ch


¹ Maxi Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open?’ (2019) 36 Journal of International Arbitration 539, 563–564.

² ‘Artificial Intelligence: Examples of Ethical Dilemmas’ (UNESCO) <https://en.unesco.org/artificial-intelligence/ethics/cases#law> accessed 17 August 2021.

³ Scherer (n 1) 563–564; Nikolaos Aletras and others, ‘Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective’ [2016] PeerJ Computer Science 1 <https://peerj.com/articles/cs-93/> accessed 21 August 2021.

⁴ Scherer (n 1) 563–564.

⁵ ibid.

⁶ Aletras and others (n 3) 1.

⁷ ‘Intergovernmental Meeting Related to the Draft Recommendation on the Ethics of Artificial Intelligence’ (UNESCO) <https://events.unesco.org/event?id=1736064082> accessed 17 August 2021; ‘Intergovernmental Meeting Related to the Draft Recommendation on the Ethics of Artificial Intelligence’ (UNESCO) <https://events.unesco.org/event?id=515530304> accessed 17 August 2021.

⁸ ibid.

⁹ Paragraphs 37–41, Part III.2, Draft Recommendation.

¹⁰ Paragraph 37, Part III.2, Draft Recommendation.

¹¹ Paragraph 40, Part III.2, Draft Recommendation.

¹² Kevin D Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press 2017) 36–37.

¹³ Scherer (n 1) 562.

¹⁴ Paragraph 47, Part III.2, Draft Recommendation.

¹⁵ For example, should the suggestion to resort to cognitive computing, i.e., humans and machines working together, be considered? (See Ashley (n 12) 12–13; Scherer (n 1) 573.)
