Unintelligible – if one word could describe the hesitation surrounding the use of artificial intelligence (AI) in states’ affairs, it would be this. The hesitation hinges on a fear of the unknown, as AI seems inexplicable: either the system cannot explain how it arrived at its conclusions, or a layperson cannot understand its explanations.¹
Automatisation of Justice
The problem has even entered the legal world, as some imagine a future with robot judges hastening the administration of justice.² Studies using natural language processing and machine learning have analysed hundreds of European Court of Human Rights cases to predict their outcomes, finding that certain topics are predictive.³ This suggests that AI can anticipate the outcome of court proceedings when certain topics are involved, and that it might therefore be able to take over the role of human judges in such matters.
It is doubtful, however, whether this can meet the requirements of the law.⁴ That the presence of a few words can affect the judgment has been criticized.⁵ Considering that the studies found the ‘facts of a case’ to be the ‘most important predictive factor’,⁶ one might also wonder what pattern of reasoning AI technologies would resort to when faced with unprecedented circumstances in a novel case.
Recommendation on the Ethics of AI
Member states of the United Nations Educational, Scientific, and Cultural Organization are set to deal with this conundrum in November 2021.⁷ On the agenda of the General Conference is the adoption of a Recommendation on the ethics of AI.⁸ The draft introduces a principle of explainability⁹ described as one of the ‘essential preconditions to ensure the respect, protection, and promotion of human rights, fundamental freedoms, and ethical principles’.¹⁰ Unlike some other principles included in the document (e.g., proportionality), this concept has not been established in international law.
Principle of Explainability
Dissecting its proposed definition, the principle deals with two aspects:
- The process of understanding results produced through the AI system (‘making intelligible and providing insight into the outcome of AI systems’); and
- The quality of relevant components of the AI system (‘the understandability of the input, output, and the functioning of each algorithmic building block and how it contributes to the outcome of the systems’).¹¹
The former pertains to actions required of relevant actors, while the latter refers to a product expected from AI developers. That second aspect is reminiscent of a basic systems analysis and design concept called the Input-Process-Output (IPO) model. As the name implies, the model asks three questions:
- What inputs should be given to the system?
- What processes should these inputs go through in the system?
- What outputs should the system produce from these inputs and processes?
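The three questions above can be made concrete with a minimal sketch. The names, topics, and threshold below are hypothetical illustrations, not anything drawn from the Recommendation or the cited studies; the point is only that separating the input, process, and output stages is what makes each component inspectable on its own:

```python
# A hypothetical sketch of the Input-Process-Output (IPO) model applied to a
# toy outcome predictor. Each stage is a separate, named function, so the
# understandability of input, processing, and output can be examined
# independently, as the draft Recommendation's definition contemplates.

def gather_input(raw_facts):
    """Input stage: normalise the raw case facts into a list of lowercase words."""
    return [word.lower() for word in raw_facts.split()]

def process(words, predictive_topics):
    """Process stage: count how many words match a set of predictive topics."""
    return sum(1 for word in words if word in predictive_topics)

def produce_output(match_count, threshold=2):
    """Output stage: turn the raw score into a human-readable result."""
    return "violation likely" if match_count >= threshold else "no violation likely"

# Usage: because the stages are explicit, one can trace exactly how the
# outcome was reached from the inputs.
facts = "Detention conditions and prison treatment were challenged"
topics = {"detention", "prison", "treatment"}
result = produce_output(process(gather_input(facts), topics))
# result == "violation likely" (3 matching topic words, threshold of 2)
```

A monolithic model that maps facts straight to a verdict would answer none of the three IPO questions; the value of the decomposition is precisely that each stage can be questioned and audited separately.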
The understandability of these components is addressed in the proposed definition. This is a crucial bridge between the legal discipline and computer science: it narrows down the components which experts from both fields need to examine to determine whether an AI system is sufficiently explainable.
Identification of Non-Negotiables
Moreover, by highlighting the issues raised in the IPO model, the proposal could encourage the creation of default system designs for specific kinds of AI systems to ensure their compliance with ethical standards. It has been said before that it is ideal to begin with designs before coding.¹² The proposal takes a step in that direction. It marks a noticeable shift in the discussion, away from the usual fear of robots replacing humans or of probabilities and statistics substituting for reason. Instead, it gives attention to the need to agree on non-negotiables. For example, in the context of the automatisation of justice, this could be an agreement on what constitutes the ‘fundamental features of legal decision-making’.¹³
Nevertheless, legal practitioners should be circumspect in assessing the formulation of this part of the Recommendation. After all, it is meant to shape the practice of present and future stakeholders (e.g., governments, private sector companies).¹⁴ Once it is settled, even more technical issues¹⁵ could then be addressed.
¹ Maxi Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open?’ (2019) 36 Journal of International Arbitration 539, 563–564.
² ‘Artificial Intelligence: Examples of Ethical Dilemmas’ (UNESCO) <https://en.unesco.org/artificial-intelligence/ethics/cases#law> accessed 17 August 2021.
³ Scherer (n 1) 563–564; Nikolaos Aletras and others, ‘Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective’ PeerJ Computer Science 1 <https://peerj.com/articles/cs-93/> accessed 21 August 2021.
⁴ Scherer (n 1) 563–564.
⁶ Aletras and others (n 3) 1.
⁷ ‘Intergovernmental Meeting Related to the Draft Recommendation on the Ethics of Artificial Intelligence’ (UNESCO) <https://events.unesco.org/event?id=1736064082> accessed 17 August 2021; ‘Intergovernmental Meeting Related to the Draft Recommendation on the Ethics of Artificial Intelligence’ (UNESCO) <https://events.unesco.org/event?id=515530304> accessed 17 August 2021.
⁸ ‘Intergovernmental Meeting Related to the Draft Recommendation on the Ethics of Artificial Intelligence’ (n 7).
⁹ Paragraphs 37–41, Part III.2, Draft Recommendation.
¹⁰ Paragraph 37, Part III.2, Draft Recommendation.
¹¹ Paragraph 40, Part III.2, Draft Recommendation.
¹² Kevin D Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press 2017) 36–37.
¹³ Scherer (n 1) 562.
¹⁴ Paragraph 47, Part III.2, Draft Recommendation.
¹⁵ For example, should the suggestion to resort to cognitive computing, i.e., humans and machines working together, be considered? (See Ashley (n 12) 12–13; Scherer (n 1) 573.)