AI Diplomacy: what vision for the future of multilateralism?

Oct 3, 2023 | Digital, Diplomacy, Main article

The integration of AI in diplomacy brings with it exciting prospects but also significant risks that could compromise the very essence of diplomatic practice, writes the Graduate Institute’s Dr Jérôme Duberry.

From the corridors of the Palais des Nations in Geneva to bustling embassies around the world, diplomats are increasingly relying on artificial intelligence (AI) to implement foreign policy, as a growing range of services and routine operations opens up to the technology.

AI is an umbrella term that refers to several technologies. There are many definitions, but for simplicity, we’ll refer to it here as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”

AI holds significant potential in various government sectors such as education, transportation, telecommunications, data security and management, finance, healthcare, law and justice, to name a few. We will focus here on diplomacy.

One particular area, consular services, stands out as low-hanging fruit for AI integration: these services are governed by clear and stable procedures, eliminating the need to treat each decision as a new challenge, except in crisis situations. Chatbots and virtual assistants now commonly support visa applications, consular registrations, and legal aid for refugees. AI is also increasingly used by governments and public institutions to protect against cyberattacks. AI not only automates routine operations but can also make decisions in place of humans by applying different types of data analytics.

First, descriptive analytics provides a comprehensive overview of the current situation, offering insights into prevailing trends and patterns. For instance, AI can help diplomats monitor, in real time, a large array of foreign media in many languages to flag emerging risks and political changes. Data visualization is a natural fit for communicating descriptive analysis. For instance, the Crisis Risk Dashboard developed by the United Nations Development Programme is a tool for data aggregation and visualization to support contextual risk analysis.
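
To make this concrete, here is a minimal illustrative sketch in Python of what descriptive monitoring can look like at its simplest: counting topic mentions across a day’s news coverage and flagging topics whose coverage exceeds a baseline. The articles, topics, and baseline figures are invented purely for illustration and bear no relation to any actual ministry’s tooling or to the UNDP dashboard.

```python
# Minimal sketch of descriptive analytics for media monitoring.
# The articles, topics, and baseline below are invented for illustration;
# a real system would ingest multilingual news feeds and translated keywords.
from collections import Counter

articles = [
    "Protests continue near the border crossing",
    "New trade agreement signed after border talks",
    "Border tensions rise as protests spread to the capital",
]
topics = ["border", "protest", "trade", "election"]

# Count how often each topic is mentioned across the day's coverage.
counts = Counter()
for text in articles:
    lowered = text.lower()
    for topic in topics:
        if topic in lowered:
            counts[topic] += 1

# Flag topics whose coverage exceeds a simple baseline, to surface emerging risks.
baseline = {"border": 1, "protest": 0, "trade": 1, "election": 0}
for topic, count in counts.items():
    if count > baseline.get(topic, 0):
        print(f"Rising coverage of '{topic}': {count} mentions today")
```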

A second type, diagnostic analytics, delves deeper to probe the reasons behind observed phenomena. AI-based sentiment analysis tools are becoming invaluable assets for diplomats, enabling them to gauge prevailing perceptions of national policies and to identify which narratives are gaining traction. The analysis of external signals can be coupled with diplomatic documents, ranging from cables sent by embassies to media summaries and intelligence briefings. This is particularly useful for diplomats seeking to assess the strength and validity of signals received during crises.
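
As a rough illustration of the idea behind such tools, the sketch below scores short texts against a hand-made word list. This lexicon approach is deliberately simplistic and hypothetical; production sentiment-analysis systems typically rely on multilingual machine-learning models trained on large corpora.

```python
# Toy sketch of lexicon-based sentiment scoring, assuming a hand-made word list.
# Words, texts, and scores are illustrative only.
POSITIVE = {"welcome", "support", "constructive", "praise"}
NEGATIVE = {"condemn", "criticise", "hostile", "concern"}

def sentiment_score(text: str) -> int:
    """Return a crude score: each positive word counts +1, each negative word -1."""
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

summaries = [
    "Local press continues to praise the new visa policy",
    "Opposition leaders condemn the agreement as hostile to farmers",
]
for summary in summaries:
    print(sentiment_score(summary), summary)
```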

Building on this, predictive analytics helps diplomats identify future trends and anticipate social or political tensions in host countries. This can be instrumental in helping them to better allocate their attention and resources and more effectively counter disinformation and malign influence campaigns. 

Finally, prescriptive analytics actively recommends specific courses of action according to a set of criteria. AI-assisted scenario generation for strategic planning and Monte Carlo simulation (a mathematical technique that predicts possible outcomes of an uncertain event) offer decision-makers a range of probability-ranked outcomes for multiple courses of action. 
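
The sketch below illustrates the Monte Carlo principle in a few lines of Python: repeatedly sample uncertain inputs and compare how often each course of action ends favourably. The two courses of action and all probabilities are invented purely for illustration, not drawn from any real diplomatic model.

```python
# Minimal Monte Carlo sketch: repeatedly sample uncertain inputs and compare
# how often each course of action leads to a favourable outcome.
# The probabilities below are invented purely for illustration.
import random

def simulate(action: str) -> bool:
    """One simulated run; returns True if the outcome is favourable."""
    partner_cooperates = random.random() < (0.7 if action == "negotiate" else 0.4)
    domestic_support = random.random() < 0.6
    return partner_cooperates and domestic_support

N = 100_000
for action in ("negotiate", "impose_sanctions"):
    successes = sum(simulate(action) for _ in range(N))
    print(f"{action}: estimated success probability {successes / N:.2f}")
```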

Risks of AI intermediation 

In a global landscape marked by turbulence, complexity and instability, trust remains a cornerstone, and “intelligent consensus-driven decision-making processes on issues of our time” are sorely needed. The permeation of AI into diplomacy introduces new challenges that could potentially undermine this trust.

AI helps reduce the cognitive effort required in decision-making processes. But diplomacy is an art that relies on knowledge, intelligence, experience, and creativity, to name only a few. Diplomats must maintain their own competence and not fall into the trap of relying solely on AI-powered data analytics. AI can provide invaluable insights and support decision-making with reliable and digestible information. But diplomats should not become dependent on it or attach exaggerated importance and credibility to what it produces. AI does not understand the meaning of what it produces, and there is a risk of it contributing to the impoverishment of diplomatic expertise and skills. AI can assist diplomats, but it should not replace them.

Indeed, the opacity of many machine learning algorithms makes it difficult to assess how AI arrives at specific conclusions, thereby affecting accountability. This raises questions about the AI’s origins, design, and the values and interests it is designed to uphold: Is it an AI system built internally, or is it a service purchased from a private company? Is it a proprietary algorithm, or is it an open-source algorithm? Is it designed to reach consensus or to maximize the gains of one party? What worldview and values does it uphold?

Introducing a proprietary AI developed by a foreign state or a private company into diplomatic negotiations is akin to bringing an unchecked stakeholder into the room. Technology is political in the sense that it reflects the values and interests of its designers, which are both temporally and geographically localized. Its features are the result of choices and decisions made by a group of individuals, often small, predominantly white and male, and with similar IT backgrounds. What’s more, the absence of common global technical standards for AI and of a unified regulatory framework adds a layer of uncertainty, especially when multiple AI systems are involved in diplomatic processes. Lastly, AI requires access to large datasets of past diplomatic cables and memos for training. Once digitalised, this data becomes vulnerable to cyberattacks and leaks.

AI and big data require computational and financial capabilities that are limited to a small number of tech companies located in a limited number of states around the world. AI’s rapid development and its thirst for data may well exacerbate the digital divide, particularly between the data-rich global north and the data-poor global south. AI models trained on data from the global north may not be optimal for other regions and could be biased towards the values and interests of the global north. Indeed, AI may affect the balance of global power, through its potential to reshuffle winners and losers in global markets and its use in lethal autonomous weapons, to name only two drivers. AI also presents the risk of power concentration, as we see today with OpenAI and ChatGPT. The stakes are high for the tech industry right now, as the ongoing Google antitrust trial in the US shows.

The integration of AI into diplomacy is a double-edged sword. While it offers the promise of enhanced efficiency and effectiveness, it also poses significant risks that could compromise the very essence of diplomatic practice. The dependency on AI might compel diplomats to sacrifice the sanctity of diplomatic secrecy for the sake of data analytics. Today, we need to ask ourselves to what degree we wish to integrate AI into diplomacy. If diplomats increasingly delegate decisions to opaque AI systems, we may be turning diplomacy into a form of algocratic system, in which computer coded algorithms “structure, constrain, incentivise, nudge, manipulate or encourage different types of human behaviour”. Is this our vision for the future of multilateralism?

By Dr Jérôme Duberry
Managing Director of the Tech Hub
Academic Advisor of the Executive Degree in Diplomacy, Negotiation and Policy

This article was edited by and produced in partnership with Geneva Solutions.
