The variable geometry of AI governance

by Roxana Radu | Mar 12, 2024 | Digital, Experts, Governance

Global tensions, important elections and the growing call for checks and balances on powerful industry players will shape the context for AI governance in 2024. Roxana Radu argues for the importance of finding a common vocabulary, protecting non-digital knowledge and nurturing the voice of academia.

The current state of affairs 

The development of artificial intelligence (AI) dates back to the 1950s, but it has never been as popular as it is today. While the historical breakthroughs were in “narrow intelligence” models, today we see multi-modal AI drawing on a wide range of sources and inputs to improve general-purpose technology. Deep learning, based on neural networks and pattern recognition, became increasingly successful after natural language processing integrated the transformer architecture in 2017, allowing large language models to advance very quickly. Generative AI tools, such as OpenAI’s ChatGPT launched in November 2022, are seeing mainstream adoption across the public and private sectors.

“Given the growing geopolitical division, AI is developing in an era of uncertainty.”

Roxana Radu

Today’s AI is not artificial general intelligence: machines do not have their own form of “superintelligence”. As a general-purpose technology, however, AI is used for a wide range of purposes, from industrial applications to large language models that generate new text, images or videos. The fast developments in machine learning over the last five years show the great potential of this powerful technology, as well as its capacity for disastrous consequences. The future of AI will be built on models that move seamlessly from facial to emotional recognition, incorporating more data systems than ever before. These advances mostly come from private research labs, generally funded by gatekeeper companies such as Microsoft, Google or Meta in Silicon Valley, or Alibaba, Baidu or Tencent in China. These companies have invested significantly in both data collection and computing power over the last decade.

Given the growing geopolitical division, AI is developing in an era of uncertainty. First, 2024 will be a record year for the number of elections held around the world, bringing more than 2 billion people to the polls, including in the United States, India, Mexico and South Africa. With AI governance as a top policy priority, changes in these administrations will impact global developments in the field. Second, in the context of the technological competition between the US and China, Taiwan is an essential source in the global supply chain of advanced graphics processing units (GPUs), which are central to the development of AI models. There could be escalatory scenarios in which the US and China take measures and countermeasures against one another, while their allies weigh the best possible alignment. Third, the search for checks and balances to counter the unprecedented power of a few players in the AI industry will define regional and national governance arrangements. Contestation will emerge from these three undercurrents shaping the development of AI.

The battle for influence over AI’s global rulebook

The international community acknowledges the risks and complexities of governing AI. Many risks are amplified by the use of AI, ranging from relatively narrow threats to broader societal harms. At one end of the spectrum, AI systems produce biased or erroneous inferences, or even “hallucinations” that present false or misleading information as fact. At the other end, they trigger structural and societal changes via job losses and educational or environmental impacts. While recent discussions tend to focus on existential risks and the potential loss of human control, the everyday harms engendered by AI use need full consideration. A growing disconnection between AI and digital inclusion highlights that many countries and populations are already suffering from both digital divide(s) and digital poverty.

We are witnessing various efforts to govern AI. First, at the multilateral level, several processes have started: the Council of Europe has been negotiating an international legally binding instrument on the development, design and application of AI systems. Meanwhile, the UN Secretary-General has appointed a High-level Advisory Body on AI tasked with mapping the governance landscape and proposing policy and institutional options ahead of the Summit of the Future in September 2024. The first global AI Safety Summit, convened by the UK in November 2023, built momentum for bridging the gap in AI safety testing.

Second, discussions on regulation are taking place at the national level. Alongside the EU’s ongoing work on its AI Act, China’s internal attempts to provide regulatory mechanisms and the more recent US Executive Order on AI, more and more governments in Latin America, Africa and Asia are discussing AI bills. What is clear is that governing and regulating AI cannot be a quick fix: it is a long-term process for a general-purpose technology that is already here, one that must allow sufficient flexibility to accommodate constant advances. Many countries are still figuring out their position in these regulatory debates.

Despite these initiatives, key questions dominating the global AI discussions remain unanswered: what future-proof solutions can be designed at the governance level? What are the dangers of a fragmented governance system? How could a more coherent regime be optimised for the public interest? Are we ready to develop governance ecosystems that are centred on the well-being of humanity and remain mission-driven, rather than reactive ones that merely patch current problems?

The role of International Geneva

“Most organisations seem to work in their silos, which makes it difficult to speak with a strong voice in global AI conversations.”

Roxana Radu

Geneva can play a significant role in AI governance. With a critical mass of international organisations and NGOs, Geneva is a global hub for both technical standardisation and international law, in particular humanitarian and human rights protections. Moving forward, more deliberate efforts are needed for these established fields to inform AI developments. The humanitarian ecosystem is being rapidly transformed by AI, and the data of an unprecedented number of civilians is now being handed over to machines, with long-term but as yet unknown consequences.

Yet most organisations seem to work in silos, which makes it difficult to speak with a strong voice in global AI conversations. The majority of international institutions based in Geneva have already embarked on the AI journey, either by applying AI tools internally or by contributing to or building AI-enhanced instruments for global benefit. Directly and indirectly, Geneva is part of how consequential AI decisions are taken.

Additionally, active multi-stakeholder networks based in and around Geneva remain at the forefront of the “AI for good” movement. Yet the meaning of ‘good’ is not always clear, nor commonly shared by all stakeholders. Previous attempts to draft ethical principles for AI have shown their limitations, as they do not travel globally. In this light, two pathways are worth pursuing: first, agreeing on the cornerstones that different stakeholders value and using the convening power of International Geneva to find a common vocabulary. Second, International Geneva can facilitate discussions about mechanisms for the protection of non-digital knowledge from AI-related disruptions.

Finally, academia remains an underrepresented stakeholder in AI governance conversations. While some universities advance the field of AI, there is an imbalance in academics’ access to discussion venues, one that can be minimised in International Geneva. A growing number of governments have announced public funding and pooling of resources for national supercomputers (a recent example is the Swiss National Supercomputing Centre), aimed at narrowing the gap between private and public expertise. Future discussions should not only integrate the evidence and views provided by academic institutions but also actively inform future research on the societal transformations engendered by AI. The time is now.

By Roxana Radu for the Geneva Policy Outlook 2024
Associate Professor of Digital Technologies and Public Policy at the University of Oxford’s Blavatnik School of Government
Guest Speaker of the Executive Diploma in Diplomacy, Negotiation and Policy



