Balancing the yin and yang of AI for the benefit of all

Jul 17, 2023 | Digital, Experts

Technology is not neutral but shaped by the value systems and beliefs of those who build it. As the use of AI becomes more widespread, so does the need to ensure that the positives benefit us all, and the risks do not undermine our rights and freedoms.

Computer scientists Stuart Russell and Peter Norvig, in their seminal 1995 textbook on the topic, cited one definition of artificial intelligence as “the study of how to make computers do things that, at the moment, people do better”. With the launch of large language models such as GPT-4, which powers Microsoft-backed OpenAI’s ChatGPT, AI has taken centre stage in the debate about the resilience of our society and economy.

The term “artificial intelligence” was coined in 1956, and over the past seven decades the field has evolved and been integrated into ever more parts of our daily lives. As new developments accelerate, we must address the question of how to balance the harm these systems pose against the benefits they can bring.

One particular area of concern is the impact on our democratic processes. Decades of peace and reasonably well-functioning democratic institutions in Europe may have made us forget that it takes constant care and effort to nourish, defend, and protect democracy. The war in Ukraine and the rise of anti-democratic practices in some east-central European countries are stark reminders that democracy is not a given.

If left unchecked, I believe AI could call into question the social contract between governments and their citizens: the unwritten agreement, posited by the 18th-century philosopher Jean-Jacques Rousseau, under which citizens surrender some of their individual freedoms in exchange for the protection and security provided by the state.

Understanding the risks to democracy

Facing the twin demands to digitalise and optimise public action, governments are increasingly turning to AI to improve knowledge management capacity, map and predict risks, automate data collection and analysis, assist decision-making and public service offerings, and enhance civic technology platforms. But governments need to assess what level of risk they are prepared to take relative to the benefits that can accrue from developments in AI. When introducing an emerging technology into citizen-government relations, some key questions must be raised: How was the technology developed? Who processes the data? Can the technology be audited, and by whom? AI systems can already display degrees of autonomy, and in the coming years we can expect them to exercise a growing degree of agency. In this context, their design and governance are central to ensuring citizens’ trust in public institutions.

AI is not only present in governmental services. It also plays a role in other areas that are central to the resilience of the democratic model, including political communication, the distribution of disinformation and misinformation, and surveillance.

Democracy is based on the principle that citizens can make well-informed decisions because they have access to a plurality of information and can express their voice free from any form of coercion. AI plays a key role in the distribution of information: algorithms select what we see online amid an avalanche of content, often with the dual purpose of curating the content most relevant to us and keeping us online so that our attention can be commodified. This has turned social media platforms into echo chambers for extremist or sensationalist views rather than spaces for deliberation.

At the same time, the monitoring and analysis of our behaviour has made it possible to target citizens with personalised political ads, which can be automated with AI. The rise of “deepfakes”, where content is manipulated to, for example, misrepresent politicians during elections, has rightly raised concerns about the power of AI to negatively influence voter intentions.

AI is also at the heart of government surveillance systems. After 9/11, we saw the emergence of a new surveillance paradigm: instead of identifying a threat first and then collecting intelligence data about that threat, governments started to collect data in bulk and then identify potential threats within it. The original premise of the internet as a free space no longer held. Today, everything we do online is collected, stored, and sometimes processed (in reality, only a fraction of our digital trail is actually processed by any organisation, whether a private company for marketing purposes or a public intelligence entity).

Some important questions worth asking against this backdrop are:

  1. How can we ensure the decisions delegated to AI (such as allocating resources or processing citizen consultation data) contribute to strengthening trust between citizens and governments?
  2. How can we ensure those decisions are transparent and auditable?
  3. How do we explain to citizens how an AI system arrived at a decision?

Beyond this lies a bigger question: What can we do to ensure AI is deployed for the public good and does not deepen the inequities and disinformation already present in our society?

Four ways to protect democracy in the age of AI

1. Focus on transparency and accountability

It is crucial to prioritise transparency and accountability in AI systems. This can be achieved through mechanisms such as algorithmic auditing and explainability. Governments and organisations should conduct regular risk assessments, as proposed in the European Union’s AI Act, to evaluate the transparency, fairness, accuracy, and bias of AI algorithms. The development and deployment of AI should be accompanied by clear guidelines and regulations that ensure accountability for any discriminatory or harmful outcomes. Additionally, making AI systems and their decision-making processes understandable to the public can help build trust and allow for independent scrutiny.

2. Promote diversity and inclusion in AI

Promoting diversity and inclusivity in AI development is essential. This involves ensuring that AI systems are trained on diverse and representative data that accurately reflects the population. Collaboration with a broad range of stakeholders, including diverse communities, civil society organisations, and experts, can help identify and mitigate biases in AI algorithms. Furthermore, fostering a diverse workforce within the AI industry can lead to more comprehensive perspectives and reduce the likelihood of biased decision-making. A first step would be to encourage more women, and young people from other underrepresented communities, into science, technology, engineering, and math (STEM) careers. We also need to retrain AI systems to recognise and counter the biases inherent in their data. This in itself, however, is fraught with controversy: even the decision to de-bias a dataset or a technology reflects world views, principles, and values that are not universal.

3. Introduce new regulation and policy

While I don’t believe the moratorium on the training of powerful AI systems proposed by Twitter chief Elon Musk, among others, is feasible, the current pace of development poses a thorny challenge for policymakers, and regulation is not keeping up. It took Netflix 3.5 years to reach one million users; ChatGPT reportedly achieved this feat in just five days. Such rapid adoption leaves no time for “policy sandboxing”, where technology developers and regulators test a new technology in a safe space, allowing regulators to adapt the regulatory framework where needed before launch. This pre-launch experimentation phase helps reduce the potential harm of a technology to populations and institutions, which speaks to a key mandate of the state: ensuring the security of its citizens.

Today, we are not in the same situation as in the early 2000s, when social media platforms emerged. Most governments are aware that they need to establish clear guidelines and standards for the development, deployment, and use of AI systems. But how do we balance the precautionary principle with the need to take risks and innovate? This is particularly difficult for a technology still under development, where future levels of AI (such as artificial general intelligence) make it hard to foresee applications and risks, particularly when innovation takes place behind closed doors in private companies.

As a result, we may need to develop new forms of tech regulation and governance that are much more adaptive and agile. Although this is particularly difficult today given the polarisation of the multilateral system, particularly around questions related to democracy and human rights, international cooperation and coordination are vital to address the global challenges posed by AI and democracy. Collaboration between governments, organisations, and researchers can help establish ethical standards, share best practices, and develop harmonised regulations.

The European Union is leading the charge here with its proposed AI Act, which categorises AI applications according to four risk levels. For example, the law bans applications that create “unacceptable risk”, such as the government-run social scoring technology used in China. “High risk” applications, such as CV-scanning technology, face specific legal requirements, while applications deemed lower risk are largely left unregulated.

4. Raise AI and digital literacy

Finally, promoting digital and media literacy is essential to empower citizens in the AI era. As mentioned previously, we need to ensure that technology does not fuel distrust of institutions and information ecosystems. Citizens, and especially young people, who are among the first adopters, need to be included in the design and governance of technology. This is the best way to increase their literacy and mitigate the risk of harm.

Educating the public about AI technologies, their limitations, and potential biases can help individuals critically evaluate information and resist manipulation. Making AI more visible to citizens is also key in this context. Efforts should be made to improve AI literacy programs in schools and provide accessible resources for lifelong learning. Furthermore, fostering digital media literacy skills can enable citizens to identify and counter disinformation, enhancing the overall resilience of democratic processes.

According to the OECD, only a little over half of 15-year-olds in the EU reported being taught how to detect whether information is subjective or biased. At the Geneva Graduate Institute, we have launched a project to allow young people to participate in shaping the role of AI in the future of democracy. The aim is to raise awareness, especially among young people, about the use of AI in democratic processes and citizen-government interactions, and to encourage youth participation by engaging them in thinking about the future of democracy.

If the revolutionary AI that is set to shape our futures is to be human-centred and beneficial for all, we must improve the transparency, regulation, and understanding of the extent to which these powerful tools are already shaping our interactions with political organisations. At the same time, more people from different backgrounds and parts of the world need to be given a say in how the applications are designed, adopted, and governed. This is particularly crucial given the convergence of several technological and scientific breakthroughs, such as AI and synthetic biology, that will increase the complexity of the decisions our societies will have to make in the coming years.

By Dr Jérôme Duberry for I by IMD’s June 2023 Magazine
Academic Advisor of the Executive Diploma in Diplomacy, Negotiation and Policy.
Director of the Geneva Graduate Institute’s Tech Hub and Senior Research Fellow at the Albert Hirschman Centre on Democracy and at the Centre for International Environmental Studies.

The AI-generated image was created using Microsoft Bing’s image creator.
