United Nations AI Resolution: a Significant Global Policy Effort to Harness the Technology for Sustainable Development

by Mouloud Khelif | May 6, 2024 | Alumni, Digital, International Law, International Relations

On 21 March, the United Nations adopted its first resolution on artificial intelligence, a groundbreaking text urging member states to ensure that ‘safe, secure, and trustworthy AI systems’ are developed responsibly and with respect for human rights and international law.

The United Nations General Assembly (UNGA) overwhelmingly passed the first global resolution on artificial intelligence (AI) on 21 March. Member states are urged to protect human rights and personal data and to monitor AI for potential harms so the technology can benefit all.

The unanimous adoption of the US-led United Nations (UN) resolution on the promotion of ‘safe, secure, and trustworthy artificial intelligence systems that will also benefit sustainable development for all’ is a historic global effort to ensure the ethical and sustainable use of AI. While nonbinding, the draft resolution was supported by more than 120 states, including China, and endorsed without a vote by all 193 UN member states.

Vice President Kamala Harris praised the agreement, stating that this “resolution, initiated by the US and co-sponsored by more than 100 nations, is a historic step towards establishing clear international norms for AI and fostering safe, secure, and trustworthy AI systems.”

Merve Hickok, President of the Center for AI and Digital Policy, lauded the Biden administration for driving global AI policy with a human rights focus: “We agree with the Administration that human rights and fundamental freedoms must be central to the development and use of AI systems. Our annual AI and Democratic Values Index focuses on the UN Universal Declaration of Human Rights, which is currently the foundation for the UN resolution on AI.”

To understand this resolution’s significance and potential impact on AI policy, we will examine five dimensions: global policy and regulation, ethical design, data privacy and protection, transparency and trust, and AI for sustainable development.

Global policy and regulation

EU policymakers have paved the way with the recently passed AI Act, the first comprehensive legislation covering the technology. The Council of Europe (CoE), a 46-member human rights body, has also agreed on a draft AI Treaty to protect human rights, democracy, and the rule of law.

The White House wants to play a leadership role in shaping global AI regulations. Last October, President Biden unveiled a landmark Executive Order on “Safe, Secure, and Trustworthy AI.” In March, Vice President Harris announced a new policy of the White House Office of Management and Budget for federal agencies’ use of AI.

Other countries and regions are also developing their own frameworks, guidelines, strategies, and policies. For instance, at the African Union (AU) level, its Development Agency (AUDA) released a White Paper on a pan-African AI policy and continental roadmap in March.

The UN resolution acknowledges that multiple initiatives may lead the way in the right direction and further encourages member states, international organisations, and others to assist developing countries in their national processes.

Ethical design

The text highlights the need for ethical design in all AI-based decision-making systems (6.b, p5/8). AI systems should be designed, developed, and operated within the frameworks of national, regional, and international laws to minimise risks and liabilities and ensure the preservation of human rights and fundamental freedoms (5., p5/8). A collaborative approach combining AI, ethics, law, philosophy, and social sciences can help craft comprehensive ethical frameworks and standards to govern the design, deployment, and use of AI-powered decision-making tools. Ethical design is a critical aspect of promoting safe, secure, and trustworthy AI systems.

The resolution urges member states and other stakeholders to integrate ethical considerations in the design, development, deployment, and use of AI to safeguard human rights and fundamental freedoms, including the right to life, privacy, and freedom of expression. Introducing the draft, Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, added that ‘AI should be created and deployed through the lens of humanity and dignity, safety and security, human rights, and fundamental freedoms’.

Data privacy and protection

The UN resolution addresses data privacy safeguards to guarantee safe AI development, especially when the data used is sensitive personal information such as health, biometrics, or financial data. Member states and relevant stakeholders are encouraged to monitor AI systems for risk and assess their impact on data security measures and personal data protection throughout their life cycle (6.e, p5/8). Privacy impact assessments and detailed product testing during development are suggested as mechanisms to protect data and preserve our fundamental privacy rights.

Transparency and trust

The document highlights the value of transparency and consent in AI systems. Transparency, inclusivity, and fairness help ensure that AI systems serve our diverse needs and preferences. To preserve fundamental human rights, algorithms that affect our lives must be developed so that they do not harm people or the environment. This includes providing notice and explanation, promoting human oversight, and ensuring that automated decisions are reviewed. When necessary, human decision-making alternatives should be available, along with effective redress. Transparent, interpretable, and explainable AI systems foster reliability and accountability, allowing end users to better understand, accept, and trust the outcomes and decisions that affect them.

AI for sustainable development

The resolution affirms that safe, secure, and trustworthy AI systems can accelerate progress toward achieving the UN’s 17 Sustainable Development Goals (SDGs) in all three dimensions – economic, social, and environmental – in a balanced way. 

AI technologies can augment human intelligence and capabilities, improve efficiency, and help reduce environmental impact. For instance, AI models can predict and detect errors, support more effective planning, and boost the efficiency of renewable energy. AI can also streamline transportation and traffic management and anticipate energy needs and production. Conversely, any AI system designed, developed, deployed, and used without proper safeguards poses risks that could hamper progress toward the 2030 Agenda and its SDGs.

The aim is to narrow the digital divide, both between wealthy industrialised nations and developing countries and within countries, and to give all nations proper representation in discussions on AI governance for sustainable development. The intention is also to ensure that less developed nations have access to the technology, infrastructure, and capabilities needed to reap the promised gains of AI, such as disease detection, flood forecasting, effective capacity building, and a workforce upskilled for the future.

The UN resolution is a remarkable step in global AI policy because it addresses many key drivers for AI to play a safe and effective role in sustainable development that will benefit all. It also recognises that innovation and regulation, far from being mutually exclusive, complement and reinforce one another.

By following up on the current consensus, implementing the recommendations, and aligning them with other regional and global initiatives, governments, public and private sectors, and other involved stakeholders can harness AI’s potential while minimising its risks.

The road ahead

This year, South Korea will co-host a virtual summit on AI safety in May, and France will hold the next in-person global gathering six months later. These meetings follow the first AI Safety Summit, inaugurated by UK Prime Minister Rishi Sunak at Bletchley Park last November.

More important developments in global AI policy and governance can be expected by September 2024 and the Summit of the Future in New York.

One is the work of the UN’s High-Level Advisory Body on AI leading to its final report. This will progress in parallel with and feed into the long-awaited Global Digital Compact process. 

Another will be the adoption of the CoE Convention on artificial intelligence, human rights, democracy, and the rule of law, and its subsequent ratification open to member and non-member states.

In the EU, the Commission has already started staffing and structuring the newly established AI Office. The AI Act has been adopted by the Parliament and awaits the EU Council’s formal approval. The legislation will enter into force 20 days after its publication in the Official Journal, with phased implementation and enforcement: after six months, AI practices posing unacceptable risks are prohibited; after 12 months, obligations for providers of general-purpose AI models take effect and member states must designate their national competent authorities; and after 24 months, the legislation becomes fully applicable.

On the African continent, the African Union Commission has begun holding a series of online consultations with various stakeholders across the continent to gather input and inform the development of an Africa-wide AI policy, with a focus on “building the capabilities of AU member states in AI skills, research and development, data availability, infrastructure, governance, and private sector-led innovation.”

The rapid advance of AI technologies poses new challenges for legislators around the world since existing rules struggle to keep up with the acceleration of technical progress. This demonstrates the critical need for regulatory frameworks that can adapt to AI’s evolving landscape. The governance of AI systems requires ongoing discussions on appropriate approaches that are agile, adaptable, interoperable, inclusive, and responsive to the needs of both developed and developing countries. The UNGA resolution opens the door to global cooperation on a safe, secure, and trustworthy AI for sustainable development that benefits all.

By Mouloud Khelif
Consultant – Digital Strategy, Policy, Governance
Alumnus of the Executive Master INP (2021), Executive Certificate in SDG Investing (2022), enrolled in the Executive Master in International Relations (2024)
mouloud.khelif@graduateinstitute.ch


The views and opinions expressed in the articles are those of the authors and do not necessarily reflect the position of The Graduate Institute, Geneva.