AI Governance Briefing Series for Permanent Missions to the UN in Geneva


From November 7th to December 8th, 2023, the Simon Institute for Longterm Governance (SI) organized a briefing series on AI governance for Permanent Missions to the United Nations (UN) in Geneva, in collaboration with the Permanent Missions of the Republic of Costa Rica and the Republic of Kenya. The briefing series brought Member State representatives together to discuss the emerging opportunities and risks associated with artificial intelligence (AI), and the role of the multilateral system in addressing AI governance.

The series consisted of three 90-minute briefings held over six weeks. You can explore the links below to access and download the corresponding memos from each briefing.

(Note: A briefing note reflecting more recent developments in AI governance, up to April 2024, is available here.)

In recent months, AI has gained increased attention across the UN. While it holds the potential to accelerate progress toward the Sustainable Development Goals and boost global development, it also poses risks, including bias, disinformation, and misuse. The rapid and uncertain development of AI underscores the need for effective governance solutions, especially at the multilateral level. This briefing series aimed to equip permanent missions with the knowledge needed to participate effectively in discussions on AI governance, and to offer a platform for discussion and exchange on the subject.

Held on November 7th, the first briefing offered participants an overview of the basics of AI. The session covered the technical underpinnings of AI, the different types of AI, AI’s development trajectory, and key opportunities and risks. During the second half of the briefing, Haydn Belfield, researcher at the Centre for the Study of Existential Risk at Cambridge University, provided an overview of key risks associated with frontier AI systems, including societal harm, misuse, and a potential loss of control. 

The second briefing, on November 21st, explored strategies and frameworks for governing AI at a multilateral level. It covered key governance challenges, existing initiatives, and proposed models for international AI governance. The second half of the briefing featured a Q&A with Lewis Ho, strategy and governance researcher at Google DeepMind, who discussed the challenges of achieving coherence in global AI governance and the potential role of the private sector in establishing a global governance regime. During the Q&A, participants discussed the risk of overly fragmented governance efforts, while also noting that pre-existing institutional structures likely cannot be copied wholesale to the case of AI.

The final briefing, on December 8th, provided an overview of ongoing AI-related initiatives at the UN level, including the Global Digital Compact (GDC). The presentation was followed by a participant-driven fishbowl discussion featuring the co-facilitators of the GDC (H.E. Ms. Anna Karin Eneström, Permanent Representative of Sweden to the UN in New York, and H.E. Mr. Chola Milambo, Permanent Representative of the Republic of Zambia to the UN in New York) and Renata Dwan, Special Advisor, Office of the UN Secretary-General's Envoy on Technology, with moderation by Sam Daws, Director of the Project on UN Governance and Reform at Oxford University. The discussion focused on ways the GDC could effectively contribute to AI governance, and offered the Genevan diplomatic community a platform to have their voices heard on the subject. Participants discussed the importance of emphasizing AI's benefits, rather than just risks, and underscored the need to incorporate existing issues, such as the digital divide, into discussions related to AI governance.

Going forward, SI aims to continue supporting permanent missions in navigating AI governance at the multilateral level. For more information on the briefing series, or to inquire about further support on the subject, please don’t hesitate to reach out to Belinda Cleeland at belinda@simoninstitute.ch.