Statement in the run-up to the Seoul AI Safety Summit

We provide recommendations for the upcoming AI Safety Summit in Seoul, most notably the appointment of a coordinator for collaboration between the AI Safety Institutes.
Published: May 20, 2024
Author: Imane Bello (Ima)

View this statement as a PDF in English or French.

The second AI Safety Summit will take place on May 21-22 in Seoul, as a follow-up to the first Summit at Bletchley Park last November, and the Future of Life Institute is honoured to be participating once again. The first day in Seoul will feature a virtual session of heads of state and government and is expected to result in the signing of a Leaders’ Declaration calling for international cooperation in AI governance. An in-person meeting of digital ministers the following day is then set to adopt a Ministerial Statement reaffirming the participants’ commitment to AI safety, innovation and inclusivity. An annex labelled “Seoul Statement of Intent towards International Cooperation in AI Safety Science” is also expected.

Context

The Future of Life Institute (FLI) is an independent non-profit organisation founded in 2014 that works on reducing global catastrophic risks from powerful technologies. At a 2017 conference, FLI formulated one of the earliest sets of artificial intelligence (AI) governance principles, the Asilomar AI Principles. The organisation has since become one of the leading voices on AI policy in Washington D.C. and Brussels, and is now the designated civil society actor for AI recommendations for the UN Secretary-General’s Digital Cooperation Roadmap.

Since the inaugural AI Safety Summit, held at Bletchley Park in November 2023, at which FLI was a selected civil society participant, we have seen meaningful steps forward in AI governance at the national and international levels. The landmark EU AI Act, the first comprehensive legal framework for this transformative technology, was successfully passed. Crucially, it included the regulation of foundation models, thanks in large part to the advocacy of FLI and civil society partners.

Despite this progress, we have a long way to go. In March, FLI partnered with The Elders, an international organisation founded by Nelson Mandela that brings together former world leaders, including former UN Secretary-General Ban Ki-moon, in pursuit of peace, justice, human rights and a sustainable planet, to release an open letter calling on world leaders to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI. The AI Safety Summits are crucial global opportunities to do precisely this with regard to AI.

Recommendations

We would like to thank the Summit Organizers for their leadership in convening the world’s second AI Safety Summit, and for inviting FLI to participate. We are pleased to see so many global leaders coming together to address this urgent and crucial issue.

With this in mind, FLI would like to submit the following recommendations for actions coming out of this historic convening:

  • Prioritization of AI safety research as a key element in furthering responsible AI innovation;
  • International cooperation to advance AI safety;
  • Compatible binding guardrails to address AI safety risks;
  • Leveraging, promoting and fostering common scientific understanding through:
    • The independently-led International AI Safety Report and its iterations;
    • Synergy on AI testing capabilities;
    • The joint creation of evaluations, data sets and risk thresholds;
    • Cooperation on safety research and best practices via the network of AI safety institutes;
  • Credible external evaluations undertaken for advanced AI models or systems developed or used in respective jurisdictions;
  • Collaboration with all stakeholders to establish common frameworks for developing proposals in advance of the third Safety Summit, in France.

As we build on this work and look ahead to the French Summit, our key recommendation is that the AI Safety Institutes formalise cooperation, including through the appointment of a coordinator. In so doing, the Seoul and Paris safety summits can pave the way for an international agency on AI, as called for by the UN Secretary-General, and more broadly help to build a new international architecture for AI regulation.

Stakeholders are encouraged to direct any questions to FLI’s AI Safety Summit Representative, Imane Bello (ima@futureoflife.org).

For all press enquiries, please get in touch with FLI Communications Director Ben Cumming (ben.cumming@futureoflife.org).

This content was first published at futureoflife.org on May 20, 2024.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.


