Statement Archives - Future of Life Institute
https://futureoflife.org/category/statement/

FLI Praises AI Whistleblowers While Calling for Stronger Protections and Regulation
https://futureoflife.org/recent-news/ai-whistleblowers-and-stronger-protections/
Tue, 16 Jul 2024

Recent revelations spotlight the crucial role that whistleblowers and investigative journalists play in making AI safe from Big Tech’s reckless race to the bottom.

Reports of pressure to fast-track safety testing, and of attempts to prevent employees from publicly voicing concerns, reveal an alarming lack of accountability and transparency. This puts us all at risk.

As AI companies frantically compete to create increasingly powerful and potentially dangerous systems without meaningful governance or oversight, it has never been more important that courageous employees bring bad behavior and safety issues to light. Our continued wellbeing and national security depend on it. 

We need to strengthen current whistleblower protections. Today, many of these protections apply only when a law is being broken. Given that AI is largely unregulated, employees and ex-employees cannot safely speak out when they witness dangerous and irresponsible practices. We urgently need stronger laws to ensure transparency, like California’s proposed SB 1047, which aims to deliver safe and secure innovation for frontier AI.

The Future of Life Institute commends the brave individuals who are striving to bring all-important incidents and transgressions to the attention of governments and the general public. Lawmakers should act immediately to pass legal measures that provide the protection these individuals deserve.

Anthony Aguirre, Executive Director of the Future of Life Institute

Future of Life Institute Statement on the Pope’s G7 AI Speech
https://futureoflife.org/aws/future-of-life-institute-statement-on-the-popes-g7-ai-speech/
Tue, 18 Jun 2024

CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after the Pope gave a speech at the G7 in Italy, raising the alarm about the risks of out-of-control AI development.

“The Future of Life Institute strongly supports the Pope’s call at the G7 for urgent political action to ensure artificial intelligence acts in service of humanity. This includes banning lethal autonomous weapons and ensuring that future AI systems stay under human control. I urge the leaders of the G7 nations to set an example for the rest of the world, enacting standards that keep future powerful AI systems safe, ethical, reliable, and beneficial.”

Statement in the run-up to the Seoul AI Safety Summit
https://futureoflife.org/ai-policy/statement-seoul-ai-safety-summit/
Mon, 20 May 2024

View this statement as a PDF in English or French.

The second AI Safety Summit will take place on May 21-22 in Seoul, as a follow-up to the first Summit at Bletchley Park last November, and the Future of Life Institute is honoured to be participating once again. The first day in Seoul will feature a virtual session of heads of state and government, and is expected to result in the signing of a Leaders’ Declaration calling for international cooperation in AI governance. An in-person meeting of digital ministers the following day is then set to adopt a Ministerial Statement reaffirming the participants’ commitment to AI safety, innovation and inclusivity. An annex labelled “Seoul Statement of Intent towards International Cooperation in AI Safety Science” is also expected.

Context

The Future of Life Institute (FLI) is an independent non-profit organisation founded in 2014 that works on reducing global catastrophic risks from powerful technologies. At a 2017 conference, FLI formulated one of the earliest sets of artificial intelligence (AI) governance principles, the Asilomar AI Principles. The organisation has since become one of the leading voices on AI policy in Washington D.C. and Brussels, and is now the designated civil society actor for AI recommendations for the UN Secretary-General’s Digital Cooperation Roadmap.

Since the inaugural AI Safety Summit, held at Bletchley Park in November 2023, at which FLI was a selected civil society participant, we have seen meaningful steps forward in AI governance at the national and international levels. The landmark EU AI Act, the first comprehensive legal framework for this transformative technology, was successfully passed. Crucially, it included the regulation of foundation models, thanks in large part to the advocacy of FLI and civil society partners.

Despite this progress, we have a long way to go. In March, FLI partnered with The Elders (an international organisation founded by Nelson Mandela that brings together former world leaders, including former UN Secretary-General Ban Ki-moon, in pursuit of peace, justice, human rights and a sustainable planet) to release an open letter calling on world leaders to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI. The AI Safety Summits are crucial global opportunities to do precisely this with regard to AI.

Recommendations

We would like to thank the Summit Organizers for their leadership in convening the world’s second AI Safety Summit, and for inviting FLI to participate. We are pleased to see so many global leaders coming together to address this urgent and crucial issue.

With this in mind, FLI would like to submit the following recommendations for actions coming out of this historic convening:

  • Prioritization of AI safety research as a key element in furthering responsible AI innovation;
  • International cooperation to advance AI safety;
  • Compatible binding guardrails to address AI safety risks;
  • Leveraging, promoting and fostering common scientific understanding through:
    • The independently-led International AI Safety Report and its iterations;
    • Synergy on AI testing capabilities;
    • The joint creation of evaluations, data sets and risk thresholds;
    • Cooperation on safety research and best practices via the network of AI safety institutes;
  • Credible external evaluations undertaken for advanced AI models or systems developed or used in respective jurisdictions;
  • Collaboration with all stakeholders to establish common frameworks for developing proposals in advance of the third Safety Summit, in France.

As we build on this work and look ahead to the French Summit, our key recommendation is that AI safety institutes formalise cooperation, including through the appointment of a coordinator. In so doing, the Seoul and Paris safety summits can pave the way for an international agency on AI, as called for by the UN Secretary-General, and more broadly help to build a new international architecture for AI regulation.

Stakeholders are encouraged to direct any questions to FLI’s AI Safety Summit Representative, Imane Bello (ima@futureoflife.org).

For all press enquiries, please get in touch with FLI Communications Director Ben Cumming (ben.cumming@futureoflife.org).

FLI Statement on Senate AI Roadmap
https://futureoflife.org/ai-policy/fli-statement-on-senate-ai-roadmap/
Thu, 16 May 2024

CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after Senate Majority Leader Chuck Schumer released the long-awaited Senate AI Roadmap:

“I applaud Senators Schumer, Rounds, Young, and Heinrich for this important step toward tangible legislation to rein in the AI arms race that is driven by corporate profits, not what’s best for people around the world. It is good that this roadmap recognizes the risks from AGI and other powerful AI systems. However, we need more action as soon as possible.

“The reality is that the United States is already far behind Europe in developing and implementing policies that can make technological innovation sustainable by reducing the threats and harms presented by out-of-control, unchecked AI development. While this report is a good step in the right direction, more steps are urgently needed, including commonsense regulation to ensure that AI remains safe, ethical, reliable, and beneficial. As we have seen this week with OpenAI’s and Google’s release of their latest models, these companies remain locked in an accelerating race to create increasingly powerful and risky systems, without meaningful guardrails or oversight, even as the leaders of these corporations have stated that future more advanced AI could potentially cause human extinction.

“In order to harness the massive benefits of AI and minimize its considerable risks, policymakers and elected officials must be vigilant in the face of Big Tech recklessness and make sure that technological advancement is in the best interests of all – not just a handful of private corporations and billionaires.”

Tegmark participated in the Senate’s bipartisan AI Insight Forum in October.

See Max Tegmark’s full written testimony for the Senate AI Insight Forum.

Max Tegmark is a professor doing AI research at MIT, with more than three hundred technical papers and two bestselling books. He recently made headlines around the world by leading FLI’s open letter calling for a six-month pause on the training of advanced AI systems. It was signed by more than 30,000 experts, researchers, industry figures, and other leaders, and sounded the alarm on ongoing and unchecked AI development.

The Future of Life Institute is a global non-profit organization working to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.
