Featured Archives - Future of Life Institute
https://futureoflife.org/category/featured/
Preserving the long-term future of life.

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats
https://futureoflife.org/open-letter/ai-policy-for-a-better-future-on-addressing-both-present-harms-and-emerging-threats/
25 October 2023

This open letter is also available as a PDF.

Dear Senate Majority Leader Schumer, Senator Mike Rounds, Senator Martin Heinrich, Senator Todd Young, Representative Anna Eshoo, Representative Michael McCaul, Representative Don Beyer, and Representative Jay Obernolte,

As two leading organizations dedicated to building an AI future that supports human flourishing, Encode Justice and the Future of Life Institute represent an intergenerational coalition of advocates, researchers, and technologists. We acknowledge that without decisive action, AI may continue to pose civilization-changing threats to our society, economy, and democracy.

At present, we find ourselves face-to-face with tangible, wide-reaching challenges from AI like algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems: early reports indicate that GPT-4 can be jailbroken to generate bomb-making instructions, and that AI intended for drug discovery can be repurposed to design tens of thousands of lethal chemical weapons in just hours. If AI surpasses human capabilities at most tasks, we may struggle to control it altogether, with potentially existential consequences. We must act fast.

With Congress slated to consider sweeping AI legislation, lawmakers are increasingly looking to experts to advise on the most pressing concerns raised by AI and the proper policies to address them. Fortunately, AI governance is not zero-sum – effectively regulating AI now can meaningfully limit present harms and ethical concerns, while mitigating the most significant safety risks that the future may hold. We must reject the false choice between addressing the documented harms of today and the potentially catastrophic threats of tomorrow.

Encode Justice and the Future of Life Institute stand in firm support of a tiered federal licensing regime, similar to that proposed jointly by Sen. Blumenthal (D-CT) and Sen. Hawley (R-MO), to measure and minimize the full spectrum of risks AI poses to individuals, communities, society, and humanity. Such a regime must be precisely scoped, encompassing general-purpose AI and high-risk use cases of narrow AI, and should apply the strictest scrutiny to the most capable models that pose the greatest risk. It should include independent evaluation of potential societal harms like bias, discrimination, and behavioral manipulation, as well as catastrophic risks such as loss of control and facilitated manufacture of WMDs. Critically, it should not authorize the deployment of an advanced AI system unless the developer can demonstrate it is ethical, fair, safe, and reliable, and that its potential benefits outweigh its risks.

We offer the following additional recommendations:

  • A federal oversight body, similar to the National Highway Traffic Safety Administration, should be created to administer this AI licensing regime. Since AI is a moving target, pre- and post-deployment regulations should be designed with agility in mind.
  • Given that AI harms are borderless, we need rules of the road with global buy-in. The U.S. should lead in intergovernmental standard-setting discussions. Events aimed at regulatory consensus-building, like the upcoming U.K. AI Safety Summit, must continue to bring both allies and adversaries to the negotiating table, with an eye toward binding international agreements. International efforts to manage AI risks must include the voices of all major AI players, including the U.S., U.K., E.U., and China, as well as countries that are not developing advanced AI but are nonetheless subject to its risks, including much of the Global South.
  • Lawmakers must move towards a more participatory approach to AI policymaking that centers the voices of civil society, academia, and the public. Industry voices should not dominate the conversation, and a concerted effort should be made to platform a diverse range of voices so that the policies we craft today can serve everyone, not just the wealthiest few.

Encode Justice, a movement of nearly 900 young people worldwide, represents a generation that will inherit the AI reality we are currently building. In the face of one of the most significant threats to our generation’s shared future—the specter of catastrophic AI—we refuse to bury our heads in the sand. At the same time, we refuse to abandon our unfinished efforts to mitigate existing harms and create a more equal and just America. The Future of Life Institute remains committed to steering this transformative technology for the good of humanity; the ongoing, out-of-control AI arms race risks our lives, our civil liberties, and our wellbeing. Together, we see an urgent moral imperative to confront present-day risks and future-proof for oncoming ones. AI licensing presents an opportunity to do both.

Sincerely,

Encode Justice
Future of Life Institute
Pause Giant AI Experiments: An Open Letter
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
22 March 2023

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.

In addition to this open letter, we have published a set of policy recommendations, “Policymaking in the Pause” (12 April 2023), which can be found here.

This open letter is available in French, Arabic, and Brazilian Portuguese. You can also download this open letter as a PDF.

Global AI Policy
https://futureoflife.org/resource/ai-policy/
16 December 2022

How countries and organizations around the world are approaching the benefits and risks of AI

Artificial intelligence (AI) holds great economic, social, medical, security, and environmental promise. AI systems can help people acquire new skills and training, democratize services, deliver faster production times and quicker iteration cycles, reduce energy usage, provide real-time environmental monitoring for pollution and air quality, enhance cybersecurity defenses, boost national output, reduce healthcare inefficiencies, create new kinds of enjoyable experiences and interactions for people, and improve real-time translation services to connect people around the world. For all of these reasons and many more, researchers are excited about the potential of AI systems to help manage some of the world’s hardest problems and improve countless lives.

But in order to realize this potential, the challenges associated with AI development have to be addressed. This page highlights four complementary resources to help decision makers navigate AI policy: a dashboard that analyzes the AI governance documents published in the OECD’s database; a global landscape of national and international AI strategies; a list of prominent AI policy challenges and the key recommendations that have been made to address them; and a list of AI policy resources for those hoping to learn more.

1. National Strategy Radar

NOTE: This resource is not designed for use on mobile. Please view on desktop for the best experience.

The Future of Life Institute has partnered with PricewaterhouseCoopers on an initiative to analyze soft- and hard-law efforts to govern artificial intelligence (AI). The dashboard below was created with the help of a natural language processing tool that categorized documents downloaded from the OECD’s AI governance database in February 2022. Further background information on this initiative is available in this blog post, and users can expect periodic updates to this resource.
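The PwC tool itself is not public. As a rough, hypothetical sketch of what this kind of document categorization can look like, the Python snippet below assigns topic labels to a folder of plain-text policy documents using a simple keyword lexicon; the topic names, keywords, and documents/ folder layout are all invented for illustration, and a real system would use a trained classifier rather than keyword matching.

```python
# Hypothetical sketch of keyword-based topic categorization, loosely
# illustrating the kind of NLP pipeline described above. This is NOT
# the PwC tool: the topic labels, keywords, and "documents/" folder
# layout are invented for illustration only.
from pathlib import Path

# Invented topic lexicon: each topic maps to indicative terms.
TOPICS = {
    "Safety & Risk": ["risk", "safety", "harm", "audit"],
    "Economy & Labour": ["employment", "labour", "productivity", "jobs"],
    "Governance": ["regulation", "oversight", "standards", "licensing"],
}

def categorize(text: str) -> list[str]:
    """Return every topic whose indicative terms appear in the text."""
    lowered = text.lower()
    matches = [topic for topic, terms in TOPICS.items()
               if any(term in lowered for term in terms)]
    return matches or ["Uncategorized"]

if __name__ == "__main__":
    # Assume one plain-text policy document per file in ./documents/.
    for doc in sorted(Path("documents").glob("*.txt")):
        topics = categorize(doc.read_text(encoding="utf-8"))
        print(f"{doc.name}: {', '.join(topics)}")
```

Whatever model sits behind it, the input/output shape is the same as what the views below display: documents in, per-document topic labels out.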

Summary View

This dashboard summarizes the distribution of AI documents published by governments, sorted by geography, year, and topic. The fact that a country lacks a bubble does not mean it lacks documents relevant to artificial intelligence; rather, it indicates that such documents are not available within the OECD database.

How to use:

  • Clicking on one of the countries on the map will display the year and topic distribution of that country.
  • Clicking on a topic in the bottom right frame will display the distribution of that topic on the map and the bar chart.


Document View

This view gives an in-depth look at each document individually, organized by country of origin and by the topics identified by the natural language processing tool developed by PwC.

How to use:

  • Clicking on the download icon to the left of the file name will open the document in question.
  • On the right-hand side, documents can be filtered by year of publication.
  • On the left-hand side, users can select and filter by topic.

2. AI Policy Challenges

This page is intended as an introduction to the major challenges that society faces when attempting to govern artificial intelligence. FLI acknowledges that this list is not comprehensive, but rather a sample of the issues we believe are consequential.

Here are ten areas of particular concern for the safe and beneficial development of AI in the near and far future. These should be prioritized by policymakers seeking to prepare for and mitigate the risks of AI, as well as to harness its benefits.

3. AI Policy Resources

The evolution of AI systems has proved so rapid that society now expects novel methods and applications every day. To keep up, stakeholders in the public, private, and nonprofit worlds are responding with a variety of instruments, ranging from soft law and hard law to academic and grey literature. As a result, the resources that describe and respond to the policy challenges generated by AI are always in flux. This page contains a few excellent resources to help you stay up to date.

Podcast episodes:

  • Daniela and Dario Amodei on Anthropic (4 March 2022): https://futureoflife.org/podcast/daniela-and-dario-amodei-on-anthropic/
  • Anthony Aguirre and Anna Yelizarova on FLI’s Worldbuilding Contest (9 February 2022): https://futureoflife.org/podcast/anthony-aguirre-and-anna-yelizarova-on-flis-worldbuilding-contest/
  • David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy (26 January 2022): https://futureoflife.org/podcast/david-chalmers-on-reality-virtual-worlds-and-the-problems-of-philosophy/
  • Rohin Shah on the State of AGI Safety Research in 2021 (2 November 2021): https://futureoflife.org/podcast/rohin-shah-on-the-state-of-agi-safety-research-in-2021/
  • Filippa Lentzos on Global Catastrophic Biological Risks (1 October 2021): https://futureoflife.org/podcast/filippa-lentzos-on-global-catastrophic-biological-risks/
  • Susan Solomon and Stephen Andersen on Saving the Ozone Layer (16 September 2021): https://futureoflife.org/podcast/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/
  • James Manyika on Global Economic and Technological Trends (7 September 2021): https://futureoflife.org/podcast/james-manyika-on-global-economic-and-technological-trends/
  • Michael Klare on the Pentagon’s view of Climate Change and the Risks of State Collapse (30 July 2021): https://futureoflife.org/podcast/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/
  • Avi Loeb on ‘Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures (9 July 2021): https://futureoflife.org/podcast/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/
  • Avi Loeb on UFOs and if they’re Alien in Origin (9 July 2021): https://futureoflife.org/podcast/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/
  • Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI (1 June 2021): https://futureoflife.org/podcast/nicolas-berggruen-on-the-dynamics-of-power-wisdom-and-ideas-in-the-age-of-ai/
  • Bart Selman on the Promises and Perils of Artificial Intelligence (20 May 2021): https://futureoflife.org/podcast/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/
  • Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century (21 April 2021): https://futureoflife.org/podcast/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/
  • Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures (1 April 2021): https://futureoflife.org/podcast/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/
  • Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI (20 March 2021): https://futureoflife.org/podcast/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/
  • Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons (25 February 2021): https://futureoflife.org/podcast/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/
  • Beatrice Fihn on the Total Elimination of Nuclear Weapons (22 January 2021): https://futureoflife.org/podcast/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/