Open Letters Archives - Future of Life Institute
https://futureoflife.org/category/open-letters/

Written Statement of Dr. Max Tegmark to the AI Insight Forum
https://futureoflife.org/ai-policy/written-statement-of-dr-max-tegmark-to-the-ai-insight-forum/ (Tue, 24 Oct 2023)

AI Insight Forum: Innovation
October 24, 2023


Written Statement of Dr. Max Tegmark
Co-Founder and President of the Future of Life Institute
Professor of Physics at Massachusetts Institute of Technology

I first want to thank Majority Leader Schumer, the AI Caucus, and the rest of the Senators and staff who organized today’s event. I am grateful for the opportunity to speak with you all, and for your diligence in understanding and addressing this critical issue.

My name is Max Tegmark, and I am a Professor of Physics at MIT’s Institute for Artificial Intelligence and Fundamental Interactions and the Center for Brains, Minds and Machines. I am also the President and Co-Founder of the Future of Life Institute (FLI), an independent non-profit dedicated to realizing the benefits of emerging technologies and minimizing their potential for catastrophic harm.

Since 2014, FLI has worked closely with experts in government, industry, civil society, and academia to steer transformative technologies toward improving life through policy research, advocacy, grant-making, and educational outreach. In 2017, FLI coordinated development of the Asilomar AI Principles, one of the earliest and most influential frameworks for the governance of AI. FLI serves as the United Nations Secretary General’s designated civil society organization for recommendations on the governance of AI, and has been a leading voice in identifying principles for responsible development and use of AI for nearly a decade.

More recently, FLI made headlines by publishing an open letter calling for a six-month pause on the training of advanced AI systems more powerful than GPT-4, the state-of-the-art at the time of its publication. It was signed by more than 30,000 experts, researchers, industry figures, and other leaders, and sounded the alarm on ongoing, unchecked, and out-of-control AI development. As the Letter explained, the purpose of this pause was to allow our social and political institutions, our understanding of the capabilities and risks, and our tools for ensuring the systems are safe, to catch up as Big Tech companies continued to race ahead with the creation of increasingly powerful, and increasingly risky, systems. In other words, “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Innovation does not require uncontrollable AI

The call for a pause was widely reported, but many headlines missed a crucial nuance: a clarification in the subsequent paragraphs that is key to realizing the incredible promise of this transformative technology. The letter went on to read:

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

It is not my position, nor is it the position of FLI, that AI is inherently bad. AI promises remarkable benefits – advances in healthcare, new avenues for scientific discovery, increased productivity, among many more. What I am hoping to convey, however, is that we have no reason to believe vastly more complex, powerful, opaque, and uncontrollable systems are necessary to achieve these benefits, and that innovation in AI, and reaping its untold benefits, does not have to mean the creation of dangerous and unpredictable systems that cannot be understood or proven safe, with the potential to cause immeasurable harm and even wipe out humanity.

AI can broadly be grouped into three categories:

  • “Narrow” AI systems – AI systems that are designed and optimized to accomplish a specific task or to be used in a specific domain.
  • Controllable general-purpose AI systems – AI systems that can be applied to a wide range of tasks, including some for which they were not specifically designed, with general proficiency up to or similar to the brightest human minds, and potentially exceeding the brightest human minds in some domains.
  • Uncontrollable AI systems – Often referred to as “superintelligence,” these are AI systems that far exceed human capacity across virtually all cognitive tasks, and therefore by definition cannot be understood or effectively controlled by humans.

The first two categories have already yielded incredible advances in biochemistry, medicine, transportation, logistics, meteorology, and many other fields. There is nothing to suggest that these benefits have been exhausted. In fact, experts argue that with continued optimization, fine-tuning, research, and creative application, the current generation of AI systems can effectively accomplish nearly all of the benefits from AI we have thus far conceived, with several decades of accelerating growth. We do not need more powerful systems to reap these benefits.

Yet it is the stated goal of the leading AI companies to develop the third, most dangerous category of AI systems. A May 2023 blog post from OpenAI rightly points out that “it’s worth considering why we are building this technology at all.” In addition to some of the benefits mentioned above, the blog post justifies continued efforts to develop superintelligence by arguing that “it would be […] difficult to stop the creation of superintelligence” because “it’s inherently part of the technological path we are on.”

The executives of these companies have acknowledged that the risks could be catastrophic, with the legitimate potential to cause mass casualties and even human extinction. In a January 2023 interview, Sam Altman, CEO of OpenAI, said that “the bad case […] is, like, lights out for all of us.” In May 2023, Altman, along with Demis Hassabis, CEO of Google DeepMind, Dario Amodei, CEO of Anthropic, and more than 350 other executives, researchers, and engineers working on AI endorsed a statement asserting that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

It is important to understand that creation of these systems is not inevitable, particularly before we can establish the societal, governmental, and technical mechanisms to prepare for and protect against their risks. The race toward creating these uncontrollable AI systems is the result of a tech sector market dynamic where prospective investment and perverse profit incentives drive reckless, runaway scaling to create the most powerful possible systems, at the expense of safety considerations. This is what “innovation” means to them.

But creating the most powerful system does not always mean creating the system that best serves the well-being of the American people. Even if we “win” the global race to develop these uncontrollable AI systems, we risk losing our social stability, security, and possibly even our species in the process. Far from ensuring geopolitical dominance, the destabilizing effect of haphazard proliferation of increasingly powerful AI systems is likely to put the United States at a substantial geopolitical disadvantage, sowing domestic discord, threatening national security, and harming quality of life. Our aspirations should instead be focused on innovation that improves our nation and our lives by ensuring that the systems we deploy are controllable, predictable, reliable, and safe – systems that do what we want them to, and do it well.

For a cautionary example, we can look to the emergence of recommender algorithms in social media. Over the past decade, tremendous strides were made in developing more effective algorithms for recommending content based on the behavior of users. Social media in general, and these algorithms in particular, promised to facilitate interpersonal connection, social discourse, and exposure to high-quality content.

Because these systems were so powerful and yet so poorly understood, however, society was not adequately equipped to protect against their potential harms. The prioritization of engagement in recommender systems led to an unforeseen preference for content evocative of negative emotion, extreme polarization, and the promotion of sensationalized and even fabricated “news,” fracturing public discourse and significantly harming mental and social health in the process. The technology was also weaponized against the American people by our adversaries, exacerbating these harms. 

For uncontrollable AI systems, these types of misaligned preferences and unexpected ramifications are likely to be even more dangerous, unless adequate oversight and regulation are imposed. Much of my ongoing research at MIT seeks to advance our understanding of mechanistic interpretability, a field of study dedicated to understanding how and why these opaque systems behave the way they do. My talented students and colleagues have made incredible strides in this endeavor, but there is still much work to be done before we can reliably understand and predict the behavior of today’s most advanced AI systems, let alone potential systems that can operate far beyond human cognitive performance.

AI innovation depends on regulation and oversight

Though AI may be technically complex, Congress has extensive experience putting in place the necessary governance to mitigate risks from new technologies without foreclosing their benefits. In establishing the Federal Aviation Administration, you have facilitated convenient air travel, while ensuring that airplanes are safe and reliable. In establishing the Food and Drug Administration, you have cultivated the world’s leading pharmaceutical industry, treating ailments previously thought untreatable, while ensuring that the medicine we take is safe and will not cause undue harm.

The same can and should be done for AI. In order to harness the benefits of AI and minimize its risks, it is essential that we invest in further improving our understanding of how these systems work, and that we put in place the oversight and regulation necessary to ensure that, if these systems are created and deployed, they will be safe, ethical, reliable, and beneficial.

Regulation is often framed as an obstacle to innovation. But history has shown that failure to adequately regulate industries that pose catastrophic risk can be a far greater obstacle to technological progress. In 1979, the Three Mile Island nuclear reactor suffered a partial meltdown resulting from a mechanical failure, compounded by inadequate training and safety procedures among plant operators and management. 

Had the nuclear energy industry been subject to sufficient oversight for quality assurance of materials, robust auditing for safe operating conditions, and required training standards for emergency response procedures, the crisis could likely have been avoided. In fact, subsequent investigations showed that engineers from Babcock & Wilcox, the developers of the defective mechanism, had identified the design issue that caused the meltdown prior to the event, but failed to notify customers.

The result of this disaster was a near-complete shuttering of the American nuclear energy industry. The catastrophe fueled ardent anti-nuclear sentiment among the general public, and encouraged reactionary measures that made development of new nuclear power plants costly and infeasible. Following the incident at Three Mile Island, no new nuclear power plants were authorized for construction in the United States for over 30 years, foreclosing an abundant source of clean energy, squandering a promising opportunity for American energy independence, and significantly hampering innovation in the nuclear sector.

We cannot afford to risk a similar outcome with AI. The promise is too great. By immediately implementing proactive, meaningful regulation of the AI industry, we can reduce the probability of a Three Mile Island-like catastrophe, and safeguard the future of American AI innovation.

Recommendations

To foster sustained innovation that improves our lives and strengthens our economy, the federal government should take urgent steps by enacting the following measures:

  1. Protect against catastrophes that could derail innovation, and ensure that powerful systems are developed and deployed only if they will safely benefit the general public. To do so, we must require that highly-capable general purpose AI systems, and narrow AI systems intended for use in high-risk applications such as critical infrastructure, receive independent audits and licensure before deployment. Importantly, the burden of proving suitability for deployment should fall on the developer of the system, and if such proof cannot be provided, the system should not be deployed. This means approval and licensure for development of uncontrollable AI should not be granted at all, at least until we can be absolutely certain that we have established sufficient protocols for training and deployment to keep these systems in check.
    Auditing should include pre-training evaluation of safety and security protocols, and rigorous pre-deployment assessment of risk, reliability, and ethical considerations to ensure that the system does not present an undue risk to the well-being of individuals or society, and that the expected benefits of deployment outweigh the risks and harmful side effects. These assessments should include evaluation of potential risk from publishing the system’s model weights – an irreversible act that makes controlling the system and derivative systems virtually impossible – and provide requisite limitations on publication of and access to model weights as a condition of licensure. The process should also include continued monitoring and reporting of potential safety, security, and ethical concerns throughout the lifetime of the AI system. This will help identify and correct emerging and unforeseen risks, similar to the pharmacovigilance requirements imposed by the FDA.
  2. Develop and mandate rigorous cybersecurity standards that must be met by developers of advanced AI to avoid the potential compromise of American intellectual property, and prevent the use of our most powerful systems against us. To enforce these standards, the federal government should also require registration when acquiring or leasing access to large amounts of computational hardware, as well as when conducting large training runs. This would facilitate monitoring of proliferation of these systems, and enhance preparedness to respond in the event of an incident.
  3. Establish a centralized federal authority responsible for monitoring, evaluating, and regulating general-purpose AI systems, and advising other agencies on activities related to AI within their respective jurisdictions. In many cases, existing regulatory frameworks may be sufficient, or require only minor adjustments, to be applicable to narrow AI systems within specific sectors (e.g. financial sector, healthcare, education, employment, etc.). Advanced general-purpose AI systems, on the other hand, cut across several jurisdictional domains, present unique risks and novel capabilities, and are not adequately addressed by existing, domain-specific regulations or authorities. The centralized body would increase the efficiency of regulating these systems, and help to coordinate responses in the event of an emergency caused by an AI system.
  4. Subject developers of advanced general-purpose AI systems (i.e. those with broad, unpredictable, and emergent capabilities) to liability for harms caused by their systems. This includes clarifying that Section 230 of the Communications Decency Act does not apply to content generated by AI systems, even if a third-party provided the prompt to generate that content. This would incentivize caution and responsibility in the design of advanced AI systems, aligning profit motives with the safety and security of the general public to further protect against catastrophes that could derail AI innovation.
  5. Increase federal funding for research and development into technical AI safety, reliable assessments and benchmarks for evaluating and quantifying risks from advanced AI systems, and countermeasures for identifying and mitigating harms that emerge from misuse, malicious use, or unforeseen behavior of advanced AI systems. This will allow our tools for assessing and enhancing the safety of systems to keep pace with advancements in the capabilities of those systems, and will present new opportunities for innovating systems better aligned with the public interest.

Innovation is what is best, not what is biggest

I have no doubt there is consensus among those participating in this Forum, whether from government, industry, civil society, or academia, that the best path forward for AI must foster innovation, that American ingenuity should not be stifled, and that the United States should continue to act as a leader in technological progress on the global stage. That’s the easy part.

The hard part is defining what exactly “innovation” means, and what type of leader we seek to be. To me, “innovation” means manifesting new ideas that make life better. When we talk about American Innovation, we are talking not just about the creation of new technology, but about how that technology helps to further democratic values and strengthen our social fabric. How it allows us to spend more time doing what we love with those we love, and keeps us safe and secure, both physically and financially.

Again, the nuance here is crucial. “Innovation” is not just the manifestation of new ideas, but also ensuring that the realization of those ideas drives us toward a positive future. A future in which America is a global leader in AI innovation is therefore not necessarily one in which we have created a more powerful system — that is, a system with more raw power, that can do more things. It is one in which we have created the systems that lead to the best possible America: systems that are provably safe and controllable, where the benefits outweigh the risks. This future is simply not possible without robust regulation of the AI industry.

FLI on “A Statement on AI Risk” and Next Steps
https://futureoflife.org/ai-policy/fli-on-a-statement-on-ai-risk-and-next-steps/ (Tue, 30 May 2023)

The view that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” is now mainstream, with that statement being endorsed by a who’s who of AI experts and thought leaders from industry, academia, and beyond.

Although FLI did not develop this statement, we strongly support it, and believe the progress in regulating nuclear technology and synthetic biology is instructive for mitigating AI risk. FLI therefore recommends immediate action to implement the following recommendations.

Recommendations:

  • Akin to the Nuclear Non-Proliferation Treaty (NPT) and the Biological Weapons Convention (BWC), develop and institute international agreements to limit particularly high-risk AI proliferation and mitigate the risks of advanced AI, including track 1 diplomatic engagements between nations leading AI development, and significant contributions from non-proliferating nations that unduly bear risks of technology being developed elsewhere.
  • Develop intergovernmental organizations, akin to the International Atomic Energy Agency (IAEA), to promote peaceful uses of AI while mitigating risk and ensuring guardrails are enforced.
  • At the national level, establish rigorous auditing and licensing regimes, applicable to the most powerful AI systems, that place the burden of proving suitability for deployment on the developers of the system. Specifically:
    • Require pre-training auditing and documentation of a developer’s sociotechnical safety and security protocols prior to conducting large training runs, akin to the biocontainment precautions established for research and development that could pose a risk to biosafety.
    • Similar to the Food and Drug Administration’s (FDA) approval process for the introduction of new pharmaceuticals to the market, require the developer of an AI system above a specified capability threshold to obtain prior approval for the deployment of that system by providing evidence sufficient to demonstrate that the system does not present an undue risk to the wellbeing of individuals, communities, or society, and that the expected benefits of deployment outweigh risks and harmful side effects.
    • After approval and deployment, require continued monitoring of potential safety, security, and ethical risks to identify and correct emerging and unforeseen risks throughout the lifetime of the AI system, similar to pharmacovigilance requirements imposed by the FDA.
  • Prohibit the open-source publication of the most powerful AI systems unless particularly rigorous safety and ethics requirements are met, akin to constraints on the publication of “dual-use research of concern” in biological sciences and nuclear domains.
  • Pause the development of extremely powerful AI systems that significantly exceed the current state-of-the-art for large, general-purpose AI systems.

The success of these actions is neither impossible nor unprecedented: the last decades have seen successful projects at the national and international levels to avert major risks presented by nuclear technology and synthetic biology, all without stifling the innovative spirit and progress of academia and industry. International cooperation has led to, among other things, adoption of the NPT and establishment of the IAEA, which have mitigated the development and proliferation of dangerous nuclear weapons and encouraged more equitable distribution of peaceful nuclear technology.  Both of these achievements came during the height of the Cold War, when the United States, the USSR, and many others prudently recognized that geopolitical competition should not be prioritized over humanity’s continued existence.  

Only five years after the NPT went into effect, the BWC came into force, similarly establishing strong international norms against the development and use of biological weapons, encouraging peaceful innovation in bioengineering, and ensuring international cooperation in responding to dangers resulting from violation of those norms.  Domestically, the United States adopted federal regulations requiring extreme caution in the conduct of research and when storing or transporting materials that pose considerable risk to biosafety.  The Centers for Disease Control and Prevention (CDC) also published detailed guidance establishing biocontainment precautions commensurate to different levels of biosafety risk.  These precautions are monitored and enforced at a range of levels, including through internal institutional review processes and supplementary state and local laws.  Analogous regulations have been adopted by nations around the world.

Not since the dawn of the nuclear age has a new technology so profoundly elevated the risk of global catastrophe.  FLI’s own letter called on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”  It also stated that “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”  

Now, two months later – despite discussions at the White House, Senate hearings, widespread calls for regulation, public opinion strongly in favor of a pause, and an explicit agreement by the leaders of most advanced AI efforts that AI can pose an existential risk – there has been no hint of a pause, or even a slowdown.  If anything, the breakneck pace of these efforts has accelerated and competition has intensified.

The governments of the world must recognize the gravity of this moment, and treat advanced AI with the care and caution it deserves. AI, if properly controlled, can usher in a very long age of abundance and human flourishing. It would be foolhardy to jeopardize this promising future by charging recklessly ahead without allowing the time necessary to keep AI safe and beneficial.

Open Letter Against Reckless Nuclear Escalation and Use
https://futureoflife.org/open-letter/open-letter-against-reckless-nuclear-escalation-and-use/ (Tue, 18 Oct 2022)

The abhorrent Ukraine war has the potential to escalate into an all-out NATO-Russia nuclear conflict that would be the greatest catastrophe in human history. Although uncertainties remain about the humanitarian impact, recent peer-reviewed work [1] has suggested that we cannot rule out that nuclear winter would kill about 99% of people in America, Europe, Russia and China. More must be done to prevent such escalation. We, the undersigned, call for unequivocal condemnation of the following behaviors by any party:

  1. Nuclear first-use and threats or advocacy thereof
  2. Reckless escalation without significant military benefit, such as the perpetration of atrocities, militarily irrelevant assassinations, misleading atrocity propaganda [2], attacks on nuclear reactors, and the destruction of militarily irrelevant infrastructure (e.g. Nord Stream)
  3. Suppression of constructive discussion on how to avoid nuclear war, such as falsely framing opposition to (2) as appeasement or disloyalty

As was recently affirmed in a P5 statement [3], a nuclear war cannot be won and must never be fought. There are many de-escalation strategies that involve no concessions or appeasement. Breaking the current escalation spiral is a global moral imperative, because nuclear weapons threaten not merely those targeted by them, but all people on Earth.

References:

  1. Xia et al., Nature Food, 3, 586–596 (2022)
  2. “Atrocity propaganda”, Wikipedia (2022)
  3. “Joint statement of the 5 permanent Security Council members”, White House Briefing Room (Jan. 2, 2022)
Foresight in AI Regulation Open Letter
https://futureoflife.org/open-letter/foresight-in-ai-regulation-open-letter/ (Sun, 14 Jun 2020)

The emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life in Europe and elsewhere; it has been compared to both electricity and the internet. Both are general, ubiquitous, and reshaped the world. But the internet analogy is more apt: while electricity requires standards and regulation, it simply works the way it works, whereas the functioning of the internet and its economy was largely shaped by key policy choices made along the way. We now sit in the early days of AI, and the choices we make over the next decade will crucially shape its place in and relation to society. We applaud the European Commission for tackling the challenge of determining the role that government can and should play, and we support meaningful regulation of AI systems in high-risk application areas. The stakes are high, and the potential ability of AI to remake institutions means that it is wise to consider novel approaches to governance and regulation, rather than assuming that existing structures will suffice.

The Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests, which in some cases involve weakening regulation and downplaying potential risks related to AI. We hope that the Commission will stand firm in doing neither. Moreover, as experts who have been involved for years or decades in developing the core technologies, we would like to emphasize one central point: that while it is difficult to forecast exactly how or how fast technological progress will occur, it is easy to predict that it will occur. It is imperative, then, to consider AI not just as it is now, represented largely by a few particular classes of data-driven machine learning systems, but in the forms it is likely to take.

AI does and will come in many forms, including as intelligent software tools, as integrated into massive online systems, and as instantiated as software agents designed to substitute for humans. Each of these raises particular issues and challenges: how do we govern recommendation tools whose recommendations are difficult to predict or understand? How do we manage massive systems that mediate interactions between people, and in which people serve as part of the system? What do we do with software agents that replace people in their jobs or impersonate people in their interactions?

These and many other questions are challenging but largely addressable through proper governance for today’s AI systems. But in each case AI systems of the future will be more capable, more flexible, more general, more continually learning — in short, more intelligent! Laws and regulations can have a defining role in industries, set powerful precedents, and can sometimes hold sway long after their intended lifespan. It is important that in crafting legislation now, the Commission considers, in consultation with high-level experts, the many forms that AI is likely to take, and the capabilities that it will at least potentially have in years to come.

The EU has already shown foresight and clear leadership in adopting meaningful regulations on other technology issues. We, the co-signed experts, support the Commission in taking a meaningful, future-oriented approach regarding the effects of AI systems on the rights and safety of EU citizens.

2019 Statement to the United Nations in Support of a Ban on LAWS
https://futureoflife.org/open-letters/2019-statement-to-the-united-nations-in-support-of-a-ban-on-laws/ (Thu, 28 Mar 2019)

The following statement was read on the floor of the United Nations during the March 2019 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons.

Thank you chair for your leadership.

The Future of Life Institute (FLI) is a research and outreach organization that works with scientists to mitigate existential risks facing humanity. FLI is deeply worried about an imprudent application of technology in warfare, especially with regard to emerging technologies in the field of artificial intelligence.

Let me give you an example: In just the last few months, researchers from various universities have shown how easy it is to trick image recognition software. For example, researchers at Auburn University found that if objects, like a school bus or a firetruck, were simply shifted into unnatural positions, so that they were upended or turned on their sides in an image, the image classifier would not recognize them. And this is just one of many, many examples of image recognition software failing because it does not understand the context within the image.

This is the same technology that would analyze and interpret data picked up by the sensors of an autonomous weapons system. It’s not hard to see how quickly image recognition software could misinterpret situations on the battlefield if it has to quickly assess everyday objects that have been upended or destroyed.

And the challenge of image recognition is only one of many examples of why an increasing number of people in AI research and in the tech field – that is, an increasing number of the people who are most familiar with how the technology works, and how it can go wrong – are all saying that this technology cannot be used safely or fairly to select and engage a target. In the last few years, over 4,500 artificial intelligence and robotics researchers have called for a ban on lethal autonomous weapons, over 100 CEOs of prominent AI companies have called for a ban on lethal autonomous weapons, and over 240 companies and nearly 4,000 people have pledged to never develop lethal autonomous weapons.

But as we turn our attention to human-machine teaming, we must also carefully consider research coming from the field of psychology and recognize the limitations there as well. I’m sure everyone in this room has had a beneficial personal experience working with artificial intelligence. But when under extreme pressure, as in life and death situations, psychologists find that humans become overly reliant on technology. In one study at Georgia Tech, students were taking a test alone in a room when a fire alarm went off. The students had the choice of leaving through a clearly marked exit that was right by them, or following a robot that was guiding them away from the exit. Almost every student followed the robot away from the safe exit. In fact, even when the students had been warned in advance that the robot couldn’t be trusted, they still followed it away from the exit.

As the delegate from Costa Rica mentioned yesterday, the New York Times has reported that pilots on the Boeing 737 Max had only 40 seconds to fix the malfunctioning automated software on the plane. These accidents represent tragic examples of how difficult it can be for a human to correct an autonomous system at the last minute if something has gone wrong.

Meaningful human control is something we must strive for, but as our colleagues from ICRAC said yesterday, “If states wanted genuine meaningful human control of weapons systems, they would not be using autonomous weapons systems.”

I want to be clear. Artificial intelligence will be incredibly helpful for militaries, and militaries should move to adopt systems that can be implemented safely in areas such as improving the situational awareness of the military personnel who would be in the loop, logistics, and defense. But we cannot allow algorithms to make the decision to harm a human – they simply cannot be trusted, and we have no reason to believe they will be trustworthy anytime soon. Given the incredible pace at which the technology is advancing, thousands of AI researchers from around the world call with great urgency for a ban on lethal autonomous weapons.

There is a strong sense in the science and technology community that only a binding legal instrument can ensure continued research and development of beneficial civilian applications without the endeavor being tainted by the spectre of lethal algorithms. We thus call on states to take real leadership on this issue! We must move to negotiate a legally binding instrument that will ensure algorithms are not allowed to make the decision – or to unduly influence the decision — to harm or kill a human.

Thank you.

2018 Statement to United Nations on Behalf of LAWS Open Letter Signatories
https://futureoflife.org/open-letter/statement-to-united-nations-on-behalf-of-laws-open-letter-signatories/ (Tue, 04 Sep 2018)

The following statement was read on the floor of the United Nations during the August 2018 CCW meeting, in which delegates discussed a possible ban on lethal autonomous weapons. No conclusions were reached at this meeting.

Thank you, Mr. Chair, and I thank the Chair for his excellent leadership during this meeting. I’m grateful for the opportunity to share comments on behalf of the Future of Life Institute.

First, I read the following on behalf of the nearly 4,000 AI and robotics researchers and scientists from around the world who have called on the United Nations to move forward with negotiations toward a legally binding instrument on lethal autonomous weapons.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Second, on behalf of 137 CEOs of AI and robotics companies around the world, and in light of the rapid progress we’re seeing in artificial intelligence, I add:

We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Finally, I would add that nearly 240 AI-related organizations and over 3,000 individuals have taken their concerns about LAWS a step further, and they have pledged that they will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

Thousands of artificial intelligence researchers around the world are calling on states to begin negotiations toward a legally binding instrument regarding LAWS, and we are happy to do all we can to help clarify technical issues surrounding delegates’ concerns about definitions and meaningful human control.

Thank you.

UN Ban on Nuclear Weapons Open Letter
https://futureoflife.org/open-letter/nuclear-open-letter/ (Tue, 19 Jun 2018)

An Open Letter from Scientists in Support of the UN Nuclear Weapons Negotiations
Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created. We scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought. Individual explosions can obliterate cities, radioactive fallout can contaminate regions, and a high-altitude electromagnetic pulse may cause mayhem by frying electrical grids and electronics across a continent. The most horrible hazard is a nuclear-induced winter, in which the fires and smoke from as few as a thousand detonations might darken the atmosphere enough to trigger a global mini ice age with year-round winter-like conditions. This could cause a complete collapse of the global food system and apocalyptic unrest, potentially killing most people on Earth – even if the nuclear war involved only a small fraction of the roughly 14,000 nuclear weapons that today’s nine nuclear powers control. As Ronald Reagan said: “A nuclear war cannot be won and must never be fought.”

Unfortunately, such a war is more likely than one may hope, because it can start by mistake, miscalculation or terrorist provocation. There is a steady stream of accidents and false alarms that could trigger all-out war, and relying on never-ending luck is not a sustainable strategy. Many nuclear powers have larger nuclear arsenals than needed for deterrence, yet prioritize making them more lethal over reducing them and the risk that they get used.

But there is also cause for optimism. On March 27, 2017, an unprecedented process begins at the United Nations: most of the world’s nations convene to negotiate a ban on nuclear arms, to stigmatize them like biological and chemical weapons, with the ultimate goal of a world free of these weapons of mass destruction. We support this, and urge our national governments to do the same, because nuclear weapons threaten not merely those who have them, but all people on Earth.

If you have questions about this letter, please contact Max Tegmark.

Sources

* 1979 report by the US Government estimating that nuclear war would kill 28%-88% without including nuclear winter effects
* Electromagnetic pulse: p79 of US Army Report AD-A278230 (unclassified)
* Peer-reviewed 2007 nuclear winter calculation
* Estimate of current nuclear warhead inventory from Federation of American Scientists
* Timeline of nuclear close calls
* UN General Assembly Resolution to launch the above-mentioned negotiations

An Open Letter to the United Nations Convention on Certain Conventional Weapons
https://futureoflife.org/open-letter/autonomous-weapons-open-letter-2017/ (Sun, 20 Aug 2017)

As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.


The Principles – Signatories List
https://futureoflife.org/open-letter/principles-signatories/ (Wed, 11 Jan 2017)

Autonomous Weapons Open Letter: AI & Robotics Researchers – Signatories List
https://futureoflife.org/open-letter/awos-signatories/ (Tue, 09 Feb 2016)

Click here to view the Autonomous Weapons Open Letter for AI & Robotics Researchers.

Digital Economy Open Letter
https://futureoflife.org/open-letter/digital-economy-open-letter/ (Mon, 25 Jan 2016)

An open letter by a team of economists about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact. (Jun 4, 2015)

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
https://futureoflife.org/open-letter/ai-open-letter/ (Wed, 28 Oct 2015)

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

If you have questions about this letter, please contact Max Tegmark.
