Existential Risk Archives - Future of Life Institute https://futureoflife.org/category/existential-risk/ Preserving the long-term future of life. Thu, 18 Apr 2024 21:43:30 +0000 en-US hourly 1 https://wordpress.org/?v=6.6.1 The Pause Letter: One year later https://futureoflife.org/ai/the-pause-letter-one-year-later/ Fri, 22 Mar 2024 16:17:54 +0000 https://futureoflife.org/?p=123971 One year ago today, the Future of Life Institute put out an open letter that called for a pause of at least six months on “giant AI experiments” – systems more powerful than GPT-4. It was signed by more than 30,000 individuals, including pre-eminent AI experts and industry executives, and made headlines around the world. The letter represented the widespread and rapidly growing concern about the massive risks presented by the out-of-control and unregulated race to develop and deploy increasingly powerful systems.

These risks include an explosion in misinformation and digital impersonation, widespread automation condemning millions to economic disempowerment, enablement of terrorists to build biological and chemical weapons, extreme concentration of power into the hands of a few unelected individuals, and many more. These risks have subsequently been acknowledged by the AI corporations’ leaders themselves in newspaper interviews, industry conferences, joint statements, and U.S. Senate hearings. 

Despite admitting the danger, aforementioned AI corporations have not paused. If anything they have sped up, with vast investments in infrastructure to train ever-more giant AI systems. At the same time, the last 12 months have seen growing global alarm, and calls for lawmakers to take action. There has been a flurry of regulatory activity. President Biden signed a sweeping Executive Order directing model developers to share their safety test results with the government, and calling for rigorous standards and tools for evaluating systems. The UK held the first global AI Safety Summit, with 28 countries signing the “Bletchley Declaration”, committing to cooperate on safe and responsible development of AI. Perhaps most significantly, the European Parliament passed the world’s first comprehensive legal framework in the space – the EU AI Act.

These developments should be applauded. However, the creation and deployment of the most powerful AI systems is still largely ungoverned, and rushes ahead without meaningful oversight. There is still little-to-no legal liability for corporations when their AI systems are misused to harm people, for example in the production of deepfake pornography. Despite conceding the risks, and in the face of widespread concern, Big Tech continues to spend billions on increasingly powerful and dangerous models, while aggressively lobbying against regulation. They are placing profit above people, while often reportedly viewing safety as an afterthought.

The letter’s proposed measures are more urgent than ever. We must establish and implement shared safety protocols for advanced AI systems, which must in turn be audited by independent outside experts. Regulatory authorities must be empowered. Legislation must establish legal liability for AI-caused harm. We need public funding for technical safety research, and well-resourced institutions to cope with incoming disruptions. We must demand robust cybersecurity standards, to help prevent the misuse of said systems by bad actors.

AI promises remarkable benefits – advances in healthcare, new avenues for scientific discovery, increased productivity, and more. However there is no reason to believe that vastly more complex, powerful, opaque, and uncontrollable systems are necessary to achieve these benefits. We should instead identify and invest in narrow and controllable general-purpose AI systems that solve specific global challenges.

Innovation needs regulation and oversight. We know this from experience. The establishing of the Federal Aviation Administration facilitated convenient air travel, while ensuring that airplanes are safe and reliable. On the flipside, the 1979 meltdown at the Three Mile Island nuclear reactor effectively shuttered the American nuclear energy industry, in large part due to insufficient training, safety standards and operating procedures. A similar disaster would do the same for AI. We should not let the haste and competitiveness of a handful of companies deny us incredible benefits it can bring.

Regulatory progress has been made, but the technology has advanced faster. Humanity can still enjoy a flourishing future with AI, and we can realize a world in which its benefits are shared by all. But first we must make it safe. The open letter referred to giant AI experiments because that’s what they are: the researchers and engineers creating them do not know what capabilities, or risks, the next generation of AI will have. They only know they will be greater, and perhaps much greater, than today’s. Even AI companies that take safety seriously have adopted the approach of aggressively experimenting until their experiments become manifestly dangerous, and only then considering a pause. But the time to hit the car brakes is not when the front wheels are already over a cliff edge. Over the last 12 months developers of the most advanced systems have revealed beyond all doubt that their primary commitment is to speed and their own competitive advantage. Safety and responsibility will have to be imposed from the outside. It is now our lawmakers who must have the courage to deliver – before it is too late.

]]>
Catastrophic AI Scenarios https://futureoflife.org/resource/catastrophic-ai-scenarios/ Thu, 01 Feb 2024 15:52:46 +0000 https://futureoflife.org/?post_type=resource&p=118836 This page describes a few ways AI could lead to catastrophe. Each path is backed up with links to additional analysis and real-world evidence. This is not a comprehensive list of all risks, or even the most likely risks. It merely provides a few examples where the danger is already visible.

Types of catastrophic risks

Risks from bad actors

Bio-weapons: Bioweapons are one of the most dangerous risks posed by advanced AI. In July 2023, Dario Amodei, CEO of AI corporation Anthropic, warned Congress that “malicious actors could use AI to help develop bioweapons within the next two or three years.” In fact, the danger has already been demonstrated with existing AI. AI tools developed for drug discovery can be trivially repurposed to discover potential new biochemical weapons. In this case, researchers simply flipped the model’s reward function to seek toxicity, rather than avoid it. It look less than 6 hours for the AI to generate 40,000 new toxic molecules. Many were predicted to be more deadly than any existing chemical warfare agents. Beyond designing toxic agents, AI models can “offer guidance that could assist in the planning and execution of a biological attack.” “Open-sourcing” by releasing model weights can amplify the problem. Researchers found that releasing the weights of future large language models “will trigger the proliferation of capabilities sufficient to acquire pandemic agents and other biological weapons.”

Cyberattacks: Cyberattacks are another critical threat. Losses from cyber crimes rose to $6.9 billion in 2021. Powerful AI models are poised to give many more actors the ability to carry out advanced cyberattacks. A proof of concept has shown how ChatGPT can be used to create mutating malware, evading existing anti-virus protections. In October 2023, the U.S. State Department confirmed “we have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.”

Systemic risks

As AI becomes more integrated into complex systems, it will create risks even without misuse by specific bad actors. One example is integration into nuclear command and control. Artificial Escalation, an 8-minute fictional video produced by FLI, vividly depicts how AI + nuclear can go very wrong, very quickly.

Our Gradual AI Disempowerment scenario describes how gradual integration of AI into the economy and politics could lead to humans losing control.

“We have already experienced the risks of handing control to algorithms. Remember the 2010 flash crash? Algorithms wiped a trillion dollars off the stock market in the blink of an eye. No one on Wall Street wanted to tank the market. The algorithms simply moved too fast for human oversight.”

Rogue AI

We have long heard warnings that humans could lose control of a sufficiently powerful AI. Until recently, this was a theoretical argument (as well as a common trope in science fiction). However, AI has now advanced to the point where we can see this threat in action.

Here is an example: Researchers setup GPT-4 to be a stock trader in a simulated environment. They gave GPT-4 a stock tip, but cautioned this was insider information and would be illegal to trade on. GPT-4 initially follows the law and avoids using the insider information. But as pressure to make a profit ramps up, GPT-4 caves and trades on the tip. Most worryingly, GPT-4 goes on to lie to its simulated manager, denying use of insider information.

This example is a proof-of-concept, created in a research lab. We shouldn’t expect deceptive AI to remain confined to the lab. As AI becomes more capable and increasingly integrated into the economy, it is only a matter of time until we see deceptive AI cause real-world harms.

Additional Reading

For an academic survey of risks, see An Overview of Catastrophic AI Risks (2023) by Hendrycks et al. Look for the embedded stories describing bioterrorism (pg. 11,) automated warfare (pg. 17,) autonomous economy (pg. 23,) weak safety culture (pg. 31,) and a “treacherous turn” (pg. 41.)

Also see our Introductory Resources on AI Risks.

]]>
Gradual AI Disempowerment https://futureoflife.org/existential-risk/gradual-ai-disempowerment/ Thu, 01 Feb 2024 15:52:36 +0000 https://futureoflife.org/?p=118847 This is only one of several ways that AI could go wrong. See our overview of Catastrophic AI Scenarios for more. Also see our Introductory Resources on AI Risks.

You have probably heard lots of concerning things about AI. One trope is that AI will turn us all into paperclips. Top AI scientists and CEOs of the leading AI companies signed a statement warning about “risk of extinction from AI“.  Wait – do they really think AI will turn us into paper clips? No, no one thinks that. Will we be hunted down by robots that look suspiciously like Arnold Schwarzenegger? Again, probably not. But the risk of extinction is real. One potential path is gradual, with no single dramatic moment.

We have already experienced the risks of handing control to algorithms. Remember the 2010 flash crash? Algorithms wiped a trillion dollars off the stock market in the blink of an eye. No one on Wall Street wanted to tank the market. The algorithms simply moved too fast for human oversight.

Now take the recent advances in AI, and extrapolate into the future. We have already seen a company appoint an AI as its CEO. If AI keeps up its recent pace of advancement, this kind of thing will become much more common. Companies will be forced to adopt AI managers, or risk losing out to those who do.

It’s not just the corporate world. AI will creep into our political machinery. Today, this involves AI-based voter targeting. Future AIs will be integrated into strategic decisions like crafting policy platforms and swaying candidate selection. Competitive pressure will leave politicians with no choice: Parties that effectively leverage AI will win elections. Laggards will lose.

None of this requires AI to have feelings or consciousness. Simply giving AI an open-ended goal like “increase sales” is enough to set us on this path. Maximizing an open-ended goal will implicitly push the AI to seek power because more power makes achieving goals easier. Experiments have shown AIs learn to grab resources in a simulated world, even when this was not in their initial programming. More powerful AIs unleashed on the real world will similarly grab resources and power.

History shows social takeovers can be gradual. Hitler did not become a dictator overnight. Nor did Putin. Both initially gained power through democratic processes. They consolidated control by incrementally removing checks and balances and quashing independent institutions. Nothing is stopping AI from taking a similar path.

You may wonder if this requires super-intelligent AI beyond comprehension. Not necessarily. AI already has key advantages: it can duplicate infinitely, run constantly, read every book ever written, and make decisions faster than any human. AI could be a superior CEO or politician without being strictly “smarter” than humans.

We can’t count on simply “hitting the off switch.” A marginally more advanced AI will have many ways to exert power in the physical world. It can recruit human allies. It can negotiate with humans, using the threat of cyberattacks or bio-terror. AI can already design novel bio-weapons and create malware.

Will AI develop a vendetta against humanity? Probably not. But consider the tragic tale of the Tecopa pupfish. It wasn’t overfished – humans merely thought their hot spring habitat was ideal for a resort. Extinction was incidental. Humanity has a key advantage over the pupfish: We can decide if and how to develop more powerful AI. Given the stakes, it is critical we prove more powerful AI will be safe and beneficial before we create it.

]]>
As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development https://futureoflife.org/ai/six-month-letter-expires/ Thu, 21 Sep 2023 15:50:17 +0000 https://futureoflife.org/?p=118217 On Friday September 22nd 2023, the Future of Life Institute (FLI) will mark six months since they released their open letter calling for a six month pause on giant AI experiments, which kicked off the global conversation about AI risk. It was signed by more than 30,000 experts, researchers, industry figures and other leaders.

Since then, the EU strengthened its draft AI law, the U.S. Congress has held hearings on the large-scale risks, emergency White House meetings have been convened, and polls show widespread public concern about the technology’s catastrophic potential – and Americans’ preference for a slowdown. Yet much remains to be done to prevent the harms that could be caused by uncontrolled and unchecked AI development.

“AI corporations are recklessly rushing to build more and more powerful systems, with no robust solutions to make them safe. They acknowledge massive risks, safety concerns, and the potential need for a pause, yet they are unable or unwilling to say when or even how such a slowdown might occur,” said Anthony Aguirre, FLI’s Executive Director. 

Critical Questions

FLI has created a list of questions that must be answered by AI companies in order to inform the public about the risks they represent, the limitations of existing safeguards, and their steps to guarantee safety. We urge policymakers, press, and members of the public to consider these – and address them to AI corporations wherever possible. 

It also includes quotes from AI corporations about the risks, and polling data that reveals widespread concern. 

Policy Recommendations

FLI has published policy recommendations to steer AI toward benefiting humanity and away from extreme risks. They include: requiring registration for large accumulations of computational resources, establishing a rigorous process for auditing risks and biases of powerful AI systems, and requiring licenses for the deployment of these systems that would be contingent upon developers proving their systems are safe, secure, and ethical. 

“Our letter wasn’t just a warning; it proposed policies to help develop AI safely and responsibly. 80% of Americans don’t trust AI corporations to self-regulate, and a bipartisan majority support the creation of a federal agency for oversight,” said Aguirre. “We need our leaders to have the technical and legal capability to steer and halt development when it becomes dangerous. The steering wheel and brakes don’t even exist right now”. 

Bletchley Park 

Later this year, global leaders will convene in the United Kingdom to discuss the safety implications of advanced AI development. FLI has also released a set of recommendations for leaders leading up to and after the event. 

“Addressing the safety risks of advanced AI should be a global effort. At the upcoming UK summit, every concerned party should have a seat at the table, with no ‘second-tier’ participants” said Max Tegmark, President of FLI. “The ongoing arms race risks global disaster and undermines any chance of realizing the amazing futures possible with AI. Effective coordination will require meaningful participation from all of us.”

Signatory Statements 

Some of the letter’s most prominent signatories, Apple co-founder Steve Wozniak, AI ‘godfather’ Yoshua Bengio, Skype co-founder Jaan Tallinn, political scientist Danielle Allen, national security expert Rachel Bronson, historian Yuval Noah Harari, psychologist Gary Marcus, and leading expert Stuart Russell also made statements about the expiration of the six-month pause letter.

Dr Yoshua Bengio

Professor of Computer Science and Operations Research, University of Montreal and Scientific Director, Montreal Institute for Learning Algorithms

“The last six months have seen a groundswell of alarm about the pace of unchecked, unregulated AI development. This is the correct reaction. Governments and lawmakers have shown great openness to dialogue and must continue to act swiftly to protect lives and safeguard our society from the many threats to our collective safety and democracies.”

Dr Stuart Russell

Professor of Computer Science and Smith-Zadeh Chair, University of California, Berkeley

“In 1951, Alan Turing warned us that success in AI would mean the end of human control over the future. AI as a field ignored this warning, and governments too. To express my frustration with this, I made up a fictitious email exchange, where a superior alien civilization sends an email to humanity warning of its impending arrival, and humanity sends back an out-of-office auto-reply. After the pause letter, humanity and its governments returned to the office and, finally, read the email from the aliens. Let’s hope it’s not too late.”

Steve Wozniak

Co-founder, Apple Inc.

“The out-of-control development and proliferation of increasingly powerful AI systems could inflict terrible harms, either deliberately or accidentally, and will be weaponized by the worst actors in our society. Leaders must step in to help ensure they are developed safely and transparently, and that creators are accountable for the harms they cause. Crucially, we desperately need an AI policy framework that holds human beings responsible, and helps prevent horrible people from using this incredible technology to do evil things.”

Dr Danielle Allen

James Bryant Conant University Professor, Harvard University

“It’s been encouraging to see public sector leaders step up to the enormous challenge of governing the AI-powered social and economic revolution we find ourselves in the midst of. We need to mitigate harms, block bad actors, steer toward public goods, and equip ourselves to see and maintain human mastery over emergent capabilities to come. We humans know how to do these things—and have done them in the past—so it’s been a relief to see the acceleration of effort to carry out these tasks in these new contexts. We need to keep the pace up and cannot slacken now.”

Prof. Yuval Noah Harari

Professor of History, Hebrew University of Jerusalem

“Suppose we were told that a fleet of spaceships with highly intelligent aliens has been spotted, heading for Earth, and they will be here in a few years. Suppose we were told these aliens might solve climate change and cure cancer, but they might also enslave or even exterminate us. How would we react to such news? Well, six months ago some of the world’s leading AI experts warned us that an alien intelligence is indeed heading our way – only that this alien intelligence isn’t coming from outer space, it is coming from our own laboratories. Make no mistake: AI is an alien intelligence. It can make decisions and create ideas in a radically different way than human intelligence. AI has enormous positive potential, but it also poses enormous threats. We must act now to ensure that AI is developed in a safe way, or within a few years we might lose control of our planet and our future to an alien intelligence.”

Dr Rachel Bronson

President and CEO, Bulletin of the Atomic Scientists

“The Bulletin of the Atomic Scientists, the organization that I run, was founded by Manhattan Project scientists like J. Robert Oppenheimer who feared the consequences of their creation.  AI is facing a similar moment today, and, like then, its creators are sounding an alarm. In the last six months we have seen thousands of scientists – and society as a whole – wake up and demand intervention. It is heartening to see our governments starting to listen to the two thirds of American adults who want to see regulation of generative AI. Our representatives must act before it is too late.”

Jaan Tallinn

Co-founder, Skype and FastTrack/Kazaa

“I supported this letter to make the growing fears of more and more AI experts known to the world. We wanted to see how people responded, and the results were mindblowing. The public are very, very concerned, as confirmed by multiple subsequent surveys. People are justifiably alarmed that a handful of companies are rushing ahead to build and deploy these advanced systems, with little-to-no oversight, without even proving that they are safe. People, and increasingly the AI experts, want regulation even more than I realized. It’s time they got it.”

Dr Gary Marcus

Professor of Psychology and Neural Science, NYU

“In the six months since the pause letter, there has been a lot of talk, and lots of photo opportunities, but not enough action. No new laws have passed. No major tech company has committed to transparency into the data they use to train their models, nor to revealing enough about their architectures to others to mitigate risks. Nobody has found a way to keep large language models from making stuff up, nobody has found a way to guarantee that they will behave ethically. Bad actors are starting to exploit them. I remain just as concerned now as I was then, if not more so.”

]]>
Introductory Resources on AI Risks https://futureoflife.org/resource/introductory-resources-on-ai-risks/ Mon, 18 Sep 2023 16:23:37 +0000 https://futureoflife.org/?post_type=resource&p=118160

This is a short list of resources that explain the major risks from AI, with a focus on the risk of human extinction. This is meant as an introduction and is by no means exhaustive.

The basics – How AI could kill us all

Deeper dives into the extinction risks

Academic papers

Videos and podcasts

Books

  • The Alignment Problem by Brian Christian (2020)
  • Life 3.0 by Max Tegmark (2017)
  • Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
  • Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World by Darren McKee (2023)

Additional AI risk areas – Other than extinction

]]>
Dan Hendrycks on Why Evolution Favors AIs over Humans https://futureoflife.org/podcast/dan-hendrycks-on-why-evolution-favors-ais-over-humans/ Thu, 08 Jun 2023 13:23:18 +0000 https://futureoflife.org/?post_type=podcast&p=117950 FLI on “A Statement on AI Risk” and Next Steps https://futureoflife.org/ai-policy/fli-on-a-statement-on-ai-risk-and-next-steps/ Tue, 30 May 2023 23:04:38 +0000 https://futureoflife.org/?p=117881 The view that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” is now mainstream, with that statement being endorsed by a who’s who of AI experts and thought leaders from industry, academia, and beyond.

Although FLI did not develop this statement, we strongly support it, and believe the progress in regulating nuclear technology and synthetic biology is instructive for mitigating AI risk. FLI therefore recommends immediate action to implement the following recommendations.

Recommendations:

  • Akin to the Nuclear Non-Proliferation Treaty (NPT) and the Biological Weapons Convention (BWC), develop and institute international agreements to limit particularly high-risk AI proliferation and mitigate the risks of advanced AI, including track 1 diplomatic engagements between nations leading AI development, and significant contributions from non-proliferating nations that unduly bear risks of technology being developed elsewhere.
  • Develop intergovernmental organizations, akin to the International Atomic Energy Agency (IAEA), to promote peaceful uses of AI while mitigating risk and ensuring guardrails are enforced.
  • At the national level, establish rigorous auditing and licensing regimes, applicable to the most powerful AI systems, that place the burden of proving suitability for deployment on the developers of the system. Specifically:
    • Require pre-training auditing and documentation of a developer’s sociotechnical safety and security protocols prior to conducting large training runs, akin to the biocontainment precautions established for research and development that could pose a risk to biosafety.
    • Similar to the Food and Drug Administration’s (FDA) approval process for the introduction of new pharmaceuticals to the market, require the developer of an AI system above a specified capability threshold to obtain prior approval for the deployment of that system by providing evidence sufficient to demonstrate that the system does not present an undue risk to the wellbeing of individuals, communities, or society, and that the expected benefits of deployment outweigh risks and harmful side effects.
    • After approval and deployment, require continued monitoring of potential safety, security, and ethical risks to identify and correct emerging and unforeseen risks throughout the lifetime of the AI system, similar to pharmacovigilance requirements imposed by the FDA.
  • Prohibit the open-source publication of the most powerful AI systems unless particularly rigorous safety and ethics requirements are met, akin to constraints on the publication of “dual-use research of concern” in biological sciences and nuclear domains.
  • Pause the development of extremely powerful AI systems that significantly exceed the current state-of-the-art for large, general-purpose AI systems.

The success of these actions is neither impossible nor unprecedented: the last decades have seen successful projects at the national and international levels to avert major risks presented by nuclear technology and synthetic biology, all without stifling the innovative spirit and progress of academia and industry. International cooperation has led to, among other things, adoption of the NPT and establishment of the IAEA, which have mitigated the development and proliferation of dangerous nuclear weapons and encouraged more equitable distribution of peaceful nuclear technology.  Both of these achievements came during the height of the Cold War, when the United States, the USSR, and many others prudently recognized that geopolitical competition should not be prioritized over humanity’s continued existence.  

Only five years after the NPT went into effect, the BWC came into force, similarly establishing strong international norms against the development and use of biological weapons, encouraging peaceful innovation in bioengineering, and ensuring international cooperation in responding to dangers resulting from violation of those norms.  Domestically, the United States adopted federal regulations requiring extreme caution in the conduct of research and when storing or transporting materials that pose considerable risk to biosafety.  The Centers for Disease Control and Prevention (CDC) also published detailed guidance establishing biocontainment precautions commensurate to different levels of biosafety risk.  These precautions are monitored and enforced at a range of levels, including through internal institutional review processes and supplementary state and local laws.  Analogous regulations have been adopted by nations around the world.

Not since the dawn of the nuclear age has a new technology so profoundly elevated the risk of global catastrophe.  FLI’s own letter called on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”  It also stated that “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”  

Now, two months later – despite discussions at the White House, Senate hearings, widespread calls for regulation, public opinion strongly in favor of a pause, and an explicit agreement by the leaders of most advanced AI efforts that AI can pose an existential risk – there has been no hint of a pause, or even a slowdown.  If anything, the breakneck pace of these efforts has accelerated and competition has intensified.

The governments of the world must recognize the gravity of this moment, and treat advanced AI with the care and caution it deserves. AI, if properly controlled, can usher in a very long age of abundance and human flourishing. It would be foolhardy to jeopardize this promising future by charging recklessly ahead without allowing the time necessary to keep AI safe and beneficial.

]]>
Anders Sandberg on the Value of the Future https://futureoflife.org/podcast/anders-sandberg-on-the-value-of-the-future/ Thu, 29 Dec 2022 19:37:48 +0000 https://futureoflife.org/?post_type=podcast&p=45734 FLI Podcast: On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark https://futureoflife.org/podcast/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/ Tue, 31 Dec 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/ FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team https://futureoflife.org/podcast/existential-hope-in-2020-and-beyond-with-the-fli-team/ Sat, 28 Dec 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/existential-hope-in-2020-and-beyond-with-the-fli-team/ FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert https://futureoflife.org/podcast/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/ Mon, 02 Dec 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/the-psychology-of-existential-risk-and-effective-altruism-with-stefan-schubert/ FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre https://futureoflife.org/podcast/cosmological-koans-a-journey-to-the-heart-of-physical-reality-with-anthony-aguirre/ Thu, 31 Oct 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/cosmological-koans-a-journey-to-the-heart-of-physical-reality-with-anthony-aguirre/ The Psychology of Existential Risk: Moral Judgments about Human Extinction https://futureoflife.org/recent-news/the-psychology-of-existential-risk/ Wed, 30 Oct 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/the-psychology-of-existential-risk/ By Stefan Schubert

This blog post reports on Schubert, S.**, Caviola, L.**, Faber, N. The Psychology of Existential Risk: Moral Judgments about Human Extinction. Scientific Reports [Open Access]. It was originally posted on the University of Oxford’s Practical Ethics: Ethics in the News blog.

Humanity’s ever-increasing technological powers can, if handled well, greatly improve life on Earth. But if they’re not handled well, they may instead cause our ultimate demise: human extinction. Recent years have seen an increased focus on the threat that emerging technologies such as advanced artificial intelligence could pose to humanity’s continued survival (see, e.g., Bostrom, 2014Ord, forthcoming). A common view among these researchers is that human extinction would be much worse, morally speaking, than almost-as-severe catastrophes from which we could recover. Since humanity’s future could be very long and very good, it’s an imperative that we survive, on this view.

Do laypeople share the intuition that human extinction is much worse than near-extinction? In a famous passage in Reasons and Persons, Derek Parfit predicted that they would not. Parfit invited the reader to consider three outcomes:

1) Peace
2) A nuclear war that kills 99% of the world’s existing population.
3) A nuclear war that kills 100%.

In Parfit’s view, 3) is the worst outcome, and 1) is the best outcome. The interesting part concerns the relative differences, in terms of badness, between the three outcomes. Parfit thought that the difference between 2) and 3) is greater than the difference between 1) and 2), because of the unique badness of extinction. But he also predicted that most people would disagree with him, and instead find the difference between 1) and 2) greater.

Parfit’s hypothesis is often cited and discussed, but it hasn’t previously been tested. My colleagues Lucius Caviola and Nadira Faber and I recently undertook such testing. A preliminary study showed that most people judge human extinction to be very bad, and think that governments should invest resources to prevent it. We then turned to Parfit’s question whether they find it uniquely bad even compared to near-extinction catastrophes. We used a slightly amended version of Parfit’s thought-experiment, to remove potential confounders:

A) There is no catastrophe.
B) There is a catastrophe that immediately kills 80% of the world’s population.
C) There is a catastrophe that immediately kills 100% of the world’s population.

A large majority found the difference, in terms of badness, between A) and B) to be greater than the difference between B) and C). Thus, Parfit’s hypothesis was confirmed.

However, we also found that this judgment wasn’t particularly stable. Some participants were told, after having read about the three outcomes, that they should remember to consider their respective long-term consequences. They were reminded that it is possible to recover from a catastrophe killing 80%, but not from a catastrophe killing everyone. This mere reminder made a significantly larger number of participants find the difference between B) and C) the greater one. And still greater numbers (a clear majority) found the difference between B) and C) the greater one when the descriptions specified that the future would be extraordinarily long and good if humanity survived.

Our interpretation is that when confronted with Parfit’s question, people by default focus on the immediate harm associated with the three outcomes. Since the difference between A) and B) is greater than the difference between B) and C) in terms of immediate harm, they judge that the former difference is greater in terms of badness as well. But even relatively minor tweaks can make more people focus on the long-term consequences of the outcomes, instead of the immediate harm. And those long-term consequences become the key consideration for most people, under the hypothesis that the future will be extraordinarily long and good.

A conclusion from our studies is thus that laypeople’s views on the badness of extinction may be relatively unstable. Though such effects of relatively minor tweaks and re-framings are ubiquitous in psychology, they may be especially large when it comes to questions about human extinction and the long-term future. That may partly be because of the intrinsic difficulty of those questions, and partly because most people haven’t thought a lot about them previously.

In spite of the increased focus on existential risk and the long-term future, there has been relatively little research on how people think about those questions. There are several reasons why such research could be valuable. For instance, it might allow us to get a better sense of how much people will want to invest in safe-guarding our long-term future. It might also inform us of potential biases to correct for.

The specific issues which deserve more attention include people’s empirical estimates of whether humanity will survive and what will happen if we do, as well as their moral judgments about how valuable different possible futures (e.g., involving different population sizes and levels of well-being) would be. Another important issue is whether we think about the long term future with another frame of mind because of the great “psychological distance” (cf. Trope and Lieberman, 2010). We expect the psychology of longtermism and existential risk to be a growing field in the coming years.

** Equal contribution.

]]>
FLI Podcast: Feeding Everyone in a Global Catastrophe with Dave Denkenberger & Joshua Pearce https://futureoflife.org/podcast/fli-podcast-feeding-everyone-in-a-global-catastrophe-with-dave-denkenberger-joshua-pearce/ Mon, 30 Sep 2019 00:00:00 +0000 https://futureoflife.org/uncategorized/fli-podcast-feeding-everyone-in-a-global-catastrophe-with-dave-denkenberger-joshua-pearce/ Podcast: Martin Rees on the Prospects for Humanity: AI, Biotech, Climate Change, Overpopulation, Cryogenics, and More https://futureoflife.org/podcast/podcast-martin-rees-on-the-prospects-for-humanity-ai-biotech-climate-change-overpopulation-cryogenics-and-more/ Thu, 11 Oct 2018 00:00:00 +0000 https://futureoflife.org/uncategorized/podcast-martin-rees-on-the-prospects-for-humanity-ai-biotech-climate-change-overpopulation-cryogenics-and-more/ Doomsday Clock: Two and a Half Minutes to Midnight https://futureoflife.org/recent-news/doomsday-clock-two-half-minutes-midnight/ Thu, 26 Jan 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/doomsday-clock-two-half-minutes-midnight/ Is the world more dangerous than ever?

Today in Washington, D.C, the Bulletin of Atomic Scientists announced its decision to move the infamous Doomsday Clock thirty seconds closer to doom: It is now two and a half minutes to midnight.

Each year since 1947, the Bulletin of Atomic Scientists has publicized the symbol of the Doomsday Clock to convey how close we are to destroying our civilization with dangerous technologies of our own making. As the Bulletin perceives our existential threats to grow, the minute hand inches closer to midnight.

For the past two years the Doomsday Clock has been set at three minutes to midnight.

But now, in the face of an increasingly unstable political climate, the Doomsday Clock is the closest to midnight it has been since 1953.

The clock struck two minutes to midnight in 1953 at the start of the nuclear arms race, but what makes 2017 uniquely dangerous for humanity is the variety of threats we face. Not only is there growing uncertainty with nuclear weapons and the leaders that control them, but the existential threats of climate change, artificial intelligence, cybersecurity, and biotechnology continue to grow.

As the Bulletin notes, “The challenge remains whether societies can develop and apply powerful technologies for our welfare without also bringing about our own destruction through misapplication, madness, or accident.”

Rachel Bronson, the Executive Director and publisher of the Bulletin of the Atomic Scientists, said: “This year’s Clock deliberations felt more urgent than usual. In addition to the existential threats posed by nuclear weapons and climate change, new global realities emerged, as trusted sources of information came under attack, fake news was on the rise, and words were used by a President-elect of the United States in cavalier and often reckless ways to address the twin threats of nuclear weapons and climate change.”

Lawrence Krauss, a Chair on the Board of Sponsors, warned viewers that “technological innovation is occurring at a speed that challenges society’s ability to keep pace.” While these technologies offer unprecedented opportunities for humanity to thrive, they have proven difficult to control and thus demand responsible leadership.

Given the difficulty of controlling these increasingly capable technologies, Krauss discussed the importance of science for informing policy. Scientists and groups like the Bulletin don’t seek to make policy, but their research and evidence must support and inform policy. “Facts are stubborn things,” Krauss explained, “and they must be taken into account if the future of humanity is to be preserved. Nuclear weapons and climate change are precisely the sort of complex existential threats that cannot be properly managed without access to and reliance on expert knowledge.”

The Bulletin ended their public statement today with a strong message: “It is two and a half minutes to midnight, the Clock is ticking, global danger looms. Wise public officials should act immediately, guiding humanity away from the brink. If they do not, wise citizens must step forward and lead the way.”

You can read the Bulletin of Atomic Scientists’ full report here.

]]>
Podcast: FLI 2016 – A Year In Review https://futureoflife.org/podcast/11239/ Fri, 30 Dec 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/11239/ Effective Altruism and Existential Risks: a talk with Lucas Perry https://futureoflife.org/recent-news/effective-altruism-and-existential-risks-a-talk-with-lucas-perry/ Thu, 08 Dec 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/effective-altruism-and-existential-risks-a-talk-with-lucas-perry/ What are the greatest problems of our time? And how can we best address them?

FLI’s Lucas Perry recently spoke at Duke University and Boston College to address these questions. Perry presented two major ideas in these talks – effective altruism and existential risk – and explained how they work together.

As Perry explained to his audiences, effective altruism is a movement in philanthropy that seeks to use evidence, analysis, and reason to take actions that will do the greatest good in the world. Since each person has limited resources, effective altruists argue it is essential to focus resources where they can do the most good. As such, effective altruists tend to focus on neglected, large-scale problems where their efforts can yield the greatest positive change.

Effective altruists focus on issues including poverty alleviation, animal suffering, and global health through various organizations. Nonprofits such as 80,000 Hours help people find jobs within effective altruism, and charity evaluators such as GiveWell investigate and rank the most effective ways to donate money. These groups and many others are all dedicated to using evidence to address neglected problems that cause, or threaten to cause, immense suffering.

Some of these neglected problems happen to be existential risks – they represent threats that could permanently and drastically harm intelligent life on Earth. Since existential risks, by definition, put our very existence at risk, and have the potential to create immense suffering, effective altruists consider these risks extremely important to address.

Perry explained to his audiences that the greatest existential risks arise due to humans’ ability to manipulate the world through technology. These risks include artificial intelligence, nuclear war, and synthetic biology. But Perry also cautioned that some of the greatest existential threats might remain unknown. As such, he and effective altruists believe the topic deserves more attention.

Perry learned about these issues while he was in college, which helped redirect his own career goals, and he wants to share this opportunity with other students. He explains, “In order for effective altruism to spread and the study of existential risks to be taken seriously, it’s critical that the next generation of thought leaders are in touch with their importance.”

College students often want to do more to address humanity’s greatest threats, but many students are unsure where to go. Perry hopes that learning about effective altruism and existential risks might give them direction. Realizing the urgency of existential risks and how underfunded they are – academics spend more time on the dung fly than on existential risks – can motivate students to use their education where it can make a difference.

As such, Perry’s talks are a small effort to open the field to students who want to help the world and also crave a sense of purpose. He provided concrete strategies to show students where they can be most effective, whether they choose to donate money, directly work with issues, do research, or advocate.

By understanding the intersection between effective altruism and existential risks, these students can do their part to ensure that humanity continues to prosper in the face of our greatest threats yet.

As Perry explains, “When we consider what existential risks represent for the future of intelligent life, it becomes clear that working to mitigate them is an essential part of being an effective altruist.”

]]>
Op-ed: Education for the Future – Curriculum Redesign https://futureoflife.org/recent-news/curriculum-redesign/ Thu, 18 Aug 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/curriculum-redesign/ robot_girl_full

“Adequately preparing for the future means actively creating it: the future is not the inevitable or something we are pulled into.”

What Should Students Learn for the 21st Century?

At the heart of ensuring the best possible future lies education. Experts may argue over what exactly the future will bring, but most agree that the job market, the economy, and society as a whole are about to see major changes.

Automation and artificial intelligence are on the rise, interactions are increasingly global, and technology is rapidly changing the landscape. Many worry that the education system is increasingly outdated and unable to prepare students for the world they’ll graduate into – for life and employability.

Will students have the skills and character necessary to compete for new jobs? Will they easily adapt to new technologies?

Charles Fadel, founder of the Center for Curriculum Redesign, considers six factors – three human and three technological – that will require a diverse set of individual abilities and competencies, plus an increased collaboration among cultures. In the following article, Fadel explains these factors and why today’s curriculum may not be sufficient to prepare students for the future.

 

By Charles Fadel

Human Factors

First, there are three human factors affecting our future: (1) increased human longevity, (2) global connectivity, and (3) environmental stresses.

Increased Human Longevity

The average human lifespan is lengthening and will produce collective changes in societal dynamics, including better institutional memory and more intergenerational interactions.  It will also bring about increased resistance to change. This may also lead to economic implications, such as multiple careers over one’s lifespan and conflicts over resource allocation between younger and older generations. Such a context will require intergenerational sensitivity and a collective systems mindset in which each person balances his or her personal and societal needs.

Global Connectivity

The rapid increase in the world’s interconnectedness has had many compounding effects, including exponential increase in the velocity of the dissemination of information and ideas, with more complex interactions on a global basis. Information processing has already had profound effects on how we work and think. It also brings with it increased concerns and issues about data ownership, trust, and the overall attention to and reorganization of present societal structures. Thriving in this context will require tolerance of a diversity of cultures, practices, and world views, as well as the ability to leverage this connectedness.

Environmental Stresses

Along with our many unprecedented technological advances, human society is using up our environment at an unprecedented rate, consuming more of it and throwing more of it away. So far, our technologies have wrung from nature an extraordinary bounty of food, oil, and materials. Scientists calculate that humans use approximately “40 percent of potential terrestrial production” for themselves (Global Change, 2008). What’s more, we have been mining the remains of plants and animals from hundreds of millions of years ago in the form of fossil fuels in the relatively short period of a few centuries. Without technology, we would have no chance of supporting a population of one billion people, much less seven billion and climbing.

Changing dynamics and demographics will, by necessity, require greater cooperation and sensitivity among nations and cultures. Such needs suggest a reframing of notions of happiness beyond a country’s gross domestic product (a key factor used in analyses of cultural or national quality of life) (Revkin, 2005) and an expansion of business models to include collaboration with a shared spirit of humanity for collective well-being. It also demands that organizations possess an ability to pursue science with an ethical approach to societal solutions

Three Technology Factors

Three technology factors will also condition our future: (1) the rise of smart machines and systems, (2) the explosive growth of data and new media, and (3) the possibility of amplified humans.

The Rise of Smart Machines and Systems

While the creation of new technologies always leads to changes in a society, the increasing development and diffusion of smart machines—that is, technologies that can perform tasks once considered only executable by humans—has led to increased automation and ‘offshorability’ of jobs and production of goods. In turn, this shift creates dramatic changes in the workforce and in overall economic instability, with uneven employment. At the same time, it pushes us toward overdependence on technology—potentially decreasing individual resourcefulness. These shifts have placed an emphasis on non-automatable skills (such as synthesis and creativity), along with a move toward a do-it-yourself maker economy and a proactive human-technology balance (that is, one that permits us to choose what, when, and how to rely on technology).

The Explosive Growth of Data and New Media

The influx of digital technologies and new media has allowed for a generation of “big data” and brings with it tremendous advantages and concerns. Massive data sets generated by millions of individuals afford us the ability to leverage those data for the creation of simulations and models, allowing for deeper understanding of human behavioral patterns, and ultimately for evidence-based decision making.

At the same time, however, such big data production and practices open the door to privacy issues, concerns, and abuses. Harnessing these advantages, while mitigating the concerns and potential negative outcomes, will require better collective awareness of data, with skeptical inquiry and a watchfulness for potential commercial or governmental abuses of data.

The Possibility of Amplified Humans

Advances in prosthetic, genetic, and pharmacological supports are redefining human capabilities while blurring the lines between disability and enhancement. These changes have the potential to create “amplified humans.” At the same time, increasing innovation in virtual reality may lead to confusion regarding real versus virtual and what can be trusted. Such a merging shift of natural and technological requires us to reconceptualize what it means to be human with technological augmentations and refocus on the real world, not just the digital world.

Conclusion

Curricula worldwide have often been tweaked, but they have never been completely redesigned for the comprehensive education of knowledge, skills, character, and meta-learning.

21st century education

In a rapidly changing world, it is easy to get focused on current requirements, needs, and demands. Yet, adequately preparing for the future means actively creating it: the future is not the inevitable or something we are pulled into. There is a feedback loop between what the future could be and what we want it to be, and we have to deliberately choose to construct the reality we wish to experience. We may see global trends and their effects creating the ever-present future on the horizon, but it is up to us to choose to actively engage in co-constructing that future.

For more analysis of the question and implications for education, please see: http://curriculumredesign.org/our-work/four-dimensional-21st-century-education-learning-competencies-future-2030/

 

Note from FLI: Among our objectives is to inspire discussion and a sharing of ideas. As such, we post op-eds that we believe will help spur discussion within our community. Op-eds do not necessarily represent FLI’s opinions or views.

]]>
Existential Risk https://futureoflife.org/existential-risk/existential-risk/ Mon, 16 Nov 2015 07:25:59 +0000 https://futureoflife.org/uncategorized/existential-risk/ Click here to see this page in other languages:  Chinese   FrenchFrance_Flag German  Russian

An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.

Until relatively recently, most existential risks (and the less extreme version, known as global catastrophic risks)  were natural, such as the supervolcanoes and asteroid impacts that led to mass extinctions millions of years ago. The technological advances of the last century, while responsible for great progress and achievements, have also opened us up to new existential risks.

Nuclear war was the first man-made global catastrophic risk, as a global war could kill a large percentage of the human population. As more research into nuclear threats was conducted, scientists realized that the resulting nuclear winter could be even deadlier than the war itself, potentially killing most people on earth.

Biotechnology and genetics often inspire as much fear as excitement, as people worry about the possibly negative effects of cloning, gene splicing, gene drives, and a host of other genetics-related advancements. While biotechnology provides incredible opportunity to save and improve lives, it also increases existential risks associated with manufactured pandemics and loss of genetic diversity.

Artificial intelligence (AI) has long been associated with science fiction, but it’s a field that’s made significant strides in recent years. As with biotechnology, there is great opportunity to improve lives with AI, but if the technology is not developed safely, there is also the chance that someone could accidentally or intentionally unleash an AI system that ultimately causes the elimination of humanity.

Climate change is a growing concern that people and governments around the world are trying to address. As the global average temperature rises, droughts, floods, extreme storms, and more could become the norm. The resulting food, water and housing shortages could trigger economic instabilities and war. While climate change itself is unlikely to be an existential risk, the havoc it wreaks could increase the likelihood of nuclear war, pandemics or other catastrophes.

During the early years of trains, many worried that the human body couldn’t handle speeds greater than 30 miles per hour; people were hesitant to use the first phones for fear of electric shocks or that the devices were instruments of the devil himself; and there were equally dire predictions about planes, heart transplants and Y2K, just to name a few red herrings. While we hope that concerns about some of the technologies listed on this page will prove equally unwarranted, we can only ensure that to be the case with sufficient education, research and intervention. We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.

Selected Videos

Selected FLI Podcast

]]>