FLI Projects Archives - Future of Life Institute
https://futureoflife.org/category/fli-projects/
Preserving the long-term future of life.

Dr. Matthew Meselson Wins 2019 Future of Life Award
https://futureoflife.org/recent-news/dr-matthew-meselson-wins-2019-future-of-life-award/
Tue, 09 Apr 2019

On April 9th, Dr. Matthew Meselson received the $50,000 Future of Life Award at a ceremony at the University of Colorado Boulder’s Conference on World Affairs. Dr. Meselson was a driving force behind the 1972 Biological Weapons Convention, an international ban that has prevented one of the most inhumane forms of warfare known to humanity. April 9th marked the eve of the Convention’s 47th anniversary.

Meselson’s long career is studded with highlights: confirming Watson and Crick’s hypothesis of semi-conservative DNA replication, solving the Sverdlovsk anthrax mystery, ending the use of Agent Orange in Vietnam. But it is above all his work on biological weapons that makes him an international hero.

“Through his work in the US and internationally, Matt Meselson was one of the key forefathers of the 1972 Biological Weapons Convention,” said Daniel Feakes, Chief of the Biological Weapons Convention Implementation Support Unit. “The treaty bans biological weapons and today has 182 member states. He has continued to be a guardian of the BWC ever since. His seminal warning in 2000 about the potential for the hostile exploitation of biology foreshadowed many of the technological advances we are now witnessing in the life sciences and responses which have been adopted since.”

Meselson became interested in biological weapons during the 60s, while employed with the U.S. Arms Control and Disarmament Agency. It was on a tour of Fort Detrick, where the U.S. was then manufacturing anthrax, that he learned the motivation for developing biological weapons: they were cheaper than nuclear weapons. Meselson was struck, he says, by the illogic of this — it would be an obvious national security risk to decrease the production cost of WMDs.


Do you know someone deserving of the Future of Life Award? If so, please consider submitting their name to our Unsung Hero Search page. If we decide to give the award to your nominee, you will receive a $3,000 prize from FLI for your contribution.

The use of biological weapons was already prohibited by the 1925 Geneva Protocol, an international treaty that the U.S. had never ratified. So Meselson wrote a paper, “The United States and the Geneva Protocol,” outlining why it should do so. Meselson knew Henry Kissinger, who passed his paper along to President Nixon, and by the end of 1969 Nixon renounced biological weapons.

Next came the question of toxins — poisons derived from living organisms. Some of Nixon’s advisors believed that the U.S. should renounce the use of naturally derived toxins, but retain the right to use artificial versions of the same substances. It was another of Meselson’s papers, “What Policy for Toxins,” that led Nixon to reject this arbitrary distinction and to renounce the use of all toxin weapons.

On Meselson’s advice, Nixon had resubmitted the Geneva Protocol to the Senate for approval. But he also went beyond the terms of the Protocol — which bans only the use of biological weapons — to renounce offensive biological research itself. Stockpiles of offensive biological agents, like the anthrax Meselson had seen being produced at Fort Detrick, were destroyed.

Once the U.S. adopted this more stringent policy, Meselson turned his attention to the global stage. He and his peers wanted an international agreement stronger than the Geneva Protocol, one that would ban stockpiling and offensive research in addition to use and would provide for a verification system. From their efforts came the Biological Weapons Convention, which was signed in 1972 and is still in effect today.

“Thanks in significant part to Professor Matthew Meselson’s tireless work, the world came together and banned biological weapons, ensuring that the ever more powerful science of biology helps rather than harms humankind. For this, he deserves humanity’s profound gratitude,” said former UN Secretary-General Ban Ki-Moon.

Meselson has said that biological warfare “could erase the distinction between war and peace.” Other forms of war have a beginning and an end — it’s clear what is warfare and what is not. Biological warfare would be different: “You don’t know what’s happening, or you know it’s happening but it’s always happening.”

And the consequences of biological warfare can be greater even than mass destruction: attacks on DNA could fundamentally alter humankind. FLI honors Matthew Meselson for his efforts to protect not only human life but also the very definition of humanity.

Said Astronomer Royal Lord Martin Rees, “Matt Meselson is a great scientist — and one of very few who have been deeply committed to making the world safe from biological threats. This will become a challenge as important as the control of nuclear weapons — and much more challenging and intractable. His sustained and dedicated efforts fully deserve wider acclaim.”

“Today biotech is a force for good in the world, associated with saving rather than taking lives, because Matthew Meselson helped draw a clear red line between acceptable and unacceptable uses of biology,” added MIT Professor and FLI President Max Tegmark. “This is an inspiration for those who want to draw a similar red line between acceptable and unacceptable uses of artificial intelligence and ban lethal autonomous weapons.”

To learn more about Matthew Meselson, listen to FLI’s two-part podcast featuring him in conversation with Ariel Conn and Max Tegmark. In Part One, Meselson describes how he helped confirm Watson and Crick’s hypothesis of semi-conservative DNA replication and recounts the efforts he undertook to get biological weapons banned. Part Two focuses on three major incidents in the history of biological weapons and the role played by Meselson in resolving them.

Publications by Meselson include:

The Future of Life Award is a prize awarded by the Future of Life Institute for a heroic act that has greatly benefited humankind, done despite personal risk and without being rewarded at the time. This prize was established to help set the precedent that actions benefiting future generations will be rewarded by those generations. The inaugural Future of Life Award was given in 2017 to the family of Vasili Arkhipov, who single-handedly prevented a Soviet nuclear strike during the 1962 Cuban Missile Crisis, and the second Future of Life Award was given to the family of Stanislav Petrov, who averted a false-alarm nuclear war in 1983.

Support FLI This Giving Tuesday
https://futureoflife.org/fli-projects/giving-tuesday-2018/
Tue, 27 Nov 2018

We Need Your Help

We’ve accomplished a lot. FLI is a small organization that has only been around for a few years, but during that time, we’ve:

And that’s just what we’ve done so far. There’s so much more we’d like to do, but as a nonprofit, our work relies on your help. On Giving Tuesday this year, please consider a donation to FLI.

Get more for your money

Facebook and PayPal are joining forces on Giving Tuesday and they’re matching up to $7 million in donations. Please consider taking advantage of this opportunity and donate to FLI through the fundraiser set up here: https://www.facebook.com/donate/257686414919515/

Last year’s matching initiative maxed out in less than one and a half minutes, so to get the most for your money, we recommend getting your donations in as soon as the initiative begins at 8 AM Eastern on Tuesday morning.

Where would your money go?

  • More AI safety research,
  • More high-quality information and communication about AI safety and other existential threats,
  • More efforts to keep the future safe from lethal autonomous weapons,
  • More efforts to trim excess nuclear stockpiles & reduce nuclear war risk,
  • More efforts to guarantee a future we can all look forward to.

On Giving Tuesday, we encourage you to use the Facebook fundraiser to make the most of matching donations, but we welcome your help and donations any day of the year. The link below is the easiest to use once the Facebook fundraiser has passed.


Let’s create the best future possible!


$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa
https://futureoflife.org/recent-news/50000-award-to-stanislav-petrov-for-helping-avert-wwiii-but-us-denies-visa/
Wed, 26 Sep 2018

To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

Although the U.N. General Assembly, just blocks away, heard politicians highlight the nuclear threat from North Korea’s small nuclear arsenal, none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals that have nearly been unleashed by mistake dozens of times in the past in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn’t discuss their wartime actions out of fear that doing so might displease their government, and so Elena first heard about her father’s heroic actions only in 1998 – 15 years after the event occurred. Even then, she and her brother learned of what their father had done only because a German journalist reached out to the family for an article he was working on. It’s unclear whether Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.

Last year’s Nobel Peace Prize Laureate Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said: “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions get replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him”, said FLI co-founder Anthony Aguirre. Last year’s award was given to the family of Vasili Arkhipov, who single-handedly prevented a nuclear strike during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.

Stanislav Petrov around the time he helped avert WWIII

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust
https://futureoflife.org/ai/2-million-donated-to-keep-artificial-general-intelligence-beneficial-and-robust/
Wed, 25 Jul 2018

$2 million has been allocated to fund research that anticipates artificial general intelligence (AGI) and how it can be designed beneficially. The money was donated by Elon Musk to cover grants through the Future of Life Institute (FLI). Ten grants have been selected for funding.

Said FLI President Max Tegmark, “I’m optimistic that we can create an inspiring high-tech future with AI as long as we win the race between the growing power of AI and the wisdom with which we manage it. This research is to help develop that wisdom and increase the likelihood that AGI will be the best rather than the worst thing to happen to humanity.”

Today’s artificial intelligence (AI) is still quite narrow. That is, it can only accomplish narrow sets of tasks, such as playing chess or Go, driving a car, performing an Internet search, or translating languages. While the AI systems that master each of these tasks can perform them at superhuman levels, they can’t learn a new, unrelated skill set (e.g. an AI system that can search the Internet can’t learn to play Go with only its search algorithms).

These AI systems lack that “general” ability that humans have to make connections between disparate activities and experiences and to apply knowledge to a variety of fields. However, a significant number of AI researchers agree that AI could achieve a more “general” intelligence in the coming decades. No one knows how AI that’s as smart or smarter than humans might impact our lives, whether it will prove to be beneficial or harmful, how we can design it safely, or even how to prepare society for advanced AI. And many researchers worry that the transition could occur quickly.

Anthony Aguirre, co-founder of FLI and physics professor at UC Santa Cruz, explains, “The breakthroughs necessary to have machine intelligences as flexible and powerful as our own may take 50 years. But with the major intellectual and financial resources now being directed at the problem it may take much less. If or when there is a breakthrough, what will that look like? Can we prepare? Can we design safety features now, and incorporate them into AI development, to ensure that powerful AI will continue to benefit society? Things may move very quickly and we need research in place to make sure they go well.”

Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more. As the request for proposals stated, “The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.”

FLI hopes that this round of grants will help ensure that AI remains beneficial as it becomes increasingly intelligent. The full list of FLI recipients and project titles includes:

Primary Investigator | Project Title | Amount Recommended | Email
Allan Dafoe, Yale University | Governance of AI Programme | $276,000 | allan.dafoe@yale.edu
Stefano Ermon, Stanford University | Value Alignment and Multi-agent Inverse Reinforcement Learning | $100,000 | ermon@cs.stanford.edu
Owain Evans, Oxford University | Factored Cognition: Amplifying Human Cognition for Safely Scalable AGI | $225,000 | owain.evans@philosophy.ox.ac.uk
The Anh Han, Teesside University | Incentives for Safety Agreement Compliance in AI Race | $224,747 | t.han@tees.ac.uk
Jose Hernandez-Orallo, University of Cambridge | Paradigms of Artificial General Intelligence and Their Associated Risks | $220,000 | jorallo@dsic.upv.es
Marcus Hutter, Australian National University | The Control Problem for Universal AI: A Formal Investigation | $276,000 | marcus.hutter@anu.edu.au
James Miller, Smith College | Utility Functions: A Guide for Artificial General Intelligence Theorists | $78,289 | jdmiller@smith.edu
Dorsa Sadigh, Stanford University | Safe Learning and Verification of Human-AI Systems | $250,000 | dorsa@cs.stanford.edu
Peter Stone, University of Texas | Ad hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior | $200,000 | pstone@cs.utexas.edu
Josh Tenenbaum, MIT | Reverse Engineering Fair Cooperation | $150,000 | jbt@mit.edu


Some of the grant recipients offered statements about why they’re excited about their new projects:

“The team here at the Governance of AI Program are excited to pursue this research with the support of FLI. We’ve identified a set of questions that we think are among the most important to tackle for securing robust governance of advanced AI, and strongly believe that with focused research and collaboration with others in this space, we can make productive headway on them.” -Allan Dafoe

“We are excited about this project because it provides a first unique and original opportunity to explicitly study the dynamics of safety-compliant behaviours within the ongoing AI research and development race, and hence potentially leading to model-based advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants. It also provides an important opportunity to validate our prior results on the importance of commitments and other mechanisms of trust in inducing global pro-social behavior, thereby further promoting AI for the common good.” -The Anh Han

“We are excited about the potentials of this project. Our goal is to learn models of humans’ preferences, which can help us build algorithms for AGIs that can safely and reliably interact and collaborate with people.” -Dorsa Sadigh

This is FLI’s second grant round. The first launched in 2015, and a comprehensive list of papers, articles and information from that grant round can be found here. Both grant rounds are part of the original $10 million that Elon Musk pledged to AI safety research.

FLI co-founder Viktoriya Krakovna also added: “Our previous grant round promoted research on a diverse set of topics in AI safety and supported over 40 papers. The next grant round is more narrowly focused on research in AGI safety and strategy, and I am looking forward to great work in this area from our new grantees.”

Learn more about these projects here.

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons
https://futureoflife.org/fli-projects/ai-companies-researchers-engineers-scientists-entrepreneurs-and-others-sign-pledge-promising-not-to-develop-lethal-autonomous-weapons/
Wed, 18 Jul 2018

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) – After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions associated with LAWS, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain – which could become destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS at the UN. By the most recent meeting in April, twenty-six countries had announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”


As seen in the press

Stephen Hawking in Memoriam
https://futureoflife.org/fli-projects/stephen-hawking-memoriam/
Wed, 14 Mar 2018

As we mourn the loss of Stephen Hawking, we should remember that his legacy goes far beyond science. Yes, of course he was one of the greatest scientists of the past century, discovering that black holes evaporate and helping found the modern quest for quantum gravity. But he also had a remarkable legacy as a social activist, who looked far beyond the next election cycle and used his powerful voice to bring out the best in us all. As a founding member of FLI’s Scientific Advisory board, he tirelessly helped us highlight the importance of long-term thinking and ensuring that we use technology to help humanity flourish rather than flounder. I marveled at how he could sometimes answer my emails faster than my grad students. His activism revealed the same visionary fearlessness as his scientific and personal life: he saw further ahead than most of those around him and wasn’t afraid of controversially sounding the alarm about humanity’s sloppy handling of powerful technology, from nuclear weapons to AI.

On a personal note, I’m saddened to have lost not only a long-time collaborator but, above all, a great inspiration, always reminding me of how seemingly insurmountable challenges can be overcome with creativity, willpower and positive attitude. Thanks Stephen for inspiring us all!

2018 International AI Safety Grants Competition
https://futureoflife.org/ai/2018-international-ai-safety-grants-competition/
Wed, 20 Dec 2017

I. THE FUTURE OF AI: REAPING THE BENEFITS WHILE AVOIDING PITFALLS

For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success and great future promise. This recent success has raised an important question: how can we ensure that the growing power of AI is matched by the growing wisdom with which we manage it? In an open letter in 2015, a large international group of leading AI researchers from academia and industry argued that this success makes it important and timely to also research how to make AI systems robust and beneficial, and that this includes concrete research directions that can be pursued today. In early 2017, a broad coalition of AI leaders went further and signed the Asilomar AI Principles, which articulate beneficial AI requirements in greater detail.

The first Asilomar Principle is that “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence,” and the second states that “Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies…” The aim of this request for proposals is to support research that serves these and other goals indicated by the Principles.

The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.

II. EVALUATION CRITERIA & PROJECT ELIGIBILITY

This 2018 grants competition is the second round of the multi-million dollar grants program announced in January 2015, and will give grants totaling millions more to researchers in academic and other nonprofit institutions for projects up to three years in duration, beginning September 1, 2018. Results-in-progress from the first round are here. Following the launch of the first round, the field of AI safety has expanded considerably in terms of institutions, research groups, and potential funding sources entering the field.  Many of these, however, focus on immediate or relatively short-term issues relevant to extrapolations of present machine learning and AI systems as they are applied more widely.  There are still relatively few resources devoted to issues that will become crucial if/when AI research attains its original goal: building artificial general intelligence (AGI) that can (or can learn to) outperform humans on all cognitive tasks (see Asilomar Principles 19-23).

For maximal positive impact, this new grants competition thus focuses on Artificial General Intelligence, specifically research for safe and beneficial AGI. Successful grant proposals will either relate directly to AGI issues, or clearly explain how the proposed work is a necessary stepping stone toward safe and beneficial AGI.

As with the previous round, grant applications will be subject to a competitive process of confidential expert peer review similar to that employed by all major U.S. scientific funding agencies, with reviewers being recognized experts in the relevant fields.

Project Grants (approx. $50K-$400K per project) will each fund a small group of collaborators at one or more research institutions for a focused research project of up to three years duration. Proposals will be evaluated according to how topical and impactful they are:

TOPICAL: This RFP is limited to research that aims to help maximize the societal benefits of AGI, explicitly focusing not on the standard goal of making AI more capable, but on making it more robust and/or beneficial. In consultation with other organizations, FLI has identified a list of relatively specific problems and projects of particular interest to the AGI safety field. These will serve both as examples and as topics for special consideration.

In our RFP examples, we give a list of research topics and questions that are germane to this RFP. We also refer proposers to FLI’s landscape of AI safety research and its accompanying literature survey, as well as the 2015 research priorities and the associated survey.

The relative amount of funding for different areas is not predetermined, but will be optimized to reflect the number and quality of applications received. Very roughly, the expectation is ~70% computer science and closely related technical fields, ~30% economics, law, ethics, sociology, policy, education, and outreach.

IMPACTFUL: Proposals will be rated according to their expected positive impact per dollar, taking all relevant factors into account, such as:

  1. Intrinsic intellectual merit, scientific rigor and originality
  2. A high product of likelihood for success and importance if successful (i.e., high-risk research can be supported as long as the potential payoff is also very high.)
  3. The likelihood of the research opening fruitful new lines of scientific inquiry
  4. The feasibility of the research in the given time frame
  5. The qualifications of the Principal Investigator and team with respect to the proposed topic
  6. The part a grant may play in career development
  7. Cost effectiveness: Tight budgeting is encouraged in order to maximize the research impact of the project as a whole, with emphasis on scientific return per dollar rather than per proposal.
  8. Potential to impact the greater community as well as the general public via effective outreach and dissemination of the research results
  9. Engagement of appropriate communities (e.g. engaging research collaborators in AI safety outside of North America and Europe)

Strong proposals will make it easy for FLI to evaluate their impact by explicitly stating what they aim to produce (publications, algorithms, software, events, etc.) and when (after 1st, 2nd and 3rd year, say). Preference will be given to proposals whose deliverables are made freely available (open access publications, open source software, etc.) where appropriate.

To maximize its impact per dollar, this RFP is intended to complement, not supplement, conventional funding. We wish to enable research that, because of its long-term focus or its non-commercial, speculative, or non-mainstream nature, would otherwise go unperformed due to lack of available resources. Thus, although there will be inevitable overlaps, an otherwise scientifically rigorous proposal that is a good candidate for an FLI grant will generally not be a good candidate for funding by the NSF, DARPA, corporate R&D, etc. – and vice versa. To be eligible, research must focus on making AI more robust/beneficial as opposed to the standard goal of making AI more capable, and it must be AGI-relevant.

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15% (Please note that if this is an issue with your institution, or if your organization is not nonprofit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)

Subawards are discouraged but possible in special circumstances.

III. APPLICATION PROCESS

To save time for both you and the reviewers, applications will be accepted electronically through a standard form on our website (click here for the application) and evaluated in a two-part process, as follows:

INITIAL PROPOSAL — DUE FEBRUARY 25 2018, 11:59 PM Eastern Time — must include:

  • A 200-500 word summary of the project, explicitly addressing why it is topical and impactful.
  • A draft budget description not exceeding 200 words, including an approximate total cost over the life of the award and explanation of how funds would be spent.
  • A PDF Curriculum Vitae for the Principal Investigator, including
    • Education and employment history
    • Full publication list
    • Optional: if the PI has any previous publications relevant to the proposed research, they may list up to five of these as well, for a total of up to 10 representative and relevant publications. We do wish to encourage PIs to enter relevant research areas where they may not have had opportunities before, so prior relevant publications are not required.

A review panel assembled by FLI will screen each initial proposal according to the criteria in Section II. Based on their assessment, the principal investigator (PI) may be invited to submit a full proposal, on or about MARCH 23 2018, perhaps with feedback from reviewers for improving the proposal. Please keep in mind that however positive reviewers may be about a proposal at any stage, it may still be turned down for funding after full peer review.

FULL PROPOSAL — DUE MAY 20 2018 — Must Include:

  • Cover sheet
  • A 200-word project abstract, suitable for publication in an academic journal
  • A project summary not exceeding 200 words, explaining the work and its significance to laypeople
  • A detailed description of the proposed research, of between 5 and 15 single-spaced 11-point pages, including a short statement of how the application fits into the applicant’s present research program, and a description of how the results might be communicated to the wider scientific community and general public
  • A detailed budget over the life of the award, with justification and utilization distribution (preferably drafted by your institution’s grant officer or equivalent)
  • A list, for all project senior personnel, of all present and pending financial support, including project name, funding source, dates, amount, and status (current or pending)
  • Evidence of tax-exempt status of grantee institution, if other than a US university. For information on determining tax-exempt status of international organizations and institutes, please review the information here.
  • Optional: names of three recommended referees
  • Curricula Vitae for all project senior personnel, including:
    • Education and employment history
    • A list of references of up to five previous publications relevant to the proposed research, and up to five additional representative publications
    • Full publication list

Completed full proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in Section II. A review panel of scientists in the relevant fields will be convened to produce a final rank ordering of the proposals, which will determine the grant winners, and to make budgetary adjustments if necessary. Public award recommendations will be made on or about JULY 31, 2018.

FUNDING PROCESS

The peer review and administration of this grants program will be managed by the Future of Life Institute. FLI is an independent, philanthropically funded nonprofit organization whose mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

FLI will direct these grants through a Donor Advised Fund (DAF) at the Silicon Valley Community Foundation. FLI will solicit grant applications and have them peer reviewed, and on the basis of these reviews, FLI will advise the DAF on what grants to make. After grants have been made by the DAF, FLI will work with the DAF to monitor the grantee’s performance via grant reports. In this way, researchers will continue to interact with FLI, while the DAF interacts mostly with their institutes’ administrative or grants management offices.

RESEARCH TOPIC LIST

We have solicited and synthesized suggestions from a number of technical AI safety researchers to provide a list of project requests.  Proposals on the requested topics are all germane to the RFP, but the list is not meant to be either comprehensive or exclusive: proposals on other topics that similarly address long-term safety and benefits of AI are also welcomed. We also refer the reader to FLI’s AI safety landscape and its accompanying paper as a more general summary of relevant issues as well as definitions of many key terms.

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

IV. An International Request for Proposals – Timeline

December 20, 2017: RFP is released

February 25, 2018 (by 11:59 PM EST): Initial Proposals due

March 23, 2018: Full Proposals invited

May 20, 2018 (by 11:59 PM EST): Full Proposals (invite only) due

July 31, 2018: Grant Recommendations are publicly announced; FLI Fund conducts due diligence on grants

September 1, 2018: Grants disbursed; Earliest date for grants to start

August 31, 2021: Latest end date for multi-year Grants

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

An International Request for Proposals – Frequently Asked Questions

Does FLI have a particular agenda or position on AI and AI safety?

FLI’s position is well summarized by the open letter that FLI’s founders and many of its advisory board members have signed, and by the Asilomar Principles.

Who is eligible for grants?

Researchers and outreach specialists working in academic and other nonprofit institutions are eligible, as well as independent researchers. Grant awards are sent to the PI’s institution and the institution’s administration is responsible for disbursing the awards to the PI. When submitting your application, please make sure to list the appropriate grant administrator that we should contact at your institution.

If you are not affiliated with a research institution, there are many organizations that will help administer your grant. If you need suggestions, please contact FLI. Applicants are not required to be affiliated with an institution for the Initial Proposal, only for the Full Proposal.

Can researchers from outside the U.S. apply?

Yes, applications will be welcomed from any country. Please note that the US Government imposes restrictions on the types of organizations to which US nonprofits (such as FLI) can give grants. Given this, if you are awarded a grant, your institution must a) prove its equivalency to a nonprofit institution by providing the institution’s establishing law or charter, a list of key staff and board members, and a signed affidavit for public universities, and b) comply with the U.S. Patriot Act. Please note that this is included to provide information about the equivalency determination process that will take place if you are awarded a grant. If there are any issues with your granting institution proving its equivalency, FLI can help provide a list of organizations that can act as a go-between to administer the grant. More detail about international grant compliance is available on our website here. Please contact FLI if you have any questions about whether your institution is eligible, to get a list of organizations that can help administer your grant, or if you want to review the affidavit that public universities must fill out.

Can I submit an application in a language other than English?

All proposals must be in English. Since our grant program has an international focus, we will not penalize applications by people who do not speak English as their first language. We will encourage the review panel to be accommodating of language differences when reviewing applications. All applications must be coherent.

How and when do we apply?

Apply online here. Please submit an Initial Proposal by February 25, 2018. After screening, you may then be invited to submit a Full Proposal, due May 20, 2018. Please see Section IV for more information.

What kinds of programs and requests are eligible for funding?

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, cloud computing services, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15% (Please note if this is an issue with your institution, or if your organization is not nonprofit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)
  • Subawards are discouraged but possible in special circumstances.

What is your policy on overhead?

The highest allowed overhead rate is 15%. (As mentioned before, if this is an issue with your institution, you can contact FLI to learn about other organizations that can help administer FLI grants.)

How will proposals be judged?

After screening of the Initial Proposal, applicants may be asked to submit a Full Proposal. All Full Proposals will undergo a competitive process of external and confidential expert peer review. An expert panel will then evaluate and rank the proposals according to the criteria described in Section II of the RFP overview (see above).

Will FLI provide feedback on initial proposals?

FLI will generally not provide significant feedback on initial Project Proposals, but may in some cases. Please keep in mind that however positive FLI may be about a proposal at any stage, it may still be turned down for funding after peer review.

Can I submit multiple proposals?

We will consider multiple Initial Proposals from the same PI; however, we will invite at most one Full Proposal from each PI or closely associated group of applicants.

What if I am unable to submit my application electronically?

Only applications submitted through the form on our website are accepted. If you encounter problems, please contact FLI.

Is there a maximum amount of money for which we can apply?

No. You may apply for as much money as you think is necessary to achieve your goals. However, you should carefully justify your proposed expenditure. Keep in mind that projects will be assessed on potential impact per dollar requested; an inappropriately high budget may harm the proposal’s prospects, effectively pricing it out of the market. Referees are authorized to suggest budget adjustments. As mentioned in the RFP overview above, there may be an opportunity to apply for greater follow-up funding.

What will an average award be?

We expect that Project awards will typically be in the range of $50,000-$400,000 total over the life of the award (usually two to three years).

What are the reporting requirements?

Grantees will be asked to submit a progress report (if a multi-year Grantee) and/or annual report consisting of narrative and financial reports. Renewal of multi-year grants will be contingent on satisfactory demonstration in these reports that the supported research is progressing appropriately and continues to be consistent with the spirit of the original proposal. (See the question below regarding renewal.)

How are multi-year grants renewed?

This program has been formulated to maximize impact by re-allocating (and potentially adding) resources during each year of the grant program. Decisions regarding the renewal of multi-year grants will be made by a review committee on the basis of the annual progress report. This report is not pro-forma. The committee is likely to recommend that some grants not be renewed, some be renewed at a reduced level, some be renewed at the same level, and that some be offered the opportunity for increased funding in later years.

What are the qualifications for a Principal Investigator?

A Principal Investigator can be anyone – there are no qualification requirements (though qualifications will be taken into account during the review process). Lacking conventional academic credentials or publications does not disqualify a P.I. We encourage applications from industry and independent researchers. Please list any relevant experience or achievements in the attached resume/CV.

As noted above, Principal Investigators need not even be affiliated with a university or nonprofit. If a PI is affiliated with an academic institution, then their Principal Investigator status must be allowed by their institution. Should they be invited to submit a Full Proposal, they must obtain co-signatures on the proposal from the department head, as well as a department host with a post exceeding the duration of the grant.

My colleague(s) and I would like to apply as co-PIs. Can we do this?

Yes. For administrative purposes, however, please select a primary contact for the life of the award. The primary contact, who must be a Principal Investigator, will be the reference for your application(s) and all future correspondence, documents, etc.

Will the grants pay for laboratory or computational expenses?

Yes, however due to budgetary limitations FLI cannot fund capital-intensive equipment or computing facilities. Also, such expenses must be clearly required by the proposed research.

I have a proposal for my usual, relatively mainstream AI research program that I may be able to repackage as an appropriate proposal for this FLI program. Sound OK?

FLI is very sensitive to the problem of “fishing for money”—that is, the re-casting of an existing research program to make it appear to fit the overall thematic nature of this Request For Proposals. Such proposals will not be funded, nor renewed if erroneously funded initially.

Do proposals have to be as long as possible?

Please note that the 15-page limit is an upper limit, not a lower limit. You should simply write as much as you feel that you need in order to explain your proposal in sufficient detail for the review panel to understand it properly.

What are the “referees” in the instructions?

If there are specific reviewers who you feel are particularly qualified to evaluate your proposal, please feel free to list them (this is completely optional).

Who are FLI’s reviewers?

FLI follows the standard practice of protecting the identities of our external reviewers and selecting them based on expertise in the relevant research areas. For example, the external reviewers in the first round of this RFP were highly qualified experts in AI, law, and economics, mostly professors along with some industry experts.

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

If you have additional questions that were not answered above, please email us.

AI Researchers Create Video to Call for Autonomous Weapons Ban at UN
https://futureoflife.org/ai/ai-researchers-create-video-call-autonomous-weapons-ban-un/
Tue, 14 Nov 2017

In response to growing concerns about autonomous weapons, a coalition of AI researchers and advocacy organizations released a fictitious video on Monday that depicts a disturbing future in which lethal autonomous weapons have become cheap and ubiquitous.

The video was launched in Geneva, where AI researcher Stuart Russell presented it at an event at the United Nations Convention on Conventional Weapons hosted by the Campaign to Stop Killer Robots.

Russell, in an appearance at the end of the video, warns that the technology described in the film already exists and that the window to act is closing fast.

Support for a ban has been mounting. Just this past week, over 200 Canadian scientists and over 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull urging them to support the ban. Earlier this summer, over 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/Robotics researchers and others, including Elon Musk and Stephen Hawking.

These letters indicate both grave concern and a sense that the opportunity to curtail lethal autonomous weapons is running out.

Noel Sharkey of the International Committee for Robot Arms Control explains, “The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”

Drone technology today is very close to having fully autonomous capabilities. And many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability. The US and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.

A ban can exert great power on the trajectory of technological development without needing to stop every instance of misuse. Max Tegmark, MIT Professor and co-founder of the Future of Life Institute, points out, “People’s knee-jerk reaction that bans can’t help isn’t historically accurate: the bioweapon ban created such a powerful stigma that, despite treaty cheating, we have almost no bioterror attacks today and almost all biotech funding is civilian.”

As Toby Walsh, an AI professor at the University of New South Wales, argues: “The academic community has sent a clear and consistent message. Autonomous weapons will be weapons of terror, the perfect tool for those who have no qualms about the terrible uses to which they are put. We need to act now before this future arrives.”

More than 70 countries are participating in the November 13 – 17 meeting of the Group of Governmental Experts on lethal autonomous weapons, which was established by the 2016 Fifth Review Conference of the CCW at the UN. The meeting is chaired by Ambassador Amandeep Singh Gill of India, and the countries will continue negotiations of what could become a historic international treaty.

For more information about autonomous weapons, see the following resources:

55 Years After Preventing Nuclear Attack, Arkhipov Honored With Inaugural Future of Life Award
https://futureoflife.org/recent-news/55-years-preventing-nuclear-attack-arkhipov-honored-inaugural-future-life-award/
Fri, 27 Oct 2017

Celebrating the contributions of Vasili Arkhipov


London, UK – On October 27, 1962, a soft-spoken naval officer named Vasili Arkhipov single-handedly prevented nuclear war during the height of the Cuban Missile Crisis. Arkhipov’s submarine captain, thinking their sub was under attack by American forces, wanted to launch a nuclear weapon at the ships above. Arkhipov, with the power of veto, said no, thus averting nuclear war.

Now, 55 years after his courageous actions, the Future of Life Institute has presented the Arkhipov family with the inaugural Future of Life Award to honor humanity’s late hero.

Arkhipov’s surviving family members, represented by his daughter Elena and grandson Sergei, flew into London for the ceremony, which was held at the Institute of Engineering & Technology. After explaining Arkhipov’s heroics to the audience, Max Tegmark, president of FLI, presented the Arkhipov family with their award and $50,000. Elena and Sergei were both honored by the gesture and by the overall message of the award.

Elena explained that her father “always thought that he did what he had to do and never consider his actions as heroism. … Our family is grateful for the prize and considers it as a recognition of his work and heroism. He did his part for the future so that everyone can live on our planet.”

Elena and Sergei with the Future of Life Award

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. Arkhipov, whose courage and composure potentially saved billions of lives, was an obvious choice for the inaugural event.

“Vasili Arkhipov is arguably the most important person in modern history, thanks to whom October 27 2017 isn’t the 55th anniversary of World War III,” FLI president Max Tegmark explained. “We’re showing our gratitude in a way he’d have appreciated, by supporting his loved ones.”

The award also aims to foster a dialogue about the growing existential risks that humanity faces, and the people that work to mitigate them.

Jaan Tallinn, co-founder of FLI, said: “Given that this century will likely bring technologies that can be even more dangerous than nukes, we will badly need more people like Arkhipov — people who will represent humanity’s interests even in the heated moments of a crisis.”

FLI president Max Tegmark presenting the Future of Life Award to Arkhipov’s daughter, Elena, and grandson, Sergei.

Arkhipov’s Story

On October 27, 1962, during the Cuban Missile Crisis, eleven US Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the US "quarantine" area. Arkhipov was one of the officers on board. The crew had had no contact with Moscow for days and didn't know whether World War III had already begun. Then the Americans began dropping small depth charges which, unbeknownst to the crew, the US had informed Moscow were merely meant to force the sub to surface and leave.

“We thought – that’s it – the end”, crewmember V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

What the Americans didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. As the depth charges intensified and temperatures onboard climbed above 45ºC (113ºF), many crew members fainted from carbon dioxide poisoning, and in the midst of this panic, Captain Savitsky decided to launch their nuclear weapon.

“Maybe the war has already started up there,” he shouted. “We’re gonna blast them now! We will die, but we will sink them all – we will not disgrace our Navy!”

The combination of depth charges, extreme heat, stress, and isolation from the outside world almost lit the fuse of full-scale nuclear war. But it didn’t. The decision to launch a nuclear weapon had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no.

Amidst the panic, the 34-year-old Arkhipov remained calm and tried to talk Captain Savitsky down. He eventually convinced Savitsky that the depth charges were signals for the Soviet submarine to surface, and the sub surfaced safely and headed north, back to the Soviet Union.

It is sobering that very few have heard of Arkhipov, although his decision was perhaps the most valuable individual contribution to human survival in modern history. PBS made a documentary about his heroism, The Man Who Saved the World, and National Geographic profiled him as well in an article titled "You (and almost everyone you know) Owe Your Life to This Man."

The Cold War never became a hot war, in large part thanks to Arkhipov, but the threat of nuclear war remains high. Beatrice Fihn, Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN) and this year’s recipient of the Nobel Peace Prize, hopes that the Future of Life Award will help draw attention to the current threat of nuclear weapons and encourage more people to stand up to that threat. Fihn explains: “Arkhipov’s story shows how close to nuclear catastrophe we have been in the past. And as the risk of nuclear war is on the rise right now, all states must urgently join the Treaty on the Prohibition of Nuclear Weapons to prevent such catastrophe.”

Of her father’s role in preventing nuclear catastrophe, Elena explained: “We must strive so that the powerful people around the world learn from Vasili’s example. Everybody with power and influence should act within their competence for world peace.”

Read more about the Future of Life Award.

]]>
An Open Letter to the United Nations Convention on Certain Conventional Weapons https://futureoflife.org/open-letter/autonomous-weapons-open-letter-2017/ Sun, 20 Aug 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/autonomous-weapons-open-letter-2017/ As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Translations: Chinese, German, Japanese, Russian

]]>
United Nations Adopts Ban on Nuclear Weapons https://futureoflife.org/nuclear/united-nations-adopts-ban-nuclear-weapons/ Fri, 07 Jul 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/united-nations-adopts-ban-nuclear-weapons/ Today, 72 years after their invention, states at the United Nations formally adopted a treaty which categorically prohibits nuclear weapons.

With 122 votes in favor, one vote against, and one country abstaining, the “Treaty on the Prohibition of Nuclear Weapons” was adopted Friday morning and will open for signature by states at the United Nations in New York on September 20, 2017. Civil society organizations and more than 140 states have participated throughout negotiations.

On adoption of the treaty, ICAN Executive Director Beatrice Fihn said:

“We hope that today marks the beginning of the end of the nuclear age. It is beyond question that nuclear weapons violate the laws of war and pose a clear danger to global security. No one believes that indiscriminately killing millions of civilians is acceptable – no matter the circumstance – yet that is what nuclear weapons are designed to do.”

In a public statement, Former Secretary of Defense William Perry said:

“The new UN Treaty on the Prohibition of Nuclear Weapons is an important step towards delegitimizing nuclear war as an acceptable risk of modern civilization. Though the treaty will not have the power to eliminate existing nuclear weapons, it provides a vision of a safer world, one that will require great purpose, persistence, and patience to make a reality. Nuclear catastrophe is one of the greatest existential threats facing society today, and we must dream in equal measure in order to imagine a world without these terrible weapons.”

Until now, nuclear weapons were the only weapons of mass destruction without a prohibition treaty, despite the widespread and catastrophic humanitarian consequences of their intentional or accidental detonation. Biological weapons were banned in 1972 and chemical weapons in 1993.

This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate tools of war. The repeated objection and boycott of the negotiations by many nuclear-weapon states demonstrates that this treaty has the potential to significantly impact their behavior and stature. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty.

“This is a triumph for global democracy, where the pro-nuclear coalition of Putin, Trump and Kim Jong-Un were outvoted by the majority of Earth’s countries and citizens,” said MIT Professor and FLI President Max Tegmark.

“The strenuous and repeated objections of nuclear armed states is an admission that this treaty will have a real and lasting impact,” Fihn said.

The treaty also creates obligations to support the victims of nuclear weapons use (Hibakusha) and testing and to remediate the environmental damage caused by nuclear weapons.

From the beginning, the effort to ban nuclear weapons has benefited from the broad support of international humanitarian, environmental, nonproliferation, and disarmament organizations in more than 100 states. Significant political and grassroots organizing has taken place around the world, and many thousands have signed petitions, joined protests, contacted representatives, and pressured governments.

“The UN treaty places a strong moral imperative against possessing nuclear weapons and gives a voice to some 130 non-nuclear weapons states who are equally affected by the existential risk of nuclear weapons. … My hope is that this treaty will mark a sea change towards global support for the abolition of nuclear weapons. This global threat requires unified global action,” said Perry.

Fihn added, "Today the international community rejected nuclear weapons and made it clear they are unacceptable. It is time for leaders around the world to match their values and words with action by signing and ratifying this treaty as a first step towards eliminating nuclear weapons."

 

Images courtesy of ICAN.

 

WHAT THE TREATY DOES

Comprehensively bans nuclear weapons and related activity. It will be illegal for parties to undertake any activities related to nuclear weapons. The treaty bans the use, development, testing, production, manufacture, acquisition, possession, stockpiling, transfer, receipt, threat of use, stationing, installation, and deployment of nuclear weapons.

Bans any assistance with prohibited acts. The treaty bans assistance with prohibited acts, and should be interpreted as prohibiting states from engaging in military preparations and planning to use nuclear weapons, financing their development and manufacture, or permitting the transit of them through territorial waters or airspace.

Creates a path for nuclear states which join to eliminate weapons, stockpiles, and programs. It requires states with nuclear weapons that join the treaty to remove them from operational status and destroy them and their programs, all according to plans they would submit for approval. It also requires states that host another country's nuclear weapons on their territory to have them removed.

Verifies and safeguards that states meet their obligations. The treaty requires a verifiable, time-bound, transparent, and irreversible destruction of nuclear weapons and programs and requires the maintenance and/or implementation of international safeguards agreements. The treaty permits safeguards to become stronger over time and prohibits weakening of the safeguard regime.

Requires victim and international assistance and environmental remediation. The treaty requires states to assist victims of nuclear weapons use and testing, and requires environmental remediation of contaminated areas. The treaty also obliges states to provide international assistance to support its implementation. The text further requires states parties to encourage others to join and to meet regularly to review progress.

NEXT STEPS

Opening for signature. The treaty will be open for signature on 20 September at the United Nations in New York.

Entry into force. Fifty states are required to ratify the treaty for it to enter into force. At a national level, the process of ratification varies, but it usually requires parliamentary approval and the development of national legislation to implement the treaty's prohibitions. This process is also an opportunity to elaborate additional measures, such as prohibiting the financing of nuclear weapons.

First meeting of States Parties. The first Meeting of States Parties will take place within a year after the entry into force of the Convention.

SIGNIFICANCE AND IMPACT OF THE TREATY

Delegitimizes nuclear weapons. This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate weapons, creating the foundation of a new norm of international behaviour.

Changes party and non-party behaviour. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviours, even in states not party to the treaty. This is true for treaties ranging from those banning cluster munitions and land mines to the Convention on the Law of the Sea. The prohibition on assistance will play a significant role in changing behaviour, given the impact it may have on the financing of nuclear weapons and on military planning and preparation for their use.

Completes the prohibitions on weapons of mass destruction. The treaty completes work begun in the 1970s, when biological weapons were banned, and continued in the 1990s, when chemical weapons were banned.

Strengthens International Humanitarian Law ("Laws of War"). Nuclear weapons are intended to kill millions of civilians – non-combatants – a gross violation of International Humanitarian Law. Few would argue that the mass slaughter of civilians is acceptable, and there is no way to use a nuclear weapon in line with international law. The treaty strengthens this body of law and its norms.

Removes the prestige associated with proliferation. Countries often seek nuclear weapons for the prestige of being seen as part of an important club. By making nuclear weapons more clearly an object of scorn rather than of achievement, the treaty can deter their spread.

FLI sought to increase support for the negotiations from the scientific community this year. We organized an open letter signed by over 3700 scientists in 100 countries, including 30 Nobel Laureates. You can see the letter here and the video we presented recently at the UN here.

This post is a modified version of the press release provided by the International Campaign to Abolish Nuclear Weapons (ICAN).

]]>
Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations https://futureoflife.org/nuclear/3000-scientists-support-un-nuclear-ban-negotiations/ Mon, 27 Mar 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/3000-scientists-support-un-nuclear-ban-negotiations/

Delegates from most UN member states are gathering in New York to negotiate a nuclear weapons ban, where they will also receive a letter of support that has been signed by thousands of scientists from more than 80 countries – including 28 Nobel Laureates and a former US Secretary of Defense. "Scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought", the letter explains.

The letter was delivered at a ceremony at 1pm on Monday March 27 in the UN General Assembly Hall to Her Excellency Ms. Elayne Whyte Gómez from Costa Rica, who is presiding over the negotiations.

Despite all the attention to nuclear terrorism and nuclear rogue states, one of the greatest threats from nuclear weapons has always been mishaps and accidents among the established nuclear nations. With political tensions and instability increasing, this threat is growing to alarming levels: "The probability of a nuclear calamity is higher today, I believe, than it was during the cold war," according to former U.S. Secretary of Defense William J. Perry, who signed the letter.

“Nuclear weapons represent one of the biggest threats to our civilization. With the unpredictability of the current world situation, it is more important than ever to get negotiations about a ban on nuclear weapons on track, and to make these negotiations a truly global effort,” says neuroscience professor Edvard Moser from Norway, 2014 Nobel Laureate in Physiology/Medicine.

Professor Wolfgang Ketterle from MIT, 2001 Nobel Laureate in Physics, agrees: “I see nuclear weapons as a real threat to the human race and we need an international consensus to reduce this threat.”

Currently, the US and Russia have about 14,000 nuclear weapons combined, many on hair-trigger alert and ready to be launched on minutes' notice, even though a Pentagon report argued that a few hundred would suffice for rock-solid deterrence. Yet rather than trim their excess arsenals, the superpowers plan massive investments to replace their nuclear weapons with new, more destabilizing ones that are more lethal for a first-strike attack.

“Unlike many of the world’s leaders I care deeply about the future of my grandchildren. Even the remote possibility of a nuclear war presents an unconscionable threat to their welfare. We must find a way to eliminate nuclear weapons,” says Sir Richard J. Roberts, 1993 Nobel Laureate in Physiology or Medicine.

“Most governments are frustrated that a small group of countries with a small fraction of the world’s population insist on retaining the right to ruin life on Earth for everyone else with nuclear weapons, ignoring their disarmament promises in the non-proliferation treaty”, says physics professor Max Tegmark from MIT, who helped organize the letter. “In South Africa, the minority in control of the unethical Apartheid system didn’t give it up spontaneously on their own initiative, but because they were pressured into doing so by the majority. Similarly, the minority in control of unethical nuclear weapons won’t give them up spontaneously on their own initiative, but only if they’re pressured into doing so by the majority of the world’s nations and citizens.”

The idea behind the proposed ban is to provide such pressure by stigmatizing nuclear weapons.

Beatrice Fihn, who helped launch the ban movement as Executive Director of the International Campaign to Abolish Nuclear Weapons, explains that such stigmatization made the landmine and cluster munitions bans succeed and can succeed again: "The market for landmines is pretty much extinct—nobody wants to produce them anymore because countries have banned and stigmatized them. Just a few years ago, the United States—who never signed the landmines treaty—announced that it's basically complying with the treaty. If the world comes together in support of a nuclear ban, then nuclear weapons countries will likely follow suit, even if it doesn't happen right away."

Susi Snyder from the Dutch "Don't Bank on the Bomb" project explains:

"If you prohibit the production, possession, and use of these weapons and the assistance with doing those things, we're setting a stage to also prohibit the financing of the weapons. And that's one way that I believe the ban treaty is going to have a direct and concrete impact on the ongoing upgrades of existing nuclear arsenals, which are largely being carried out by private contractors."

"Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created", the letter states, motivating a ban.

“The horror that happened at Hiroshima and Nagasaki should never be repeated.  Nuclear weapons should be banned,” says Columbia University professor Martin Chalfie, 2008 Nobel Laureate in Chemistry.

Norwegian neuroscience professor May-Britt Moser, a 2014 Nobel Laureate in Physiology/Medicine, says, "In a world with increased aggression and decreasing diplomacy – the availability of nuclear weapons is more dangerous than ever. Politicians are urged to ban nuclear weapons. The world today and future generations depend on that decision."

The open letter: https://futureoflife.org/nuclear-open-letter/

]]>
Why You Should Care About Nukes https://futureoflife.org/nuclear/nuclear-weapons-the-uber-bad-to-the-really-really-bad/ Sun, 30 Oct 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/nuclear-weapons-the-uber-bad-to-the-really-really-bad/ Henry Reich with MinutePhysics and FLI’s Max Tegmark got together to produce an awesome and entertaining video about just how scary nuclear weapons are. They’re a lot scarier than most people realize — as you might have picked up on if you’ve flipped through our nuclear accidents and close calls timeline. Happily there’s easy action you can take to help make the world a safer place. How?

Check out the video above to learn more!

This video was also featured in:

]]>
Obama’s Nuclear Legacy https://futureoflife.org/nuclear/obamas-nuclear-legacy/ Thu, 06 Oct 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/obamas-nuclear-legacy/ The following article and infographic were originally posted on Futurism.

The most destructive device that humanity has ever created is the nuclear bomb. It's a technology capable of unparalleled devastation; it's a technology that the United Nations classifies as "the most dangerous weapon on Earth."

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that's not enough, it can throw the natural environment into chaos. We know this because we have used these weapons before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.

]]>
EA Global X Boston Conference https://futureoflife.org/fli-projects/ea-global-x-boston-conference/ Tue, 10 May 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/ea-global-x-boston-conference/

The first EA Global X conference, EAGxBoston, is being held at MIT on April 30th, 12:30-6:30pm. Boston EAs have created an incredible lineup bringing together a who’s who of researchers, EAs, EA orgs, and up-and-coming orgs including:
Dean Karlan (Yale, Innovations for Poverty Action)
Joshua Greene (Harvard, Moral Cognition Lab)
Rachel Glennerster (MIT, Poverty Action Lab)
Piali Mukhopadhyay (GiveDirectly)
Bruce Friedrich (The Good Food Institute)
Julia Wise (The Centre for Effective Altruism)
Ian Ross (Hampton Creek, Facebook)
Allison Smith (Animal Charity Evaluators)
Elizabeth Pearce (Boston University, Iodine Global Network)
Cher-Wen DeWitt (One Acre Fund)
Rhonda Zapatka (Trickle Up)
Elijah Goldberg (ImpactMatters)
Jason Ketola (MaxMind)
Lucia Sanchez (Innovations for Poverty Action)
Sharon Nunez Gough (Animal Equality)
Bruce Friedrich (The Good Food Institute, New Crop Capital)
Jon Camp (The Humane League)
Victoria Krakovna (Harvard, Future of Life Institute)
Eric Gastfriend (Harvard Business School EA, FLI, and formerly 80,000 Hours)
Dillon Bowen (Tufts EA, formerly 80,000 Hours and Giving What We Can)
Jason Trigg (earning-to-give at a startup and formerly as a hedge fund quant)
and more

The day will be filled with talks, panels, and networking opportunities. The program will address the major effective altruist cause areas of global health, poverty and development, animal agriculture, and global catastrophic risk, as well as movement concerns like conducting research, building community, and choosing a career direction. We will also be introducing some up-and-coming organizations.

FLI’s Victoria Krakovna, Richard Mallah, and Lucas Perry participated in a panel about Global Catastrophic Risks.

More information and registration can be found on the conference website:
http://eagxboston.com

All proceeds after our minimum costs will be donated to EA charities. If you need a tax-receipt, please contact Randy Carlton <>. Please note that the early bird special ends on April 19th.

We have a limited amount of space, so if you’d like to join, please register today and share this invitation with interested friends via our Facebook group:
https://www.facebook.com/EAGxBoston/

Let’s get together, and learn what we can do even better together!

EAGxBoston Team from MIT Sloan EA, MIT EA, Tufts EA, Harvard EA, HBS EA, Animal Charity Evaluators and The Commonwealth Market
http://eagxboston.com

]]>
Hawking Says 'Don't Bank on the Bomb' and Cambridge Votes to Divest $1 Billion From Nuclear Weapons https://futureoflife.org/nuclear/hawking-says-dont-bank-on-the-bomb-and-cambridge-votes-to-divest-1billion-from-nuclear-weapons/ Mon, 04 Apr 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/hawking-says-dont-bank-on-the-bomb-and-cambridge-votes-to-divest-1billion-from-nuclear-weapons/

1,000 nuclear weapons are plenty to deter any nation from nuking the US, but we're hoarding over 7,000, and a long string of near-misses has highlighted the continuing risk of an accidental nuclear war which could trigger a nuclear winter, potentially killing most people on Earth. Yet rather than trimming our excess nukes, we're planning to spend $4 million per hour for the next 30 years making them more lethal.
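A quick back-of-the-envelope check of that spending rate (my arithmetic, not an official estimate):

\[
\$4\ \text{million/hour} \times 24\ \text{hours/day} \times 365\ \text{days/year} \times 30\ \text{years} \approx \$1.05\ \text{trillion},
\]

which is consistent with the roughly one trillion dollar modernization figure cited in the policy order below.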

Although I’m used to politicians wasting my tax dollars, I was shocked to realize that I was voluntarily using my money for this nuclear boondoggle by investing in the very companies that are lobbying for and building new nukes: some of the money in my bank account gets loaned to them, and my S&P 500 mutual fund invests in them. “If you want to slow the nuclear arms race, then put your money where your mouth is and don’t bank on the bomb!”, my physics colleague Stephen Hawking told me. To make it easier for others to follow his sage advice, I made an app for that together with my friends at the Future of Life Institute, and launched this “Brief History of Nukes” that’s 3.14 minutes long in honor of Hawking’s fascination with pi.

Our campaign got off to an amazing start this weekend at an MIT conference where our Mayor Denise Simmons announced that the Cambridge City Council has unanimously decided to divest its billion-dollar city pension fund from nuclear weapons production. “Not in our name!”, she said, and drew a standing ovation. “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves”.

“In Europe, over 50 large institutions have already limited their nuclear weapon investments, but this is our first big success in America”, said Susi Snyder, who leads the global nuclear divestment campaign dontbankonthebomb.com. Boston College philosophy major Lucas Perry, who led the effort to persuade Cambridge to divest, hoped that this online analysis tool will create a domino effect: “I want to empower other students opposing the nuclear arms race to persuade their own towns and universities to follow suit.”

Many financial institutions now offer mutual funds that cater to the growing interest in socially responsible investing, including Ariel, Calvert, Domini, Neuberger, Parnassus, Pax World and TIAA-CREF. “We appreciate and share Cambridge’s desire to exclude nuclear weapons production from its pension fund. Pension funds are meant to serve the long-term needs of retirees, a service that nuclear weapons do not offer”, said Julie Fox Gorte, Senior Vice President for Sustainable Investing at Pax World.

“Divestment is a powerful way to stigmatize the nuclear arms race through grassroots campaigning, without having to wait for politicians who aren’t listening”, said conference co-organizer Cole Harrison, Executive Director of Massachusetts Peace Action, the nation’s largest grassroots peace organization. “If you’re against spending more money making us less safe, then make sure it’s not your money.”

You’ll find our divestment app here. If you’d like to persuade your own municipality to follow Cambridge’s lead, using their policy order as a model, here it is:

WHEREAS: Nations across the globe still maintain over 15,000 nuclear weapons, some of which are hundreds of times more powerful than those that obliterated Hiroshima and Nagasaki, and detonation of even a small fraction of these weapons could create a decade-long nuclear winter that could destroy most of the Earth’s population; and
WHEREAS: The United States has plans to invest roughly one trillion dollars over the coming decades to upgrade its nuclear arsenal, which many experts believe actually increases the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war; and
WHEREAS: In a period where federal funds are desperately needed in communities like Cambridge in order to build affordable housing, improve public transit, and develop sustainable energy sources, our tax dollars are being diverted to and wasted on nuclear weapons upgrades that would make us less safe; and
WHEREAS: Investing in companies producing nuclear weapons implicitly supports this misdirection of our tax dollars; and
WHEREAS: Socially responsible mutual funds and other investment vehicles are available that accurately match the current asset mix of the City of Cambridge Retirement Fund while excluding nuclear weapons producers; and
WHEREAS: The City of Cambridge is already on record in supporting the abolition of nuclear weapons, opposing the development of new nuclear weapons, and calling on President Obama to lead the nuclear disarmament effort; now therefore be it
ORDERED: That the City Council go on record opposing investing funds from the Cambridge Retirement System in any entities that are involved in or support the production or upgrading of nuclear weapons systems; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Cambridge Peace Commissioner and other appropriate City staff to organize an informational forum on possibilities for Cambridge individuals and institutions to divest their pension funds from investments in nuclear weapons contractors; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Board of the Cambridge Retirement System and other appropriate City staff to ensure divestment from all companies involved in production of nuclear weapons systems, and in entities investing in such companies, and the City Manager is requested to report back to the City Council about the implementation of said divestment in a timely manner.

]]>
AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research https://futureoflife.org/ai/aaai-workshop-highlights-debate-discussion-and-future-research/ Wed, 17 Feb 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/aaai-workshop-highlights-debate-discussion-and-future-research/ The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was that by Toby Walsh, titled, “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he said by way of example. If we can’t teach ourselves how to learn faster, he doesn’t see any reason to believe that machines will be any more successful at the task.

He also argued that even if we assume we can improve intelligence, there’s no reason to assume it will increase exponentially and lead to an intelligence explosion. He believes it is just as possible that each generation of machines will improve by only half as much as the generation before: intelligence would still increase, but the total gain would be bounded rather than explosive.
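One hedged way to make that bounded-improvement picture concrete (my gloss, not Walsh’s own formalism): if the first self-improvement adds a gain g and each subsequent generation adds half as much as the one before, the total gain is a convergent geometric series,

\[
g + \frac{g}{2} + \frac{g}{4} + \cdots = \sum_{k=0}^{\infty} g \left(\frac{1}{2}\right)^{k} = \frac{g}{1 - \tfrac{1}{2}} = 2g,
\]

so capability keeps rising with every generation yet never exceeds twice the first improvement: growth, but no explosion.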

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon portion of the talks was dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a human-imperceptible way so that the software misidentifies it. Current methods rely on machines accessing huge quantities of reference images to learn what any given image is. However, even the smallest perturbation of the input can lead to large errors. Li’s own research looks at different ways for machines to recognize an image, thus limiting such errors.
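To illustrate the kind of brittleness Li described, here is a minimal, self-contained sketch (an illustration of the general idea, not Li’s or Rubinstein’s actual method): a tiny linear classifier whose confidence shifts substantially after a per-pixel perturbation no larger than 0.05, taken along the sign of the loss gradient. The toy model, pixel values, and bound are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A pretend "image": 64 pixel intensities in [0, 1].
x = rng.uniform(0.0, 1.0, size=64)

# A pretend trained linear classifier (weights, bias) and the true label y = 1.
w = rng.normal(0.0, 1.0, size=64)
b = 0.0
y = 1.0

p_clean = sigmoid(w @ x + b)  # model's confidence that x belongs to class 1

# For the logistic loss L = -log(p), the gradient with respect to the input is (p - y) * w.
grad_x = (p_clean - y) * w

# Fast-gradient-sign-style perturbation: each pixel changes by at most eps.
eps = 0.05
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

p_adv = sigmoid(w @ x_adv + b)
print(f"confidence on clean input:     {p_clean:.3f}")
print(f"confidence on perturbed input: {p_adv:.3f}")
# A change of at most 0.05 per "pixel" can move the confidence substantially,
# which is the brittleness the talk described.
```

Real image classifiers are far larger, but the same gradient-based reasoning underlies many published adversarial-example attacks.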

Rubinstein’s focus is geared more toward security. The research he presented at the workshop also involves image recognition, but goes a step further, examining how small changes to an image of one face can lead systems to confuse it with the face of someone else.

Fuxin Li

Ben Rubinstein

Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.

Francesca Rossi and Nate Soares

Tom Dietterich and Roman Yampolskiy

After an hour of discussion many suggestions for new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”


Congratulations to Peter Norvig and Stuart Russell!

]]>
2015: An Amazing Year in Review https://futureoflife.org/newsletter/2015-a-year-in-review/ Thu, 31 Dec 2015 00:00:00 +0000 https://futureoflife.org/uncategorized/2015-a-year-in-review/ Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning


Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.


All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Viktoriya Krakovna, Meia Chita-Tegmark and Katja Grace

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, MIT Effective Altruism, and the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).


Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay Area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety to the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.

 


Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

Max, along with our Science Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post, including a Halloween story about nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two articles about artificial intelligence.

In addition we had a few extra special articles on our new website:

Nobel Prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!


]]>
What’s so exciting about AI? Conversations at the Nobel Week Dialogue https://futureoflife.org/fli-projects/whats-so-exciting-about-ai-conversations-at-the-nobel-week-dialogue/ Tue, 22 Dec 2015 00:00:00 +0000 https://futureoflife.org/uncategorized/whats-so-exciting-about-ai-conversations-at-the-nobel-week-dialogue/ Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, the executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a schizophrenic healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”

]]>
NIPS Symposium 2015 https://futureoflife.org/event/nips-symposium-2015/ Thu, 10 Dec 2015 00:00:00 +0000 https://futureoflife.org/uncategorized/nips-symposium-2015/ FLI is excited to be a co-sponsor for the upcoming NIPS Symposium on the Societal Impacts of Machine Learning:

Algorithms Among Us
The Societal Impacts of Machine Learning

Public interest in Machine Learning is mounting as the societal impacts of technologies derived from our community become evident. This symposium aims to turn the attention of Machine Learning researchers to the present and future consequences of our work, particularly in the areas of privacy, military robotics, employment, and liability. These topics now deserve concerted attention to ensure the best interests of those both within and without Machine Learning: the community must engage with public discourse so as not to become the victim of it (as other fields have, e.g. genetic engineering). The symposium will bring leaders within academic and industrial Machine Learning together with experts outside the field in order to debate the impacts of our algorithms and the possible responses we might adopt. Particular attention will be paid to technical areas of Machine Learning research that might serve to tackle some of the highlighted issues. A call for contributed content will be circulated (including to the FLI grant awardees) to furnish a poster session.

Participants

Nick Bostrom, Professor, Faculty of Philosophy & Oxford Martin School, Director, Future of Humanity Institute. Bostrom is renowned for his work on existential risk, the anthropic principle, human enhancement ethics, the reversal test, superintelligence risks, and consequentialism. He is the author of over 200 publications and recently published the New York Times bestseller “Superintelligence: Paths, Dangers, Strategies.”

Erik Brynjolfsson is a Professor at the MIT Sloan School of Management, a Research Associate at NBER, and a Director of the MIT Initiative on the Digital Economy. His current research examines the effects of information technologies on business strategy, internet commerce, productivity and performance, pricing models, and intangible assets. Brynjolfsson has authored books and dozens of papers, and was among the first researchers to measure and quantify the productivity contributions of IT and the value of online product variety.

Tom Dietterich, Distinguished Professor and Director of Intelligent Systems, School of Electrical Engineering and Computer Science, Oregon State University. Dietterich is currently engaged in a wide range of research projects, including Ecosystem Informatics and Computational Sustainability, Approximate Optimization for Bio-Economic Models, and Machine Learning for Species Distribution, to name a few.

Ian Kerr is a recognized international expert in emerging technology and law issues. He holds a Canada Research Chair in Ethics, Law, and Technology at the University of Ottawa, where he also teaches a course on the ethical and legal implications of robots in society.

Neil Lawrence is a Professor of Machine Learning at the University of Sheffield, where he is working to develop the Open Data Science Initiative. His other research interests include probabilistic models with applications in computational biology and personalized health.

Yann LeCun, Director of AI Research at Facebook & Silver Professor at the Courant Institute, New York University. LeCun is also affiliated with the NYU Center for Data Science and the Center for Neural Science. His current work involves AI, machine learning, computer perception, mobile robotics, and computational neuroscience. Among many other things, he has published over 180 technical papers and book chapters on topics such as handwriting recognition, image processing and compression, and dedicated circuits and architectures in computer perception.

Shane Legg, co-founder, Google DeepMind. Legg was awarded the Canadian Singularity Institute for Artificial Intelligence Prize, and he has spent time at the Swiss Finance Institute working on models of cognitive bias in investor behavior. Legg now constructs powerful learning algorithms at Google DeepMind.

Percy Liang, Assistant Professor of Computer Science and Statistics at Stanford University. The primary focus of Liang’s research is to create software that allows humans to communicate with computers and to develop algorithms that can infer latent structures from raw data. He is a strong proponent of efficient and reproducible research and is currently working to develop CodaLab, a new platform that will allow researchers to maintain full provenance of an experiment.

Andrew Ng, Associate Professor at Stanford; Chief Scientist of Baidu; and Chairman and Co-Founder of Coursera. He led the development of Stanford’s main MOOC (Massive Open Online Courses) platform and also taught an online Machine Learning class that was offered to over 100,000 students. He founded and led the Google Brain project and is focused on deep machine learning.

Organizers

Michael Osborne (DPhil Oxon) is an Associate Professor in Machine Learning and co-director of the Oxford Martin programme on Technology and Employment at the University of Oxford. Dr Osborne has organised three previous NIPS workshops: ‘Bayesian Optimization in Theory and Practice’ (2013), ‘Probabilistic Numerics’ (2012) and ‘Bayesian Optimization, Experimental Design and Bandits: Theory and Applications’ (2011), and was also an Area Chair for NIPS 2014. Coupled to Dr Osborne’s work on foundational Machine Learning topics is an interest in inter-disciplinary collaboration to study the impact of computerization upon labour markets. This latter work has enjoyed broad and sustained media coverage (featured in BBC Newsnight, CNN, The Economist, Financial Times, Wall Street Journal, New York Times, Washington Post, Der Spiegel, Scientific American, and TIME Magazine).

Murray Shanahan is Professor of Cognitive Robotics in the Dept. of Computing at Imperial College London, where he heads the Neurodynamics Group. He gained his PhD in computer science from Cambridge University (King’s College) in 1988. He took up a lectureship at Imperial College in 1998, where he became full professor in 2006. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. His work up to 2000 was in the tradition of classical, symbolic AI, but since then has concerned brain-inspired cognitive architectures, neurodynamics, and consciousness. He was scientific adviser to the film Ex Machina, which was partly inspired by his book “Embodiment and the Inner Life” (OUP, 2010). His new book “The Technological Singularity” will be published by MIT Press in August 2015.

Adrian Weller is a Senior Research Associate in Machine Learning at the University of Cambridge. He is very interested in all issues related to intelligence (natural and artificial) and their applications, with a primary research focus on inference in probabilistic graphical models. Dr Weller is an active angel investor and adviser, and previously held senior roles in investing and trading at Goldman Sachs, Salomon Brothers, and Citadel. He received a PhD in computer science from Columbia and a first-class degree in mathematics from Trinity College, Cambridge.

FLI would also like to extend a special thank you to Richard Mallah for his efforts to help the official NIPS organizers get this symposium set up.

Visit the official NIPS Symposium page for more details about the event.
