Our 2017 fundraiser was a huge success, with 341 donors contributing a total of $2.5 million!
Some of the largest donations came from Ethereum inventor Vitalik Buterin, bitcoin investors Christian Calderon and Marius van Voorden, poker players Dan Smith and Tom and Martin Crowley (as part of a matching challenge), and the Berkeley Existential Risk Initiative. Thank you to everyone who contributed!
Research updates
- The winners of the first AI Alignment Prize include Scott Garrabrant’s Goodhart Taxonomy and recent IAFF posts: Vadim Kosoy’s Why Delegative RL Doesn’t Work for Arbitrary Environments and More Precise Regret Bound for DRL, and Alex Mennen’s Being Legible to Other Agents by Committing to Using Weaker Reasoning Systems and Learning Goals of Simple Agents.
- New at AI Impacts: Human-Level Hardware Timeline; Effect of Marginal Hardware on Artificial General Intelligence
- We’re hiring for a new position at MIRI: ML Living Library, a specialist on the newest developments in machine learning.
General updates
- From Eliezer Yudkowsky: A Reply to François Chollet on Intelligence Explosion.
- Counterterrorism experts Richard Clarke and R. P. Eddy profile Yudkowsky in their new book Warnings: Finding Cassandras to Stop Catastrophes.
- There have been several recent blog posts recommending MIRI as a donation target: from Ben Hoskin, Zvi Mowshowitz, Putanumonit, and the Open Philanthropy Project’s Daniel Dewey and Nick Beckstead.
News and links
- A generalization of the AlphaGo algorithm, AlphaZero, achieves rapid superhuman performance at chess and shogi.
- Also from Google DeepMind: “Specifying AI Safety Problems in Simple Environments.”
- Viktoriya Krakovna reports on NIPS 2017: “This year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. […] There was a lot of great content on the long-term side, including several oral / spotlight presentations and the Aligned AI workshop.”
- 80,000 Hours interviews Phil Tetlock and investigates the most important talent gaps in the EA community.
- From Seth Baum: “A Survey of AGI Projects for Ethics, Risk, and Policy.” And from the Foresight Institute: “AGI: Timeframes & Policy.”
- The Future of Life Institute is collecting proposals for a second round of AI safety grants, due February 18.