Research updates
- New paper: “Safely Interruptible Agents.” The paper will be presented at UAI-16, and is a collaboration between Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute (FHI) and MIRI; see FHI’s press release. The paper has received (often hyperbolic) coverage from a number of press outlets, including Business Insider, Motherboard, Newsweek, Gizmodo, BBC News, eWeek, and Computerworld.
- New at IAFF: All Mathematicians are Trollable: Divergence of Naturalistic Logical Updates; Two Problems with Causal-Counterfactual Utility Indifference
- New at AI Impacts: Metasurvey: Predict the Predictors; Error in Armstrong and Sotala 2012
- Marcus Hutter’s research group has released a new paper based on results from a MIRIx workshop: “Self-Modification of Policy and Utility Function in Rational Agents.” Hutter’s team is presenting several other AI alignment papers at AGI-16 next month: “Death and Suicide in Universal Artificial Intelligence” and “Avoiding Wireheading with Value Reinforcement Learning.”
- “Asymptotic Logical Uncertainty and The Benford Test” has been accepted to AGI-16.
General updates
- MIRI and FHI’s Colloquium Series on Robust and Beneficial AI (talk abstracts and slides now up) has kicked off with opening talks by Stuart Russell, Francesca Rossi, Tom Dietterich, and Alan Fern.
- We visited FHI to discuss new results in logical uncertainty, our new machine-learning-oriented research program, and a range of other topics.
News and links
- Following an increase in US spending on autonomous weapons, The New York Times reports that the Pentagon is turning to Silicon Valley for an edge.
- IARPA director Jason Matheny, a former researcher at FHI, discusses forecasting and risk from emerging technologies (video).
- FHI Research Fellow Owen Cotton-Barratt gives oral evidence to the UK Parliament on the need for robust and transparent AI systems.
- Google reveals a hidden reason for AlphaGo’s exceptional performance against Lee Se-dol: a new integrated circuit design that can speed up machine learning applications by an order of magnitude.
- Elon Musk answers questions about SpaceX, Tesla, OpenAI, and more (video).
- Why worry about advanced AI? Stuart Russell (in Scientific American), George Dvorsky (in Gizmodo), and SETI director Seth Shostak (in Tech Times) explain.
- Olle Häggström’s new book, Here Be Dragons, serves as an unusually thoughtful and thorough introduction to existential risk and future technological development, including a lucid discussion of artificial superintelligence.
- Robin Hanson examines the implications of widespread whole-brain emulation in his new book, The Age of Em: Work, Love, and Life when Robots Rule the Earth.
- Bill Gates highly recommends Nick Bostrom’s Superintelligence. The paperback edition is now out, with a newly added afterword.
- FHI Research Associate Paul Christiano has joined OpenAI as an intern. Christiano has also written new posts on AI alignment: Efficient and Safely Scalable, Learning with Catastrophes, Red Teams, and The Reward Engineering Problem.