Contents
Research updates
- New at IAFF: Modeling the Capabilities of Advanced AI Systems as Episodic Reinforcement Learning; Simplified Explanation of Stratification
- New at AI Impacts: Friendly AI as a Global Public Good
- We ran two research workshops this month: a veterans’ workshop on decision theory for long-time collaborators and staff, and a machine learning workshop focusing on generalizable environmental goals, impact measures, and mild optimization.
- AI researcher Abram Demski has accepted a research fellowship at MIRI, pending the completion of his PhD. He’ll be starting here in late 2016 / early 2017.
- Data scientist Ryan Carey is joining MIRI’s ML-oriented team this month as an assistant research fellow.
General updates
- MIRI’s 2016 strategy update outlines how our research plans have changed in light of recent developments. We also announce a generous $300,000 gift — our second-largest single donation to date.
- We’ve uploaded nine talks from CSRBAI’s robustness and preference specification weeks, including Jessica Taylor on “Alignment for Advanced Machine Learning Systems” (video), Jan Leike on “General Reinforcement Learning” (video), Paul Christiano on “Training an Aligned RL Agent” (video), and Dylan Hadfield-Menell on “The Off-Switch” (video).
- MIRI COO Malo Bourgon has been co-chairing a committee of IEEE’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems. He recently moderated a workshop on general AI and superintelligence at the initiative’s first meeting.
- We had a great time at Effective Altruism Global and taught at SPARC.
- We hired two new admins: Office Manager Aaron Silverbook, and Communications and Development Strategist Colm Ó Riain.
News and links
- The Open Philanthropy Project awards $5.6 million to Stuart Russell to launch an academic AI safety research center: the Center for Human-Compatible AI.
- “Who Should Control Our Thinking Machines?”: Jack Clark interviews DeepMind’s Demis Hassabis.
- Elon Musk explains: “I think the biggest risk is not that the AI will develop a will of its own, but rather that it will follow the will of people that establish its utility function, or its optimization function. And that optimization function, if it is not well-thought-out — even if its intent is benign, it could have quite a bad outcome.”
- Modeling Intelligence as a Project-Specific Factor of Production: Ben Hoffman compares different AI takeoff scenarios.
- Clopen AI: Viktoriya Krakovna weighs the advantages of closed vs. open AI.
- Google X director Astro Teller expresses optimism about the future of AI in a Medium post announcing the first report of the Stanford AI100 study.
- BuzzFeed reports on efforts to prevent the development of lethal autonomous weapons systems.
- In controlled settings, researchers find ways to detect keystrokes via distortions in WiFi signals and to jump air gaps using hard drive actuator noise.
- Solid discussions on the EA Forum: “Should Donors Make Commitments About Future Donations?” and “Should You Switch Away From Earning to Give?”
See the original newsletter on MIRI’s website.