Contents
Research updates
- Two new papers split logical uncertainty into two distinct subproblems: “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays.”
- New at IAFF: An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time Hierarchy Theorems for Distributional Estimation Problems
- We will be presenting “The Value Learning Problem” at the IJCAI-16 Ethics for Artificial Intelligence workshop instead of the AAAI Spring Symposium where it was previously accepted.
General updates
- We’re launching a new research program with a machine learning focus. Half of MIRI’s team will be investigating potential ways to specify goals and guard against errors in advanced neural-network-inspired systems.
- We ran a type theory and formal verification workshop this past month.
News and links
- The Open Philanthropy Project explains its strategy of high-risk, high-reward hits-based giving and its decision to make AI risk its top focus area this year.
- Also from OpenPhil: Is it true that past researchers over-hyped AI? Is there a realistic chance of AI fundamentally changing civilization in the next 20 years?
- From Wired: “Inside OpenAI” and “Facebook Is Building AI That Builds AI.”
- The White House announces a public workshop series on the future of AI.
- The Wilberforce Society suggests policies for narrow and general AI development.
- Two new AI safety papers: “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis” and “The AGI Containment Problem.”
- Peter Singer weighs in on catastrophic AI risk.
- Digital Genies: Stuart Russell discusses the problems of value learning and corrigibility in AI.
- Nick Bostrom is interviewed at CeBIT (video) and also gives a presentation on intelligence amplification and the status quo bias (video).
- Jeff McMahan critiques philosophical critiques of effective altruism.
- Yale political scientist Allan Dafoe is seeking research assistants for a project on political and strategic concerns related to existential AI risk.
- The Center for Applied Rationality is accepting applications for a free workshop for machine learning researchers and students.
This newsletter was originally posted here.