Research updates
- New papers: “Formalizing Convergent Instrumental Goals” and “Quantilizers: A Safer Alternative to Maximizers for Limited Optimization.” Both papers have been accepted to the AAAI-16 workshop on AI, Ethics and Society. (A brief sketch of the quantilizer idea follows this list.)
- New at AI Impacts: Recently at AI Impacts
- New at IAFF: A First Look at the Hard Problem of Corrigibility; Superrationality in Arbitrary Games; A Limit-Computable, Self-Reflective Distribution; Reflective Oracles and Superrationality: Prisoner’s Dilemma
- Scott Garrabrant joins MIRI’s full-time research team this month.
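For readers new to the second paper's topic: a q-quantilizer, rather than taking the single action that maximizes expected utility, samples an action from the top q fraction (ranked by utility) of a trusted base distribution over actions, trading some expected utility for staying close to a known-safe reference policy. The Python sketch below is our own minimal illustration of that idea, not code from the paper; the names quantilize, utility, and base_weight are invented for the example.

```python
import random

def quantilize(actions, utility, base_weight, q, rng=random):
    """Sample from the top q fraction of the base distribution, ranked by
    utility, with probabilities proportional to the base distribution."""
    # Rank candidate actions from highest to lowest utility.
    ranked = sorted(actions, key=utility, reverse=True)
    # Walk down the ranking, accumulating base-distribution mass until the
    # top q quantile is covered.
    total = sum(base_weight(a) for a in actions)
    top, mass = [], 0.0
    for a in ranked:
        top.append(a)
        mass += base_weight(a)
        if mass >= q * total:
            break
    # Sample within the retained set in proportion to base weight.
    weights = [base_weight(a) for a in top]
    return rng.choices(top, weights=weights, k=1)[0]

# Toy example: ten actions, uniform base distribution, utility = the action's
# value. A 0.5-quantilizer samples uniformly among the top five actions,
# whereas a maximizer would always pick 9.
print(quantilize(range(10), utility=lambda a: a, base_weight=lambda a: 1.0, q=0.5))
```

As q shrinks toward 0 this behaves like a maximizer; as q grows toward 1 it behaves like the base distribution, which is the safety dial the paper's title alludes to.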
General updates
- Our Winter Fundraiser is now live and includes details on where we’ve been directing our research efforts in 2015, as well as our plans for 2016. The fundraiser concludes on December 31.
- A 2014 collaboration between MIRI and the Oxford-based Future of Humanity Institute (FHI), “The Errors, Insights, and Lessons of Famous AI Predictions,” is being republished next week in the anthology Risks of Artificial Intelligence. The anthology will also include Daniel Dewey’s important strategic analysis “Long-Term Strategies for Ending Existential Risk from Fast Takeoff” and articles by MIRI Research Advisors Steve Omohundro and Roman Yampolskiy.
- We recently spent an enjoyable week in the UK comparing notes, sharing research, and trading ideas with FHI. During our visit, MIRI researcher Andrew Critch led a “Big-Picture Thinking” seminar on long-term AI safety (video).
News and links
- In collaboration with Oxford, UC Berkeley, and Imperial College London, Cambridge University is launching a new $15 million research center to study AI’s long-term impact: the Leverhulme Centre for the Future of Intelligence.
- The Strategic Artificial Intelligence Research Centre, a new joint initiative between FHI and the Cambridge Centre for the Study of Existential Risk, is accepting applications through January 6 for three research positions: research fellows in machine learning and the control problem, in policy work and emerging technology governance, and in general AI strategy. FHI is additionally seeking a research fellow to study AI risk and ethics. (Full announcement.)
- FHI founder Nick Bostrom makes Foreign Policy’s Top 100 Global Thinkers list.
- Bostrom (link), IJCAI President Francesca Rossi (link), and Vicarious co-founder Dileep George (link) weigh in on AI safety in a Washington Post series.
- Future of Life Institute co-founder Viktoriya Krakovna discusses risks from general AI without an intelligence explosion.