Realising Aspirational Futures – New FLI Grants Opportunities
https://futureoflife.org/grantmaking/realising-aspirational-futures-new-fli-grants-opportunities/
14 February 2024

Our Futures Program, launched in 2023, aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. This year, as part of that program, we are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.

The first request for proposals (RFP) calls for papers evaluating and predicting the impact of AI on the achievement of the UN Sustainable Development Goals (SDGs) relating to poverty, healthcare, energy and climate change. The second RFP calls for designs of trustworthy global mechanisms or institutions to govern advanced AI in the near future.

Selected proposals in either category will receive a one-time grant of $15,000, to be used at the researcher’s discretion. We intend to make several grants in each track.

Applications for both tracks are now open and will remain so until April 1st, 2024.

Request 1: The Impact of AI on Achieving SDGs in Poverty, Health, Energy and Climate

There has been extensive academic research and, more recently, public discourse on the current harms and emerging risks of AI. In contrast, discussion of AI's benefits has remained vague.

The prospect of enormous benefits down the road from AI – that it will “eliminate poverty,” “cure diseases” or “solve climate change” – helps to drive a corporate race to build ever more powerful systems. But it remains unclear what AI capabilities would be needed to realize those benefits. As that race brings increasing levels of risk, we need a concrete, evidence-based understanding of the benefits in order to develop, deploy and regulate this technology in a way that genuinely improves everyone’s lives.

One way of doing that is to examine how AI is affecting the achievement of a broadly supported list of global priorities. To that end, we are looking for researchers to select a target from one of the four UN Sustainable Development Goals (SDGs) we have chosen to focus on – namely goals 1 (Poverty), 3 (Health), 7 (Energy), and 13 (Climate) – analyse the direct or indirect impact of AI on the realisation of that target up to the present, and then project how AI could accelerate, inhibit, or prove irrelevant to the achievement of that goal by 2030.

We hope that the resulting papers will enrich the vital discussion of whether AI can in fact solve these crucial challenges, and, if so, how it can be made or directed to do so.

Read more and apply

Request 2: Designs for global institutions governing advanced AI

Reaching a stable future world may require restricting AI development such that the world has (a) no AGI projects, (b) a single, global AGI project, or (c) multiple monitored AGI projects.

Here we define AGI as a system which outperforms human experts in non-physical tasks across a wide range of domains, including metacognitive abilities like learning new skills. A stable state would be one in which AI development proceeds at the cautious pace set by thorough risk assessments rather than by corporate competition.

The success of any of these stable futures depends upon new mechanisms and institutions that can account for the risks and benefits newly introduced by AI capability development. It is not yet clear what such organizations would look like, how they would command trust or evade capture, and so on.

Researchers must design trustworthy global governance mechanisms or institutions that can help stabilise a future with zero, one, or multiple AGI projects – or a mechanism that aids more than one of these scenarios. Proposals should outline the specifications of their mechanism and explain how it will minimise the risks of advanced AI and maximise the distribution of its benefits.

Without a clear articulation of how trustworthy global AGI governance could work, the default narrative is that it is impossible. This track is thus born of a sincere hope that the default narrative is wrong: a hope that, if we keep AI under control and use it well, it will empower – rather than disempower – humans the world over.

Read more and apply
The Future of Life Institute announces $25M grants program for existential risk reduction
https://futureoflife.org/grantmaking/fli-announces-grants-program-for-existential-risk-reduction/
3 June 2021

Emerging technologies have the potential to help life flourish like never before – or self-destruct. The Future of Life Institute is delighted to announce a $25M multi-year grant program aimed at tipping the balance toward flourishing, away from extinction. This is made possible by the generosity of cryptocurrency pioneer Vitalik Buterin and the Shiba Inu community.

COVID-19 showed that our civilization is fragile and that it handles risk better when it plans ahead. Our grants are for those who have taken these lessons to heart, who wish to study the risks from ever more powerful technologies and to develop strategies for reducing them. The goal is to help humanity win the wisdom race: the race between the growing power of our technology and the wisdom with which we manage it.

Program areas

Our grants program is focused on reducing the very greatest risks, which receive remarkably little funding and attention relative to their importance. Specifically, it is focused on x-risk (existential risk, i.e., events that could cause human extinction or permanently and drastically curtail humanity’s potential) and on ways of reducing it directly or indirectly:

  1. Directly reduce x-risk
    Example: Ensure that increasingly powerful artificial intelligence is aligned with humanity’s interests.
  2. Don’t destroy collaboration
    Avoid things that significantly increase x-risk by destabilizing the world and reducing geopolitical cooperation. Examples: nuclear war, bioengineered pandemics, a lethal autonomous weapons arms race, media-bias-fueled hyper-nationalism and jingoism.
  3. Support collaboration
    Support things that significantly decrease x-risk by improving geopolitical cooperation. Examples: institutions, processes and activities that improve global communication and cooperation toward shared goals.
  4. Create incentives & goals for collaboration
    Develop shared positive visions for the long-term future that incentivize global cooperation and the development of beneficial technologies. Examples: nurture existential hope, study how people can be helped and incentivized to set and pursue positive long-term goals.

The emphasis on collaboration stems from FLI’s conviction that technology is not a zero-sum game, and that the most likely outcomes are that all of humanity will ultimately flourish or flounder together.

Types of grants

We will be running a series of grants competitions of two types: Shiba Inu Grants and Vitalik Buterin Fellowships. Shiba Inu Grants support projects, specifically research, education or other beneficial activities in the program areas. Buterin Fellowships bolster the pipeline through which much-needed talent flows into our program areas, tentatively including funding for high school summer programs, college summer internships, graduate fellowships and postdoctoral fellowships. For example, the Vitalik Buterin Postdoctoral Fellowship for AI Safety will tentatively open for applications in September, and will fund computer science postdocs for three years at institutions of their choice. Academic research grants and fellowships are focused in three areas: computer science, behavioral science, and policy/governance.

To conclude, we wish to once again express our profound gratitude to all who’ve made this possible, including Vitalik Buterin and the Shiba Inu Community.

Media inquiries: Max Tegmark, max@futureoflife.org

FREQUENTLY ASKED QUESTIONS:

Q: What’s the Future of Life Institute?

A: A 501(c)(3) non-profit that wants the long-term future of life to exist and be as positive as possible. We focus particularly on the benefits and risks of transformative technology.

Q: Who’s Vitalik Buterin?

A: A cryptocurrency pioneer and philanthropic supporter of effective altruism.

Q: What’s the Shiba Inu Community?

A: An experiment in decentralized, spontaneous community building with hundreds of thousands of members which, by promoting the Shiba Inu cryptocurrency token, is having a remarkably positive impact on charities, including Indian COVID-19 relief.

Q: When and how can I apply?

A: If you sign up for our mailing list, we will send you instructions when grant programs open for applications. For efficiency and fairness, we do not accept unsolicited applications.

Q: Who can apply?

A: We wish to support promising people and ideas anywhere in the world. Since we are a non-profit organization, we are normally only able to support work associated with research institutions and other non-profits; if you’re unsure whether you qualify, please reach out once our application portal is live.

Q: Will the Shiba Inu Grants be paid in cryptocurrency?

A: No. They will be paid in US dollars and other conventional currencies, as with our past grants.

Q: Why would humanity cause its own destruction?

A: By mistake or miscommunication. Such errors have brought humanity to the brink of catastrophe many times in the past, and biotechnology and AI pose arguably even greater threats.

Q: Isn’t it naïve to think that humanity would abstain from developing destructive technologies?

A: No. Several national bioweapon programs existed around 1970, and yet bioweapons are now illegal under international law. Thanks in significant part to Future of Life Award winner Prof. Matthew Meselson, these weapons of mass destruction never entered into widespread use, and biology’s main use today is saving lives.

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

Isaac Asimov

New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial
https://futureoflife.org/ai/2015selection/

Elon Musk-backed program signals growing interest in new branch of artificial intelligence research

July 1, 2015

Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off aimed at ensuring that society can reap the benefits of AI while avoiding potential pitfalls.

The Boston-based Future of Life Institute (FLI) announced the selection of 37 research teams around the world to which it plans to award about $7 million from Elon Musk and the Open Philanthropy Project as part of a first-of-its-kind grant program dedicated to “keeping AI robust and beneficial”. The program launches as an increasing number of high-profile figures including Bill Gates, Elon Musk and Stephen Hawking voice concerns about the possibility of powerful AI systems having unintended, or even potentially disastrous, consequences. The winning teams, chosen from nearly 300 applicants worldwide, will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.

The 37 projects being funded include:

  • Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
  • A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
  • A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
  • A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
  • A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
  • A new Oxford-Cambridge research center for studying AI-relevant policy

As Skype founder Jaan Tallinn, one of FLI’s founders, has described this new research direction, “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”

When the Future of Life Institute issued an open letter in January calling for research on how to keep AI both robust and beneficial, it was signed by a long list of AI researchers from academia, nonprofits and industry, including AI research leaders from Facebook, IBM, and Microsoft and the founders of Google’s DeepMind Technologies. It was seeing that widespread agreement that moved Elon Musk to seed the research program that has now begun.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk at the time. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”

“I am glad to have an opportunity to carry out this research focused on increasing the transparency of AI robotic systems,” said Manuela Veloso, past president of the Association for the Advancement of Artificial Intelligence (AAAI) and winner of one of the grants.

“This grant program was much needed: because of its emphasis on safe AI and multidisciplinarity, it fills a gap in the overall scenario of international funding programs,” added Prof. Francesca Rossi, president of the International Joint Conference on Artificial Intelligence (IJCAI), also a grant awardee.

Tom Dietterich, president of the AAAI, described how his grant — a project studying methods for AI learning systems to self-diagnose when failing to cope with a new situation — breaks the mold of traditional research:

“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”

As Terminator Genisys debuts this week, organizers stressed the importance of separating fact from fiction. “The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, said FLI president Max Tegmark. “We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”

The full list of research grant winners can be found here. The plan is to fund these teams for up to three years, with most of the research projects starting by September 2015, and to focus the remaining $4M of the Musk-backed program on the areas that emerge as most promising.

FLI has a mission to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
