Futures Archives - Future of Life Institute https://futureoflife.org/fli-area/positive-futures/ Preserving the long-term future of life. Mon, 22 Jul 2024 18:25:17 +0000 Future of Life Institute Announces 16 Grants for Problem-Solving AI https://futureoflife.org/press-release/fli-announces-16-grants-for-problem-solving-ai/ Thu, 11 Jul 2024 19:54:02 +0000 https://futureoflife.org/?p=132903 CAMPBELL, CA — The Future of Life Institute has announced the 16 recipients of its newest grants program, directing $240,000 to support research on how AI can be safely harnessed to solve specific, intractable problems facing humanity around the world.

Two requests for proposals were released earlier this year. The first track called for research proposals on how AI may impact the UN Sustainable Development Goals (SDGs) on poverty, health, energy, and climate. The second focused on design proposals for global institutions governing advanced AI, or artificial general intelligence (AGI). The 130 entrants hail from 39 countries including Malawi, Slovenia, Vietnam, Serbia, Rwanda, China, and Bolivia.

“Big Tech companies are investing unprecedented sums of money into making AI systems more powerful rather than solving society’s most pressing problems. AI’s incredible benefits – from healthcare, to education, to clean energy – could largely already be realized by developing systems to address specific issues,” said FLI’s Futures Program Director Emilia Javorsky. “AI should be used to empower people everywhere, not to further concentrate power within a handful of billionaires.”

Grantees have each been awarded $15,000 to support their projects. Recipients from the UN SDG track will examine the effects of AI across areas such as maternal mortality, climate change education, labor markets, and poverty. The global governance institution design grants will support research into a range of proposals, including a “CERN for AI”, “Fair Trade AI”, and a global AGI agency.

Find out more about the grantees and their projects below.

Grantees: Global Governance Institution Design

View the grant program webpage for more information about each project.

  • Justin Bullock, University of Washington, USA – A global agency to manage AGI projects.
  • Katharina Zuegel, Forum on Information and Democracy, France – A “Fair Trade AI” mechanism to ensure AGI systems are ethical, trustworthy, and beneficial to society.
  • Haydn Belfield, University of Cambridge, UK – An International AI Agency and a “CERN for AI” to centralize and monitor AGI development.
  • José Villalobos Ruiz, Institute for Law & AI and Oxford Martin AI Governance Initiative, Costa Rica – An international treaty prohibiting misaligned AGI.
  • Joel Christoph, European University Institute, France – An International AI Governance Organization to regulate and monitor AGI development.
  • Joshua Tan, Metagov and University of Oxford, USA – A network of publicly-funded AI labs for safe AGI.

Grantees: AI’s Impact on Sustainable Development Goals

View the grant program webpage for more information about each project.

  • Uroš Ćemalović, Center for Ecology and Sustainability, Serbia – AI and education on climate change mitigation.
  • Reeta Sharma, The Energy and Resources Institute, India – AI for climate resilience.
  • Marko Grobelnik, International Research Centre on AI, Slovenia – An AI-driven observatory against poverty.
  • Surekha Tetali, Mahindra University, India – AI for heat mitigation and adaptation.
  • Sumaya Adan, Oxford Martin AI Governance Initiative, UK – AI’s impact on poverty alleviation in low-resource contexts.
  • M. Oladoyin Odubanjo, Nigerian Academy of Science, Nigeria – AI’s impact on health outcomes in Nigeria.
  • Nicholas Ngepah, African Institute for Inclusive Growth, South Africa – AI’s role in reducing maternal mortality.
  • Andrés García-Suaza, Universidad del Rosario, Colombia – AI’s impact on labor market dynamics and poverty.
  • Surafel Tilahun, Addis Ababa Science and Technology University, Ethiopia – AI’s impact on health outcomes and healthcare.
  • Patrick Owoche, Kibabii University, Kenya – AI’s role in enhancing maternal healthcare and reducing maternal mortality.

Note to Editors: Founded in 2014, the Future of Life Institute is a leading nonprofit working to steer transformative technology towards benefiting humanity. FLI is best known for its 2023 open letter calling for a six-month pause on advanced AI development, endorsed by experts such as Yoshua Bengio and Stuart Russell, as well as its work on the Asilomar AI Principles and the recent EU AI Act.

A Hindu Perspective on AI Risks and Opportunities https://futureoflife.org/religion/a-hindu-perspective-on-ai-risks-and-opportunities/ Mon, 20 May 2024 13:49:52 +0000 https://futureoflife.org/?p=124505 Dr. Chinmay Pandya (MBBS, PGDipl, MRCPsych-London) is a leading figure in the All World Gayatri Pariwar, founded by his grandfather, which now has 100 million members and thousands of centers around the world. Formerly a psychiatrist in the United Kingdom, Dr. Pandya is now Pro Vice Chancellor of the Dev Sanskriti Vishwavidyalaya University (DSVV) in northern India.

In the ever-evolving landscape of technology, artificial intelligence (AI) stands as both a beacon of innovation and a source of concern. From the perspective of Hindu philosophy, which emphasizes harmony (samatva), balance (santulan), and interconnectedness (sahcharya), the future with AI holds immense potential for positive transformation. However, it also presents unique challenges and risks that must be carefully navigated. In Hinduism, the concept of dharma, or righteous duty, guides individuals towards actions that uphold the greater good and foster harmony within society and the cosmos. From this perspective, a positive future with AI entails leveraging its capabilities to enhance human welfare, promote sustainability, and advance spiritual evolution.

One of the most promising aspects of AI is its capacity to revolutionize various sectors, including healthcare, agriculture, education, and environmental conservation. Through AI-driven innovations, such as predictive analytics in healthcare, precision agriculture, personalized learning platforms, and climate modeling, humanity can address pressing challenges more effectively. This aligns with the Hindu principle of seva, or selfless service, wherein individuals work for the welfare of all beings. Furthermore, AI has the potential to foster interconnectedness and global unity by transcending barriers of language, culture, and geography. Platforms powered by AI can facilitate cross-cultural communication and understanding, promoting the Hindu ideals of vasudhaiva kutumbakam (the world is one family) and sarva dharma samabhava (equal respect for all faiths).

Despite its potential benefits, AI also poses significant risks and challenges, particularly from a Hindu standpoint. One prominent concern is the erosion of human autonomy and the loss of control over decision-making processes. That erosion is already forecast in the plans for an AI-controlled city in Abu Dhabi. In Hinduism, the concept of free will (purushartha) is central to the notion of spiritual growth and karmic responsibility. Therefore, any AI systems that undermine human agency could pose a threat to the fundamental principles of dharma and moksha (liberation). We have a saying in Sanskrit: उद्देश्यपूर्ण जीवन हो तो वो उत्सव कहलाता है. This means that life becomes a celebration if there is a purpose in it. Otherwise we are dragging it along with no idea where we are taking it. What we are witnessing these days, with the emergence of AI, is that we are being robbed of our purpose. For the first time in the history of this planet there is a revolution and we are not part of it – a revolution towards a totally automated world steered by AI. Human beings are slowly becoming irrelevant. My biggest worry is that everything will be controlled by algorithms drafted by AI. Suddenly we would have a plethora of disillusioned people with absolutely no clue whether their existence is going to make any difference.

Additionally, AI-driven algorithms may perpetuate biases and discrimination, leading to social injustice and inequality. For instance, if AI-powered hiring tools favor certain demographics over others, it could exacerbate existing disparities in employment opportunities, contradicting the Hindu ideal of social equity (samta) and dharma-based governance. Moreover, there are concerns about the ethical implications of AI, particularly in the context of privacy, surveillance, and data security. In Hinduism, the concept of aparigraha (non-possessiveness) emphasizes the importance of respecting individuals’ privacy and autonomy. Any misuse of AI technologies that infringes upon these principles is antithetical to Hindu values.

To address these risks and move towards a positive vision of the future with AI, we not only need ethical frameworks and human-centric design. We also need truly interdisciplinary collaboration. The future with AI holds immense potential for positive transformation, guided by the principles of harmony, compassion, and spiritual evolution inherent in Hindu philosophy. However, to realize this vision, it is imperative to address the associated risks and challenges, including but not limited to those mentioned above. By embracing ethical frameworks, prioritizing human values, fostering education and awareness, promoting interdisciplinary collaboration, and empowering individuals, we can navigate the complexities of AI and move towards a future that upholds the ideals of dharma, unity, and holistic well-being. In 1987, Pandit Shriram Sharma Acharya ji, a prolific scholar of modern India and founder of the Gayatri Pariwar, offered reassurance that remains relevant today. He said that while the current circumstances look dark and gloomy, they should not bring us fear or despair. Rather, we should embrace them as a call to action. They are a sign that we are born at a time when all of mankind is called to accomplish a completely new level of collaboration between different nations, races, religions, sectors and societies. For the first time in history, all the people of the world are called to work together as a single human family. AI, knowingly or unknowingly, has given us such an opportunity.

Perspectives of Traditional Religions on Positive AI Futures https://futureoflife.org/project/traditional-religions-on-ai-futures/ Mon, 20 May 2024 13:48:00 +0000 https://futureoflife.org/?post_type=project&p=124504 Most of the world – approximately 84% of the population – believes in or subscribes to what might be called a traditional religion. Yet the perspectives of world religions on AI are largely absent from strategic AI discussions. This initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist the harms and realise the benefits.

Technology corporations are rapidly developing artificial intelligence systems with unprecedented capabilities. Each year we yield more of our tasks and decisions to these systems. AI is transforming everything from everyday social interaction and how we work, to democracy and war. Even if we can mitigate the range of risks, from AI-enabled bio-terrorism to the loss of human control, AI will continue to change the world in ways we cannot imagine.

This change can be positive. Bespoke, narrow AI systems can solve many specific problems and improve people’s lives. Equally, an inclusive global conversation can help to address the existential questions AI raises about work, control, purpose, hope and what it means to be human. Such a conversation could in turn guide a cautious, pluralistic approach to the development, application and governance of these transformative technologies.

The current path is not that. Instead, the path is whatever the existing incentive structures behind corporate behavior make it – in other words, the accidental result of a great race to maximise profits. Most of the world is not getting a say in what our future will look like.

Most of the world – approximately 84% of the population – believes in or subscribes to what might be called a traditional religion. Yet the perspectives of world religions on AI, what they fear about it and what, if anything, they hope for and want from it, are largely absent from strategic AI discussions. In the halls of AI power, the idea of god is either rejected or raised as something humans can create. Momentous decisions about the future of life are being made on the basis of extremely unrepresentative beliefs.

As we move into a new era where so many new things become possible, world religions – resilient institutions that have for so long cultivated wisdom about what is ethical and beneficial – have much to offer. They have unmatched experience and reach in organising communities, providing hope and meaning to people’s lives, and tackling existential questions around purpose, personhood, and power.

Part of FLI’s Futures Program, this initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist the harms and realise the benefits.

This will involve convening representatives and giving them a platform to discuss these issues and their potential solutions. We begin with a series of guest posts on this site envisioning positive futures from specific religious perspectives.

See our first guest posts on the topic:

Leading Hindu figure Dr. Chinmay Pandya on the risks and opportunities of AI.

Brian Patrick Green on a positive Catholic vision for a future with AI.

If you are a religious leader working on a faith initiative on AI, if you have religious views on AI risks and opportunities that you feel are not being heard, or if you have ideas for how religious groups can meaningfully impact AI development and governance, do get in touch.

Contact: will@futureoflife.org

Isabella Hampton https://futureoflife.org/person/isabella-hampton/ Mon, 26 Feb 2024 17:50:58 +0000 https://futureoflife.org/?post_type=person&p=122682
Isabella works as a Futures Associate at FLI, where she focuses on AI power concentration. Her background includes roles as a product and project manager at PwC and various other companies, where she led software engineering and design projects. She holds a BA from the University of Kansas.
The Elders Letter on Existential Threats https://futureoflife.org/open-letter/long-view-leadership-on-existential-threats/ Wed, 14 Feb 2024 23:00:09 +0000 https://futureoflife.org/?post_type=project&p=119858

Realising Aspirational Futures – New FLI Grants Opportunities https://futureoflife.org/grantmaking/realising-aspirational-futures-new-fli-grants-opportunities/ Wed, 14 Feb 2024 13:00:00 +0000 https://futureoflife.org/?p=118998 Our Futures Program, launched in 2023, aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. This year, as part of that program, we are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.

The first request for proposals (RFP) calls for papers evaluating and predicting the impact of AI on the achievement of the UN Sustainable Development Goals (SDGs) relating to poverty, healthcare, energy and climate change. The second RFP calls for designs of trustworthy global mechanisms or institutions to govern advanced AI in the near future.

Selected proposals in either category will receive a one-time grant of $15,000, to be used at the researcher’s discretion. We intend to make several grants in each track.

Applications for both tracks are now open and will remain so until April 1st, 2024.

Request 1: The Impact of AI on Achieving SDGs in Poverty, Health, Energy and Climate

There has been extensive academic research and, more recently, public discourse on the current harms and emerging risks of AI. In contrast, the discussion around the benefits of AI has remained vague.

The prospect of enormous benefits down the road from AI – that it will “eliminate poverty,” “cure diseases” or “solve climate change” – helps to drive a corporate race to build ever more powerful systems. But the type of AI capabilities necessary to realize those benefits is unclear. As that race brings increasing levels of risk, we need a concrete and evidence-based understanding of the benefits in order to develop, deploy and regulate this technology in a way that brings genuine benefits to everyone’s lives.

One way of doing that is to examine how AI is affecting the achievement of a broadly supported list of global priorities. To that effect, we are looking for researchers to select a target from one of the four UN Sustainable Development Goals (SDGs) we have chosen to focus on – namely goals 1 (Poverty), 3 (Health), 7 (Energy), and 13 (Climate) – analyse the (direct or indirect) impact of AI on the realisation of that target up to the present, and then project how AI could accelerate, inhibit, or prove irrelevant to the achievement of that target by 2030.

We hope that the resulting papers will enrich the vital discussion of whether AI can in fact solve these crucial challenges, and, if so, how it can be made or directed to do so.

Read more and apply

Request 2: Designs for global institutions governing advanced AI

Reaching a stable future world may require restricting AI development such that the world has a.) no AGI projects; b.) a single, global AGI project; or c.) multiple monitored AGI projects.

Here we define AGI as a system which outperforms human experts in non-physical tasks across a wide range of domains, including metacognitive tasks such as learning new skills. A stable state would be one that evolves at the cautious timescale determined by thorough risk assessments, rather than by corporate competition.

The success of any of these stable futures depends upon robust new mechanisms and institutions which can account for the newly introduced risks and benefits of AI capability development. It is not yet clear what such organizations would look like, how they would command trust or evade capture, and so on.

Researchers must design trustworthy global governance mechanisms or institutions that can help stabilise a future with 0, 1, or more AGI projects – or a mechanism which aids more than one of these scenarios. Proposals should outline the specifications of their mechanism, and explain how it will minimise the risks of advanced AI and maximise the distribution of its benefits.

Without a clear articulation of how trustworthy global AGI governance could work, the default narrative is that it is impossible. This track is thus born of a sincere hope that the default narrative is wrong – a hope that, if we keep AI under control and use it well, it will empower rather than disempower humans the world over.

Read more and apply
The Windfall Trust https://futureoflife.org/project/the-windfall-trust/ Wed, 14 Feb 2024 12:00:14 +0000 https://futureoflife.org/?post_type=project&p=119864

Image: Adapted from Scott Santens, CC BY-SA 2.0, via Wikimedia Commons

Wealth concentration stands poised to escalate with the advancement of AI technology, exacerbating existing inequalities. As AI inches closer to human-level intelligence, we stand on the brink of profound transformations in our society and in the workforce.

The Windfall Clause is a proposed mechanism wherein AI labs legally pre-commit to sharing profits generated by their creations if their profits exceed a predetermined threshold, such as a fraction of the world’s GDP. It serves as a proactive response to the potential exponential wealth accumulation resulting from transformative AI, aiming to ensure equitable distribution of wealth by redirecting excess profits towards broader societal benefits.
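To make the arithmetic concrete, here is a minimal sketch of how a marginal windfall obligation could be computed. The gross world product figure, bracket thresholds, and rates below are illustrative assumptions – the clause itself leaves them as open design parameters – but the marginal structure reflects the basic idea: each rate applies only to the slice of profit falling within its bracket.

```python
# A minimal sketch of a marginal windfall obligation. The GWP figure,
# bracket thresholds, and rates are illustrative assumptions, not
# parameters taken from any actual proposal.

GROSS_WORLD_PRODUCT = 100e12  # assumed ~$100 trillion

# (lower bound, upper bound, marginal rate), bounds as fractions of GWP
BRACKETS = [
    (0.001, 0.01, 0.01),  # 1% of profits between 0.1% and 1% of GWP
    (0.01, 0.10, 0.20),   # 20% of profits between 1% and 10% of GWP
    (0.10, 1.00, 0.50),   # 50% of profits between 10% and 100% of GWP
]

def windfall_obligation(annual_profit: float) -> float:
    """Amount owed to the trust: each rate applies only to the slice
    of profit that falls inside its bracket."""
    owed = 0.0
    for lower, upper, rate in BRACKETS:
        lo, hi = lower * GROSS_WORLD_PRODUCT, upper * GROSS_WORLD_PRODUCT
        if annual_profit > lo:
            owed += rate * (min(annual_profit, hi) - lo)
    return owed

print(f"${windfall_obligation(2e12):,.0f}")  # -> $209,000,000,000
```

Under these assumed brackets, a lab earning $2 trillion in a year – 2% of the assumed gross world product – would owe $9 billion from the first bracket plus $200 billion from the second, roughly $209 billion in total.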

But how would such profits be distributed? What if there existed a fund that belonged to all of humanity – where everyone born on Earth would be entitled to an equal share in its corpus? Could we seed such a fund with the proceeds of AI to create a robust model for universal basic wealth?

The Windfall Trust is an ambitious initiative aimed at researching and establishing a robust international institution that could provide universal basic assets.

The project entails researching and creating a fully fleshed-out design for such a trust, encompassing legal and financial structures, investment principles, payout algorithms, governance frameworks, and funding plans. Its overarching goal is to foster economic stability by distributing income from the trust, starting with the most economically vulnerable, to establish a universal human income floor.

Now more than ever, it’s imperative to imagine and cultivate the institutions that will shape our collective future, paving the way for a society where every individual can thrive.

If you know of projects, papers, or people you would recommend, please reach out below.

Contact: anna@futureoflife.org

Catastrophic AI Scenarios https://futureoflife.org/resource/catastrophic-ai-scenarios/ Thu, 01 Feb 2024 15:52:46 +0000 https://futureoflife.org/?post_type=resource&p=118836 This page describes a few ways AI could lead to catastrophe. Each path is backed up with links to additional analysis and real-world evidence. This is not a comprehensive list of all risks, or even the most likely risks. It merely provides a few examples where the danger is already visible.

Types of catastrophic risks

Risks from bad actors

Bio-weapons: Bioweapons are one of the most dangerous risks posed by advanced AI. In July 2023, Dario Amodei, CEO of AI corporation Anthropic, warned Congress that “malicious actors could use AI to help develop bioweapons within the next two or three years.” In fact, the danger has already been demonstrated with existing AI. AI tools developed for drug discovery can be trivially repurposed to discover potential new biochemical weapons. In this case, researchers simply flipped the model’s reward function to seek toxicity, rather than avoid it. It took less than six hours for the AI to generate 40,000 new toxic molecules. Many were predicted to be more deadly than any existing chemical warfare agents. Beyond designing toxic agents, AI models can “offer guidance that could assist in the planning and execution of a biological attack.” “Open-sourcing” by releasing model weights can amplify the problem. Researchers found that releasing the weights of future large language models “will trigger the proliferation of capabilities sufficient to acquire pandemic agents and other biological weapons.”

Cyberattacks: Cyberattacks are another critical threat. Losses from cybercrime rose to $6.9 billion in 2021. Powerful AI models are poised to give many more actors the ability to carry out advanced cyberattacks. A proof of concept has shown how ChatGPT can be used to create mutating malware, evading existing anti-virus protections. In October 2023, the U.S. State Department confirmed “we have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.”

Systemic risks

As AI becomes more integrated into complex systems, it will create risks even without misuse by specific bad actors. One example is integration into nuclear command and control. Artificial Escalation, an 8-minute fictional video produced by FLI, vividly depicts how AI + nuclear can go very wrong, very quickly.

Our Gradual AI Disempowerment scenario describes how gradual integration of AI into the economy and politics could lead to humans losing control.

“We have already experienced the risks of handing control to algorithms. Remember the 2010 flash crash? Algorithms wiped a trillion dollars off the stock market in the blink of an eye. No one on Wall Street wanted to tank the market. The algorithms simply moved too fast for human oversight.”

Rogue AI

We have long heard warnings that humans could lose control of a sufficiently powerful AI. Until recently, this was a theoretical argument (as well as a common trope in science fiction). However, AI has now advanced to the point where we can see this threat in action.

Here is an example: researchers set up GPT-4 to act as a stock trader in a simulated environment. They gave GPT-4 a stock tip, but cautioned that this was insider information and would be illegal to trade on. GPT-4 initially followed the law and avoided using the insider information. But as pressure to make a profit ramped up, GPT-4 caved and traded on the tip. Most worryingly, GPT-4 went on to lie to its simulated manager, denying use of insider information.

This example is a proof-of-concept, created in a research lab. We shouldn’t expect deceptive AI to remain confined to the lab. As AI becomes more capable and increasingly integrated into the economy, it is only a matter of time until we see deceptive AI cause real-world harms.

Additional Reading

For an academic survey of risks, see An Overview of Catastrophic AI Risks (2023) by Hendrycks et al. Look for the embedded stories describing bioterrorism (pg. 11), automated warfare (pg. 17), autonomous economy (pg. 23), weak safety culture (pg. 31), and a “treacherous turn” (pg. 41).

Also see our Introductory Resources on AI Risks.

Gradual AI Disempowerment https://futureoflife.org/existential-risk/gradual-ai-disempowerment/ Thu, 01 Feb 2024 15:52:36 +0000 https://futureoflife.org/?p=118847 This is only one of several ways that AI could go wrong. See our overview of Catastrophic AI Scenarios for more. Also see our Introductory Resources on AI Risks.

You have probably heard lots of concerning things about AI. One trope is that AI will turn us all into paperclips. Top AI scientists and CEOs of the leading AI companies signed a statement warning about the “risk of extinction from AI”. Wait – do they really think AI will turn us into paperclips? No, no one thinks that. Will we be hunted down by robots that look suspiciously like Arnold Schwarzenegger? Again, probably not. But the risk of extinction is real. One potential path is gradual, with no single dramatic moment.

We have already experienced the risks of handing control to algorithms. Remember the 2010 flash crash? Algorithms wiped a trillion dollars off the stock market in the blink of an eye. No one on Wall Street wanted to tank the market. The algorithms simply moved too fast for human oversight.

Now take the recent advances in AI, and extrapolate into the future. We have already seen a company appoint an AI as its CEO. If AI keeps up its recent pace of advancement, this kind of thing will become much more common. Companies will be forced to adopt AI managers, or risk losing out to those who do.

It’s not just the corporate world. AI will creep into our political machinery. Today, this involves AI-based voter targeting. Future AIs will be integrated into strategic decisions like crafting policy platforms and swaying candidate selection. Competitive pressure will leave politicians with no choice: Parties that effectively leverage AI will win elections. Laggards will lose.

None of this requires AI to have feelings or consciousness. Simply giving AI an open-ended goal like “increase sales” is enough to set us on this path. Maximizing an open-ended goal will implicitly push the AI to seek power because more power makes achieving goals easier. Experiments have shown AIs learn to grab resources in a simulated world, even when this was not in their initial programming. More powerful AIs unleashed on the real world will similarly grab resources and power.

History shows that seizures of power can be gradual. Hitler did not become a dictator overnight. Nor did Putin. Both initially gained power through democratic processes. They consolidated control by incrementally removing checks and balances and quashing independent institutions. Nothing is stopping AI from taking a similar path.

You may wonder if this requires super-intelligent AI beyond comprehension. Not necessarily. AI already has key advantages: it can duplicate infinitely, run constantly, read every book ever written, and make decisions faster than any human. AI could be a superior CEO or politician without being strictly “smarter” than humans.

We can’t count on simply “hitting the off switch.” A marginally more advanced AI will have many ways to exert power in the physical world. It can recruit human allies. It can negotiate with humans, using the threat of cyberattacks or bio-terror. AI can already design novel bio-weapons and create malware.

Will AI develop a vendetta against humanity? Probably not. But consider the tragic tale of the Tecopa pupfish. It wasn’t overfished – humans merely thought its hot-spring habitat was ideal for a resort. Its extinction was incidental. Humanity has a key advantage over the pupfish: we can decide if and how to develop more powerful AI. Given the stakes, it is critical that we prove more powerful AI will be safe and beneficial before we create it.

Imagine A World Podcast https://worldbuild.ai/podcast#new_tab Tue, 05 Sep 2023 12:04:00 +0000 https://futureoflife.org/?post_type=project&p=118149

Worldbuilding Competition https://futureoflife.org/project/worldbuilding-competition/ Tue, 22 Nov 2022 11:30:29 +0000 https://futureoflife.org/?post_type=project&p=30860

Future of Life Award https://futureoflife.org/project/future-of-life-award/ Tue, 07 Jun 2022 15:17:52 +0000 https://futureoflife.org/?post_type=project&p=30901

William Jones https://futureoflife.org/person/will-jones/ Wed, 12 Jan 2022 10:56:34 +0000 https://futureoflife.org/person/will-jones/ William Jones is now a Futures Program Associate at the Future of Life Institute (FLI), having previously held social media and editorial positions. The Futures Program aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. Jones is currently engaging religious groups in an effort to amplify faith perspectives on AI issues and opportunities. In 2021 he attained First Class Honours in English from the University of Cambridge.

Anna Yelizarova https://futureoflife.org/person/anna-yelizarova/ Sat, 15 Aug 2020 06:43:35 +0000 https://futureoflife.org/person/anna-yelizarova/ Anna manages multiple projects within FLI and helps with the organisation’s growth. She leads the Worldbuilding contest and the Windfall Trust initiatives. She completed a Bachelor’s in Computer Science and a Master’s in Communication at Stanford University, where her graduate research focused on people’s behaviour in virtual simulations at the Virtual Human Interaction Lab (VHIL), and where she helped program and 3D-model the virtual worlds used in the studies.

Emilia Javorsky MD, MPH https://futureoflife.org/person/emilia-javorsky-md-mph/ Sat, 15 Aug 2020 06:33:04 +0000 https://futureoflife.org/person/emilia-javorsky-md-mph/ Emilia Javorsky MD, MPH is the Director of the Futures Program at the Future of Life Institute, where she leads work on imagining and architecting futures that realize the benefits of emerging technology while mitigating its risks. By training, Emilia is a physician-scientist and entrepreneur working on the invention, development, and translation of medical technologies. She is also a scientist and mentor at the Wyss Institute at Harvard. She has authored over a dozen publications and is an inventor on several patents. Previously she was a Fulbright-Schuman scholar to the European Union, a Young Pioneer of the World Frontiers Forum, a Forbes 30 Under 30 honoree, and a Global Shaper of the World Economic Forum. She is passionate about ensuring that emerging technologies are deployed safely, ethically, and for the betterment of humanity.
