State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons
https://futureoflife.org/resource/state-of-ai/ (May 9, 2019)

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.

These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in the development of new technologies, and as these technologies proliferate it offers no long-term advantage to any one player.

Nevertheless, the development of military AI is accelerating. Below are the current AI arms programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. All information is from State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by PAX.

“PAX calls on states to develop a legally binding instrument that ensures meaningful human control over weapons systems, as soon as possible,” says Daan Kayser, the report’s lead author. “Scientists and tech companies also have a responsibility to prevent these weapons from becoming reality. We all have a role to play in stopping the development of Killer Robots.”

The United States

UN Position

In April 2018, the US underlined the need to develop “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.”

AI in the Military

  • In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as then-Deputy Secretary of Defense Robert Work described it in 2016, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).”
  • The 2016 report ‘Preparing for the Future of AI’ also refers to the weaponization of AI and notably states: “Given advances in military technology and AI more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid.”
  • In September 2018, the Pentagon committed to spend USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to “develop [the] next wave of AI technologies.”
  • The Advanced Targeting and Lethality Automated System (ATLAS), a US Army program, “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.”

Cooperation with the Private Sector

  • Establishing collaboration with private companies can be challenging, as the widely publicized case of Google and Project Maven has shown: Following protests from Google employees, Google stated that it would not renew its contract. Nevertheless, other tech companies such as Clarifai, Amazon and Microsoft still collaborate with the Pentagon on this project.
  • The Project Maven controversy deepened the gap between the AI community and the Pentagon. The government has developed two new initiatives to help bridge this gap.
  • DARPA’s OFFSET program, which has the aim of “using swarms comprising upwards of 250 unmanned aircraft systems (UASs) and/or unmanned ground systems (UGSs) to accomplish diverse missions in complex urban environments,” is being developed in collaboration with a number of universities and start-ups.
  • DARPA’s Squad X Experimentation Program, which aims for human fighters to “have a greater sense of confidence in their autonomous partners, as well as a better understanding of how the autonomous systems would likely act on the battlefield,” is being developed in collaboration with Lockheed Martin Missiles.

China

UN Position

China demonstrated the “desire to negotiate and conclude” a new protocol “to ban the use of fully autonomous lethal weapons systems.” However, China does not want to ban the development of these weapons, which has raised questions about its exact position.

AI in the Military

  • There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where Alibaba chairman Jack Ma has said that new technology, including machine learning and artificial intelligence, could lead to World War III.
  • Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.

Cooperation with the Private Sector

  • To advance military innovation, President Xi Jinping has called for China to follow “the road of military-civil fusion-style innovation,” such that military innovation is integrated into China’s national innovation system. This fusion has been elevated to the level of a national strategy.
  • The People’s Liberation Army (PLA) relies heavily on tech firms and innovative start-ups. The larger AI research organizations in China can be found within the private sector.
  • There are a growing number of collaborations between defense and academic institutions in China. For instance, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to create “a platform for the pursuit of dual-use applications of emerging technologies, particularly artificial intelligence.”
  • Regarding the application of artificial intelligence to weapons, China is currently developing “next generation stealth drones,” including, for instance, Ziyan’s Blowfish A2 model. According to the company, this model “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”

Russia

UN Position

Russia has stated that the debate around lethal autonomous weapons should not ignore their potential benefits, adding that “the concerns regarding LAWS can be addressed through faithful implementation of the existing international legal norms.” Russia has actively tried to limit the number of days allotted for such discussions at the UN.

AI in the Military

  • While Russia does not have a military-only AI strategy yet, it is clearly working towards integrating AI more comprehensively.
  • The Foundation for Advanced Research Projects (the Foundation), which can be seen as the Russian equivalent of DARPA, opened the National Center for the Development of Technology and Basic Elements of Robotics in 2015.
  • At a conference on AI in March 2018, Defense Minister Shoigu pushed for increasing cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to the technological and economic security of Russia.”
  • In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”

Cooperation with the Private Sector

  • A new city named Era, devoted entirely to military innovation, is currently under construction. According to the Kremlin, the “main goal of the research and development planned for the technopolis is the creation of military artificial intelligence systems and supporting technologies.”
  • In 2017, Kalashnikov — Russia’s largest gun manufacturer — announced that it had developed a fully automated combat module based on neural-network technologies that enable it to identify targets and make decisions.

The United Kingdom

UN Position

The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that it “would not have any practical effect.”

AI in the Military

  • A 2018 Ministry of Defense report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.”
  • The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next generation autonomous military-systems,” and optimization of human-autonomy teaming.
  • The Defense Science and Technology Laboratory (Dstl), the MoD’s research arm, launched the AI Lab in 2018.
  • In terms of weaponry, the best-known example of autonomous technology currently under development is the top-secret Taranis armed drone, the “most technically advanced demonstration aircraft ever built in the UK,” according to the MoD.

Cooperation with the Private Sector

  • The MoD has a cross-government organization called the Defense and Security Accelerator (DASA), launched in December 2016. DASA “finds and funds exploitable innovation to support UK defense and security quickly and effectively, and support UK prosperity.”
  • In March 2019, DASA awarded a GBP 2.5 million contract to Blue Bear Systems, as part of the Many Drones Make Light Work project. On this, the director of Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”

France

UN Position

France understands the autonomy of LAWS as total, with no form of human supervision from the moment of activation and no subordination to a chain of command. France stated that a legally binding instrument on the issue would not be appropriate, describing it as neither realistic nor desirable. France did propose a political declaration that would reaffirm fundamental principles and “would underline the need to maintain human control over the ultimate decision of the use of lethal force.”

AI in the Military

  • France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as […] in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.”
  • This has been echoed by French Minister for the Armed Forces, Florence Parly, who said that “giving a machine the choice to fire or the decision over life and death is out of the question.”
  • On defense and security, the Villani Report states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.
  • The Villani Report refers to DARPA as a model, though not with the aim of replicating it. However, the report states that some of DARPA’s methods “should inspire us nonetheless. In particular as regards the President’s wish to set up a European Agency for Disruptive Innovation, enabling funding of emerging technologies and sciences, including AI.”
  • The Villani Report emphasizes the creation of a “civil-military complex of technological innovation, focused on digital technology and more specifically on artificial intelligence.”

Cooperation with the Private Sector

  • In September 2018, the Defense Innovation Agency (DIA) was created as part of the Direction Générale de l’Armement (DGA), France’s arms procurement and technology agency. According to Parly, the new agency “will bring together all the actors of the ministry and all the programs that contribute to defense innovation.”
  • One of the most advanced projects currently underway is the nEUROn unmanned combat air system, developed by French arms producer Dassault on behalf of the DGA, which can fly autonomously for over three hours.
  • Patrice Caine, CEO of Thales, one of France’s largest arms producers, stated in January 2019 that Thales will never pursue “autonomous killing machines,” and is working on a charter of ethics related to AI.

Israel

UN Position

In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “[w]e should also be aware of the military and humanitarian advantages.”

AI in the Military

  • It is expected that Israeli use of AI tools in the military will increase rapidly in the near future.
  • The main technical unit of the Israeli Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i, there is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”
  • The Israeli military deploys weapons with a considerable degree of autonomy. One of the most relevant examples is the Harpy loitering munition, also known as a kamikaze drone: an unmanned aerial vehicle that can fly around for a significant length of time to engage ground targets with an explosive warhead.
  • Israel was one of the first countries to “reveal that it has deployed fully automated robots: self-driving military vehicles to patrol the border with the Palestinian-governed Gaza Strip.”

Cooperation with the Private Sector

  • Public-private partnerships are common in the development of Israel’s military technology. There is a “close connection between the Israeli military and the digital sector,” which is said to be one of the reasons for the country’s AI leadership.
  • Israel Aerospace Industries, one of Israel’s largest arms companies, has long been developing increasingly autonomous weapons, including the above-mentioned Harpy.

South Korea

UN Position

In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.” In 2018, it raised concerns about limiting civilian applications as well as the positive defense uses of autonomous weapons.

AI in the Military

  • In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, entitled the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.”
  • South Korea is developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, with the aim of developing and deploying unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.
  • South Korea is known to have used the armed SGR-A1 sentry robot, which has operated in the demilitarized zone separating North and South Korea. The robot has both a supervised mode and an unsupervised mode. In the unsupervised mode “the SGR-A1 identifies and tracks intruders […], eventually firing at them without any further intervention by human operators.”

Cooperation with the Private Sector

  • Public-private cooperation is an integral part of the military strategy: the plan for the AI Research and Development Center is “to build a network of collaboration with local universities and research entities such as the KAIST [Korea Advanced Institute for Science and Technology] and the Agency for Defense Development.”
  • In September 2018, South Korea’s Defense Acquisition Program Administration (DAPA) launched a new strategy to develop its national military-industrial base, with an emphasis on boosting ‘Industry 4.0 technologies’, such as artificial intelligence, big data analytics and robotics.

To learn more about what’s happening at the UN, check out this article from the Bulletin of the Atomic Scientists.

AI Policy Challenges
https://futureoflife.org/resource/ai-policy-challenges-and-recommendations/ (July 17, 2018)

This page is intended as an introduction to the major challenges that society faces when attempting to govern Artificial Intelligence (AI). FLI acknowledges that this list is not comprehensive, but rather a sample of the issues we believe to be consequential.

AI systems have enormous potential to serve and benefit the world. In the long-term, these systems could well enable discoveries in medicine, basic and applied science, managing complex systems, and creating currently-unimagined products and services. At present, AI already helps people in increasingly diverse ways. This includes breakthroughs in acquiring new skills and training, democratizing mental health services, designing and delivering faster production times, providing real-time environmental monitoring for pollution, enhancing cybersecurity defences, reducing healthcare inefficiencies, creating new kinds of enjoyable experiences, and improving real-time translation services to connect people. Overall, AI can foreseeably help manage some of the world’s hardest problems and improve countless lives.

Alongside AI’s many advantages, there are important challenges to address. Below are ten areas of particular concern for the safe and beneficial development of AI in the near- and far-future. These should be prioritised by policymakers seeking to prepare for and mitigate the risks of AI, as well as harness its benefits.

1. Global Governance and International Cooperation

The adoption and development of stronger AI systems will severely test and likely shift existing power dynamics. Discussion of an “AI race” between great powers has become commonplace, and many countries have outlined national strategies that describe efforts to attain or retain a competitive advantage in this field. However, there are important examples of international cooperation that will be increasingly critical in guiding the safe and beneficial development of AI, while reducing race conditions and global security threats.

2. Maximising Beneficial AI Research and Development

The challenges associated with Research and Development (R&D) programs revolve around ensuring AI is not only competent and useful, but also beneficial to humans. To this end, researchers aim to make high quality and standardised datasets more accessible and convince teams to implement risk analyses and mitigation practices in their programs. Similarly, R&D programs can prioritise ‘AI Safety’ by improving their systems’ robustness, benefits and technical design, incorporating core safety mechanisms to mitigate the “control problem,” and avoiding accidents and unwanted side-effects. Additionally, AI safety focuses on the consideration of value alignment between systems and humans. More of these efforts can be found in FLI’s AI safety research landscape.

3. Impact on the Workforce

There are two dimensions to the effects of AI on the workforce. First, there is technology’s ability to enable greater automation. This could impact many industries and worsen economic disparities by generating wealth for a smaller number of people than previous technological revolutions. As a result, society could face significant job losses, necessitating improved retraining programs as well as updated social security measures. Some popular proposals to address this challenge include redistributive economic policies like universal basic income and a “robot tax” to offset some of the increases in economic inequality.

The second dimension centres on the supply of labor. As this technology becomes the cornerstone of the economy, the difficulty in hiring people with the right combination of skills to build reliable, high-quality products will increase. Limits on immigration and work visas could further exacerbate the shortage of qualified individuals. These constraints might force governments to update educational programs that include training to build safe and beneficial AI systems.

4. Accountability, Transparency, and Explainability

Holding an AI system or its designers accountable for its decision-making poses several challenges. The lack of transparency and explainability associated with machine learning means that it can be hard or impossible to know why an algorithm made a particular choice. There is also the question of who has access to key algorithms and how understandable they are, a problem exacerbated by the use of proprietary information. As decision-making is ceded to AI systems, there are few clear guidelines about who should be held accountable for undesirable effects. FLI recently published a position paper providing feedback on the European Commission’s proposal for an AI Liability directive, suggesting ways it can better protect consumers from AI-related harms.

5. Surveillance, Privacy, and Civil Liberties

AI expands surveillance possibilities because it enables real-time monitoring and analysis of video and other data streams, including facial recognition. These uses raise questions about privacy, justice, and civil liberties, particularly in the law enforcement context. Police forces in the US are already experimenting with the use of AI for enhanced predictive policing. There is also increasing pressure on AI companies and institutions to be more transparent about their data and privacy policies. The EU GDPR is one prominent example of a recent data privacy regulation that has profound implications for AI development given its requirements for data collection and management as well as the “right to explanation.” The California Consumer Privacy Act of 2018 is another important privacy regulation that gives consumers greater rights over their personal information.

6. Fairness, Ethics, and Human Rights

The field of AI ethics is growing rapidly to address multiple challenges. One of them is the relative homogeneity in computer science and AI, lacking in gender, racial, and other kinds of diversity, which can lead to skewed product design, blind spots, and false assumptions. Another is the potential for algorithms to reproduce and magnify social biases and discrimination because they are trained on data sets that mirror existing biases in society or misrepresent reality. The field of AI ethics encompasses the issues of value systems and goals encoded into machines, design ethics, and systemic impacts and their effects on social, political, and economic structures. As a result, some have called for justice and ethics to be a more explicit goal of fair, accountable, and transparent (or “FAT”) AI development.
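
To make the data-bias point concrete, here is a minimal, self-contained toy sketch using purely synthetic data: a plain logistic-regression model trained on historically biased decisions learns to penalize a group attribute even though, in this toy setup, the attribute carries no information about actual competence. All names and numbers below are invented for illustration.

```python
# Toy illustration of how a model trained on biased historical data reproduces
# that bias. All data here is synthetic and the setup is deliberately simple.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (a "skill" score) and one group attribute (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: past decisions favored group 0, independently of skill.
p_hire = 1 / (1 + np.exp(-(skill - 1.0 * group)))
hired = (rng.random(n) < p_hire).astype(float)

# Fit an ordinary logistic regression on (skill, group) by gradient descent.
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - hired) / n

# The learned weight on the group attribute comes out clearly negative (about -1):
# the model has absorbed the historical bias, not just the skill signal.
print("learned weights [skill, group, intercept]:", np.round(w, 2))
```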

7. Manipulation

AI can enable and scale micro-targeting practices that are particularly persuasive and can manipulate behaviour and emotions. People could arguably lose autonomy if AI systems nudge their behaviour and even alter their perception of the world. As society cedes control to machines in various areas of our lives, a proportion of individuals might experience an increasing psychological dependency on these systems. Importantly, it is unclear what kinds of relationships people will form with AI systems once they are as capable as humans, or how this will impact human relationships.

AI systems are also capable of amplifying information wars, enabling the rise of highly personalised, and targeted computational propaganda. Fake news and social media bots can be used to tailor messages for political ends. Improvements in the creation of fake videos are making this challenge even greater. Many worry that manipulating the information people see and compromising their ability to make informed decisions through AI could undermine democracy itself.

8. Implications for Health

AI is capable of interpreting massive amounts of biomedical data that can assist diagnostics, patient treatment, and drug development. This can yield positive advances in precision medicine, yet it also raises issues of care access, data control, and opposing beliefs about human health choices. Some people want to use AI to augment human ability through “smart drugs,” nanobots and devices implanted in our bodies, or by directly linking our brains to computer interfaces. Such uses raise safety and ethical challenges, including the possibility of exacerbating inequalities between people.

9. National Security

AI impacts the national and global security landscape by generating new modes of information warfare, expanding the threat landscape, and contributing to destabilisation. Moreover, increasingly powerful AI systems are used to carry out cyberattacks that amplify existing threats and introduce novel ones, even from unsophisticated actors.

These systems also have myriad vulnerabilities: their software can be hacked, and the data they rely upon can be manipulated. Adversarial machine learning, in which data inputs are crafted to confuse a system and cause a mistake, is also a threat. As AI is increasingly featured in a variety of bots and interfaces with which we form connections, there will also be novel security risks relating to the abuse of human trust and reliance.
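
As a rough illustration of the adversarial machine learning point above, the sketch below (illustrative only; the data, model, and exaggerated perturbation size are all made up) trains a small linear classifier and then nudges one input along the sign of the loss gradient, the idea behind the fast gradient sign method. Real attacks target deep networks with perturbations too small for a human to notice, but the mechanism is the same.

```python
# Toy adversarial-example sketch: fast gradient sign method (FGSM) against a
# small logistic classifier trained on synthetic 2-D data. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated clusters: class 0 around (-2, -2), class 1 around (2, 2).
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0.0] * 200 + [1.0] * 200)
Xb = np.column_stack([X, np.ones(len(X))])
w = np.zeros(3)
for _ in range(1000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

def predict(v):
    return int(1 / (1 + np.exp(-(w[:2] @ v + w[2]))) > 0.5)

# Take a point that is clearly class 1 and perturb it against its true label.
x = np.array([2.0, 2.0])
p_x = 1 / (1 + np.exp(-(w[:2] @ x + w[2])))
grad_x = (p_x - 1.0) * w[:2]           # d(loss)/dx for true label y = 1
eps = 2.5                              # exaggerated step so the flip is visible in 2-D
x_adv = x + eps * np.sign(grad_x)      # FGSM: step along the sign of the gradient

print("original prediction:   ", predict(x))      # 1
print("adversarial prediction:", predict(x_adv))  # flips to 0 in this toy setup
```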

The question of how much autonomy is acceptable in weapon systems is another ongoing international debate. Many civil society organisations support international and national bans on autonomous weapon systems that target humans. Arguments against these systems include the fact that they violate international humanitarian law by “removing a human from the loop,” that it is morally wrong to let a machine determine whom to kill, and that we need to avoid an AI arms race, which could lower the threshold of war, or alter the speed, scale, and scope of its effects. After years of discussions under the UN Convention on Certain Conventional Weapons proved unsuccessful, states are now looking to new fora to reach a treaty on these systems. You can read about FLI’s position and work on this particular issue here.

10. Artificial General Intelligence and Superintelligence

The notion of a machine with intelligence equal to humans in most or all domains is called strong AI or artificial general intelligence (AGI). Many AI experts agree that AGI is possible, disagreeing only about timelines and qualifications. AGI technology would encounter all of the challenges of narrow AI, but would additionally pose its own risks, such as containment. Key strategists, AI researchers, and business leaders believe that this advanced AI poses one of the greatest threats to human survival, and an extinction-level risk to life in the long-term. On top of that, combining AI with cyber, nuclear, robotic, drone, or biological weapons introduces numerous other devastating possibilities.

 


 

The Top Myths About Advanced AI
https://futureoflife.org/resource/aimyths/ (August 7, 2016)

A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.


Timeline Myths


The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.
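
A small aside on why the median is the natural summary for such polls: a handful of “centuries away” answers barely moves it, while they drag the mean far into the future. The individual numbers below are invented for illustration; only the 2045 median mirrors the figure cited above.

```python
# Illustrative only: made-up forecast years whose median matches the 2045 figure
# cited above. A few very late guesses shift the mean a lot, the median very little.
estimates = [2029, 2032, 2036, 2040, 2040, 2050, 2075, 2100, 2300, 2500]

def median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

print("median:", median(estimates))                 # 2045.0
print("mean:  ", sum(estimates) / len(estimates))   # 2120.2 (dragged out by the outliers)
```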

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

Controversy Myths


Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
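
The insurance analogy can be made concrete with back-of-the-envelope numbers (all hypothetical):

```python
# Back-of-the-envelope version of the home-insurance analogy above.
# Every number here is hypothetical; the point is only that a small annual
# probability of a very large loss justifies a modest ongoing payment.
p_fire_per_year = 0.003        # ~0.3% annual chance of the house burning down
loss_if_fire = 400_000         # replacement value of the home, USD
premium = 900                  # annual insurance premium, USD

expected_annual_loss = p_fire_per_year * loss_if_fire
print(f"expected uninsured loss per year: ${expected_annual_loss:,.0f}")  # $1,200
print(f"annual premium:                   ${premium:,.0f}")               # $900

# The same structure applies to AI safety research: if the probability of a bad
# outcome is non-negligible and the stakes are enormous, a modest investment in
# reducing that probability has a large expected value.
```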

It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

Myths About the Risks of Superhuman AI


Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

The Interesting Controversies


Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!

Recommended References

Videos

Media Articles

Essays by AI Researchers

Research Articles

Research Collections

Case Studies

Blog posts and talks

Books

Organizations

  • Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
  • Centre for the Study of Existential Risk (CSER): A multidisciplinary research center dedicated to the study and mitigation of risks that could lead to human extinction.
  • Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
  • Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
  • Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
  • Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
  • 80,000 Hours: A career guide for AI safety researchers.

Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the efforts that they have put into compiling it. The organizations above all work on computer technology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

Grants RFP Overview
https://futureoflife.org/resource/grants-rfp/ (December 6, 2015)

2015 INTERNATIONAL GRANTS COMPETITION

I. THE FUTURE OF AI: REAPING THE BENEFITS WHILE AVOIDING PITFALLS

For many years, Artificial Intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant success. In an open letter in January 2015, a large international group of leading AI researchers from academia and industry argued that this success makes it important and timely to also research how to make AI systems robust and beneficial, and that this includes concrete research directions that can be pursued today. The aim of this request for proposals is to support such research.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. However, like any powerful technology, AI has also raised new concerns, such as humans being replaced on the job market and perhaps altogether. Success in creating general-purpose human- or superhuman-level AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. A crucial question is therefore what can be done now to maximize the future benefits of AI while avoiding pitfalls.

This research priorities document gives many examples of research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself. The focus is on delivering AI that is beneficial to society and robust in the sense that the benefits are guaranteed: our AI systems must do what we want them to do. This is a significant expansion in the definition of the field, which up to now has focused on techniques that are neutral with respect to purpose.

II. EVALUATION CRITERIA & PROJECT ELIGIBILITY

This 2015 grants competition is the first wave of the $10M program announced this month, and will give grants totaling about $6M to researchers in academic and other non-profit institutions for projects up to three years in duration, beginning September 1, 2015. Future competitions are anticipated to focus on the areas that prove most successful. Grant applications will be subject to a competitive process of confidential expert peer review similar to that employed by all major U.S. scientific funding agencies, with reviewers being recognized experts in the relevant fields.

Grants will be made in two categories: Project Grants and Center Grants. Project Grants (approx. $100K-$500K) will fund a small group of collaborators at one or more research institutions for a focused research project of up to three years duration. Center Grants (approx. $500K-$1.5M) will fund the establishment of a (possibly multi-institution) research center that organizes, directs and funds (via subawards) research.

Proposals for both grant types will be evaluated according to how topical and impactful they are:

TOPICAL: This RFP is limited to research that aims to help maximize the societal benefit of AI, explicitly focusing not on the standard goal of making AI more capable, but on making AI more robust and/or beneficial.
Funding priority will be given to research aimed at keeping AI robust and beneficial even if it comes to greatly supersede current capabilities, either by explicitly focusing on issues related to advanced future AI or by focusing on near-term problems, the solutions of which are likely to be important first steps toward long-term solutions.

Appropriate research topics for Project Grants span multiple fields; as a general rule of thumb, any project that focuses on making AI more robust and/or beneficial is eligible, even if the project’s topic is not specifically named here. For our most comprehensive list of example research questions, please refer to A survey of research questions for robust and beneficial artificial intelligence, but bear in mind that this list is not intended to be complete.

For the sake of convenience, a very incomplete list of example research topics is given here:

  1. Computer Science:
    • Verification: how to prove that a system satisfies certain desired formal properties. (“Did I build the system right?”)
    • Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. (“Did I build the right system?”)
    • Security: how to prevent intentional manipulation by unauthorized parties.
    • Control: how to enable meaningful human control over an AI system after it begins to operate.
  2. Law and ethics:
    • How should the law handle liability for autonomous systems? Must some autonomous systems remain under meaningful human control?
    • Should some categories of autonomous weapons be banned?
    • Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? Should such trade-offs be the subject of national standards?
    • To what extent can/should privacy be safeguarded as AI gets better at interpreting the data obtained from surveillance cameras, phone lines, emails, shopping habits, etc.?
  3. Economics:
    • Labor market forecasting
    • Labor market policy
    • How can a low-employment society flourish?
  4. Education and outreach:
    • Summer/winter schools on AI and its relation to society, targeted at AI graduate students and postdocs
    • Non-technical mini-schools/symposia on AI targeted at journalists, policymakers, philanthropists and other opinion leaders.

This RFP solicits Center Grants on the topic of AI policy, including forecasting. Proposed centers should address questions spanning (but not limited to) the following:

  • What is the space of AI policies worth studying? Possible dimensions include implementation level (global, national, organizational, etc.), strictness (mandatory regulations, industry guidelines, etc.) and type (policies/monitoring focused on software, hardware, projects, individuals, etc.)
  • Which criteria should be used to determine the merits of a policy? Candidates include verifiability of compliance, enforceability, ability to reduce risk, ability to avoid stifling desirable technology development, adoptability, and ability to adapt over time to changing circumstances.
  • Which policies are best when evaluated against these criteria of merit? Addressing this question (which is anticipated to involve the lion’s share of the proposed work) would include detailed forecasting of how AI development will unfold under different policy options.

The relative amount of funding for different areas is not predetermined, but will be optimized to reflect the number and quality of applications received. Very roughly, the expectation is ~50% computer science, ~20% policy, ~15% law, ethics & economics, and ~15% education.
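
Applied to the roughly $6M first wave described above, those rough percentages imply ballpark figures like the following (illustrative only; as stated, the actual split is not predetermined):

```python
# Ballpark dollar allocations implied by the rough split quoted above, applied to
# the ~$6M first wave. Illustrative only; the RFP notes the split is not fixed.
total = 6_000_000
split = {
    "computer science": 0.50,
    "policy": 0.20,
    "law, ethics & economics": 0.15,
    "education": 0.15,
}
for area, share in split.items():
    print(f"{area:<25} ~${share * total:,.0f}")
# computer science          ~$3,000,000
# policy                    ~$1,200,000
# law, ethics & economics   ~$900,000
# education                 ~$900,000
```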

IMPACTFUL: Proposals will be rated according to their expected positive impact per dollar, taking all relevant factors into account, such as:

  1. Intrinsic intellectual merit, scientific rigor and originality
  2. A high product of likelihood for success and importance if successful (i.e., high-risk research can be supported as long as the potential payoff is also very high)
  3. The likelihood of the research opening fruitful new lines of scientific inquiry
  4. The feasibility of the research in the given time frame
  5. The qualifications of the Principal Investigator and team with respect to the proposed topic
  6. The part a grant may play in career development
  7. Cost effectiveness: Tight budgeting is encouraged in order to maximize the research impact of the project as a whole, with emphasis on scientific return per dollar rather than per proposal
  8. Potential to impact the greater community as well as the general public via effective outreach and dissemination of the research results

Strong proposals will make it easy for FLI to evaluate their impact by explicitly stating what they aim to produce (publications, algorithms, software, events, etc.) and when (after the 1st, 2nd and 3rd year, say). Preference will be given to proposals whose deliverables are made freely available (open access publications, open source software, etc.).

To maximize its impact per dollar, this RFP is intended to complement, not supplement, conventional funding. We wish to enable research that, because of its long-term focus or its non-commercial, speculative or non-mainstream nature, would otherwise go unperformed due to lack of available resources. Thus, although there will be inevitable overlaps, an otherwise scientifically rigorous proposal that is a good candidate for an FLI grant will generally not be a good candidate for funding by the NSF, DARPA, corporate R&D, etc., and vice versa. To be eligible, research must focus on making AI more robust/beneficial, as opposed to the standard goal of making AI more capable.

To aid prospective applicants in determining whether a project is appropriate for FLI, we have provided lists of questions and topics that make suitable targets for research funded under this program in the research priorities document.

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15% (Please note if this is an issue with your institution, or if your organization is not non-profit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)

Subawards are discouraged in the case of Project Grants, but perfectly acceptable for Center Grants.


III. APPLICATION PROCESS

Applications will be accepted electronically through a standard form on our website (click here for the application) and evaluated in a two-part process, as follows:

  1. INITIAL PROPOSAL (due March 1, 2015, 11:59 PM Eastern Time). Must include:
    • A summary of the project, explicitly addressing why it is topical and impactful. These should be 300-500 words for Project Grants and 500-1000 words for Center Grants.
    • A draft budget description not exceeding 200 words, including an approximate total cost over the life of the award and explanation of how funds would be spent
    • A Curriculum Vitae for the Principal Investigator, which MUST be in PDF format, including:
      • Education and employment history
      • A list of up to five representative publications. Optional: if the PI has any previous publications relevant to the proposed research, they may list up to five of these as well, for a total of up to 10 representative and relevant publications. We do wish to encourage PIs to enter relevant research areas where they may not have had opportunities before, so prior relevant publications are not required.
      • Full publication list
    • For Center Grants only: listing and brief bio of Center Co-Investigators, including if applicable the lead investigator at each institution that is part of the center.

    A review panel assembled by FLI will screen each Initial Proposal according to the criteria in Section II. Based on their assessment, the Principal Investigator (PI) may be invited to submit a Full Proposal, on or about March 21 2015, perhaps with feedback from FLI on improving the proposal. Please keep in mind that however positive FLI may be about a proposal at any stage, it may still be turned down for funding after full peer review.

  2. FULL PROPOSAL (due May 17, 2015). Must include:
    • Cover sheet
    • A 200-word project abstract, suitable for publication in an academic journal
    • A project summary not exceeding 200 words, explaining the work and its significance to laypeople
    • A detailed description of the proposed research, not to exceed 15 single-spaced 11-point pages (20 pages for Center Grants), including a short statement of how the application fits into the applicant’s present research program, and a description of how the results might be communicated to the wider scientific community and general public
    • A detailed budget over the life of the award, with justification and utilization distribution (preferably drafted by your institution’s grant officer or equivalent)
    • A list, for all project senior personnel, of all present and pending financial support, including project name, funding source, dates, amount, and status (current or pending)
    • Evidence of tax-exempt status of grantee institution, if other than a US university.
    • Names of three recommended referees
    • Curricula Vitae for all project senior personnel, including:
      • Education and employment history
      • A list of references of up to five previous publications relevant to the proposed research, and up to five additional representative publications
      • Full publication list
    • Additional material may be requested in the case of Center Grants, as specified in the invitation and feedback phase.

Completed Full Proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in Section II. A review panel of scientists in the relevant fields will be convened to produce a final rank ordering of the proposals, which will determine the grant winners, and to make budgetary adjustments if necessary. Public award recommendations will be made on or about July 1, 2015.


IV. FUNDING PROCESS

The peer review and administration of this grants program will be managed by the Future of Life Institute (FLI), futureoflife.org. FLI is an independent, philanthropically funded non-profit organization whose mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

FLI will direct these grants through a Donor Advised Fund (DAF) at the Silicon Valley Community Foundation. FLI will solicit grant applications and have them peer reviewed, and on the basis of these reviews, FLI will advise the DAF on what grants to make. After grants have been made by the DAF, FLI will work with the DAF to monitor the grantee’s performance via grant reports. In this way, researchers will continue to interact with FLI, while the DAF interacts mostly with their institutes’ administrative or grants management offices.

Benefits & Risks of Biotechnology
https://futureoflife.org/resource/risk-of-biotechnology/ (November 14, 2015)

Over the past decade, progress in biotechnology has accelerated rapidly. We are poised to enter a period of dramatic change, in which the genetic modification of existing organisms — or the creation of new ones — will become effective, inexpensive, and pervasive.

Biotech Trends

These past ten years have seen the cost of sequencing a human genome plummet, dropping from ~$10M USD to ~$1,000, while sequencing itself has become significantly faster and easier. Accumulation of large data sets of medical and genetic information will provide an ever-increasing ability to understand and modify our own genome and that of other creatures.
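
To put the scale of that drop in perspective, a quick back-of-the-envelope calculation (the endpoint figures come from the paragraph above; the rest is arithmetic):

```python
# The sequencing cost drop quoted above, made concrete. Endpoints (~$10M to
# ~$1,000 over roughly ten years) come from the text; the rest is arithmetic.
import math

cost_start, cost_end, years = 10_000_000, 1_000, 10
fold_reduction = cost_start / cost_end                         # 10,000x cheaper
halving_time_months = 12 * years / math.log2(fold_reduction)   # cost halves every ~9 months

print(f"fold reduction: {fold_reduction:,.0f}x")
print(f"implied halving time: ~{halving_time_months:.1f} months")
```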

In parallel, a major recent advance in genetic engineering has occurred with the discovery of CRISPR (clustered regularly interspaced short palindromic repeats), a bacterial DNA sequence that codes for a protein (Cas9) and RNA combination that can locate a specific DNA sequence and splice the DNA strand at that location. This enables far cheaper, faster, and more precise genome editing relative to earlier recombinant DNA technologies.

The CRISPR system has been used successfully in complex organisms, including adult mice and even human embryos. As the technique can change the genome of a mature creature, it can in principle be used therapeutically to treat genetic conditions, and clinical trials in humans may be just a few years away. Researchers have also proposed “gene drives” that spread a genetic modification through a population in the wild, so as to (for example) make mice immune to Lyme disease, or make mosquitoes unable to transmit malaria.
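
For readers unfamiliar with guide-based targeting, here is a deliberately oversimplified sketch of the “find a specific sequence, then cut there” idea. Real Cas9 targeting uses a roughly 20-nucleotide guide RNA adjacent to a PAM motif (NGG for the commonly used Cas9), tolerates some mismatches, and involves far more biology than string matching; the sequences below are invented purely for illustration.

```python
# Oversimplified sketch of CRISPR-Cas9's "locate a sequence, cut there" idea.
# Real targeting is far more complex; the sequences here are made up.
import re

genome = "ATGCGTACCGTTAGCTAGGACGTTACCGGATGGCATTACTGGTTAGCACC"
guide = "ACGTTACCGGATGGCATTAC"  # hypothetical 20-nt target matched by the guide RNA

# Require an NGG PAM immediately downstream of the matched 20-nt protospacer.
for m in re.finditer(guide + "(?=.GG)", genome):
    cut_site = m.end() - 3          # Cas9 cuts about 3 bp upstream of the PAM
    left, right = genome[:cut_site], genome[cut_site:]
    print(f"match at {m.start()}, cut between ...{left[-6:]} | {right[:6]}...")
```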


 It is easy to imagine this capability leading to powerful treatments for — or even elimination of — many genetic diseases, cancers and other illnesses, as well as a reduction or eradication of pathogens, dramatically improved food crops, organisms engineered to clean up degraded environments, and many other hugely beneficial biotechnologies. 

Biotech Risks

Unfortunately, it is just as easy to imagine major dangers. Gene drives may upend existing ecosystems in unforeseen ways. Modification of humans could open a Pandora’s box, altering the very meaning of humanity. Perhaps most alarming is that a clear understanding — and easy re-engineering — of human pathogens could lead to deliberate or accidental release of hugely destructive pathogens.  

Scientists performing “gain of function” research have, for example, introduced mutations to the H5N1 virus to make it airborne. Though the intention of such research is to predict and prepare for adverse mutations that may occur naturally, developing these organisms creates the risk of accidental release, and publishing the techniques could provide a blueprint for others to make dangerous modifications to organisms.  

Deliberately engineered pathogens could be given properties that make them even more dangerous than naturally occurring ones.  While such abilities are currently limited to high-end labs run by top researchers, the necessary technology and understanding is rapidly becoming cheaper and more widespread, leading to serious risks of accidental release.  Worse yet, if the set of people with access to such technology and understanding begins to overlap with groups of radical ideology who are willing to use such extreme measures, the results could be devastating unless effective countermeasures are developed first.

Recommended References

Videos

Research Papers

Books

Organizations

The organizations listed above all work on biotechnology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.
