AI Research Archives - Future of Life Institute
https://futureoflife.org/category/ai-research/

Miles Apart: Comparing key AI Act proposals
https://futureoflife.org/ai-policy/miles-apart/ (21 November 2023)

The table below provides an analysis of several transatlantic policy proposals on how to regulate the most advanced AI systems. The analysis shows that the recent non-paper circulated by Italy, France, and Germany (as reported by Euractiv) includes the fewest provisions with regard to foundation models or general-purpose AI systems, even falling below the minimal standard set in a recent U.S. White House Executive Order.

While the non-paper proposes a voluntary code of conduct, it does not include any of the safety obligations required by previous proposals, including those in the Council’s own adopted position. Moreover, the non-paper envisions a much lower level of oversight and enforcement than the Spanish Presidency’s compromise proposal and both the Parliament and Council’s adopted positions.

Podcast episodes in this archive:

  • Steve Omohundro on Provably Safe AGI (6 October 2023): https://futureoflife.org/podcast/steve-omohundro-on-provably-safe-agi/
  • Tom Davidson on How Quickly AI Could Automate the Economy (8 September 2023): https://futureoflife.org/podcast/tom-davidson-on-how-quickly-ai-could-automate-the-economy/
  • Joe Carlsmith on How We Change Our Minds About AI Risk (22 June 2023): https://futureoflife.org/podcast/joe-carlsmith-on-how-we-change-our-minds-about-ai-risk/
  • Dan Hendrycks on Why Evolution Favors AIs over Humans (8 June 2023): https://futureoflife.org/podcast/dan-hendrycks-on-why-evolution-favors-ais-over-humans/
  • Roman Yampolskiy on Objections to AI Safety (26 May 2023): https://futureoflife.org/podcast/roman-yampolskiy-on-objections-to-ai-safety/
  • Connor Leahy on the State of AI and Alignment Research (20 April 2023): https://futureoflife.org/podcast/connor-leahy-on-the-state-of-ai-and-alignment-research/
  • Connor Leahy on AGI and Cognitive Emulation (13 April 2023): https://futureoflife.org/podcast/connor-leahy-on-agi-and-cognitive-emulation/
  • Lennart Heim on Compute Governance (6 April 2023): https://futureoflife.org/podcast/lennart-heim-on-compute-governance/
  • Lennart Heim on the AI Triad: Compute, Data, and Algorithms (30 March 2023): https://futureoflife.org/podcast/lennart-heim-on-the-ai-triad-compute-data-and-algorithms/
  • Liv Boeree on Poker, GPT-4, and the Future of AI (23 March 2023): https://futureoflife.org/podcast/liv-boeree-on-poker-gpt-4-and-the-future-of-ai/
  • Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI (23 February 2023): https://futureoflife.org/podcast/neel-nanda-on-math-tech-progress-aging-living-up-to-our-values-and-generative-ai/
  • Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability (16 February 2023): https://futureoflife.org/podcast/neel-nanda-on-avoiding-an-ai-catastrophe-with-mechanistic-interpretability/
  • Neel Nanda on What is Going on Inside Neural Networks (9 February 2023): https://futureoflife.org/podcast/neel-nanda-on-what-is-going-on-inside-neural-networks/

The Problem of Self-Referential Reasoning in Self-Improving AI: An Interview with Ramana Kumar, Part 2
https://futureoflife.org/ai/the-problem-of-self-referential-reasoning-in-self-improving-ai-an-interview-with-ramana-kumar-part-2/ (21 March 2019)

When it comes to artificial intelligence, debates often arise about what constitutes “safe” and “unsafe” actions. As Ramana Kumar, an AGI safety researcher at DeepMind, notes, the terms are subjective and “can only be defined with respect to the values of the AI system’s users and beneficiaries.”

Fortunately, such questions can mostly be sidestepped when confronting the technical problems associated with creating safe AI agents, as these problems aren’t associated with identifying what is right or morally proper. Rather, from a technical standpoint, a “safe” AI agent is best defined as one that consistently takes actions that lead to the desired outcomes, whatever those desired outcomes may be.

In this respect, Kumar explains that, when it comes to creating an AI agent that is tasked with improving itself, “the technical problem of building a safe agent is largely independent of what ‘safe’ means because a large part of the problem is how to build an agent that reliably does something, no matter what that thing is, in such a way that the method continues to work even as the agent under consideration is more and more capable.”

In short, making a “safe” AI agent should not be conflated with making an “ethical” AI agent. The two terms refer to different things.

In general, sidestepping moralistic definitions of safety makes the technical work of AI quite a bit easier. It allows research to advance while debates on the ethical issues evolve. Case in point: Uber’s self-driving cars are already on the streets, despite the fact that we’ve yet to agree on a framework regarding whether they should safeguard their driver or pedestrians.

However, when it comes to creating a robust and safe AI system that is capable of self-improvement, the technical work gets a lot harder, and research in this area is still in its most nascent stages. This is primarily because we aren’t dealing with just one AI agent; we are dealing with generations of future self-improving agents.

Kumar clarifies, “When an AI agent is self-improving, one can view the situation as involving two agents: the ‘seed’ or ‘parent’ agent and the ‘child’ agent into which the parent self-modifies… and its total effects on the world will include the effects of actions made by its descendants.” As a result, in order to know we’ve made a safe AI agent, we need to understand all possible child agents that might originate from the first agent.

And verifying the safety of all future AI agents comes down to solving a problem known as “self-referential reasoning.”

Understanding the Self-Referential Problem

The problem with self-referential reasoning is most easily understood by defining the term according to its two primary components: self-reference and reasoning.

  • Self-reference: Refers to an instance in which someone (or something, such as a computer program or book) refers to itself. Any person or thing that refers to itself is called “self-referential.”
  • Reasoning: In AI systems, reasoning is a process through which an agent establishes “beliefs” about the world, like whether or not a particular action is safe or a specific reasoning system is sound. “Good beliefs” are beliefs that are sound or plausible based on the available evidence. The term “belief” is used instead of “knowledge” because the things that an agent believes may not be factually true and can change over time.

In relation to AI, then, the term “self-referential reasoning” refers to an agent that is using a reasoning process to establish a belief about that very same reasoning process. Consequently, when it comes to self-improvement, the “self-referential problem” is as follows: An agent is using its own reasoning system to determine that future versions of its reasoning system will be safe.

To explain the problem another way, Kumar notes that, if an AI agent creates a child agent to help it achieve its goal, it will want to establish some beliefs about the child’s safety before using it. This will necessarily involve proving beliefs about the child by arguing that the child’s reasoning process is good. Yet, the child’s reasoning process may be similar to, or even an extension of, the original agent’s reasoning process. And ultimately, an AI system cannot use its own reasoning to determine whether or not its reasoning is good.

From a technical standpoint, the problem comes down to Gödel’s second incompleteness theorem which, Kumar explains, “shows that no sufficiently strong proof system can prove its own consistency, making it difficult for agents to show that actions their successors have proven to be safe are, in fact, safe.”
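For reference, the theorem itself can be stated compactly (a standard formulation, not taken from the interview): for any consistent, effectively axiomatized theory $T$ strong enough to encode basic arithmetic,

$$T \nvdash \mathrm{Con}(T),$$

where $\mathrm{Con}(T)$ is the arithmetized statement “$T$ is consistent.” An agent that reasons in $T$ therefore cannot, within $T$ itself, certify the soundness of proofs produced by a successor that also reasons in $T$.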

Investigating Solutions

To date, several partial solutions to this problem have been proposed; however, our current software doesn’t have sufficient support for self-referential reasoning to make the solutions easy to implement and study. Consequently, in order to improve our understanding of the challenges of implementing self-referential reasoning, Kumar and his team aimed to implement a toy model of AI agents using some of the partial solutions that have been put forth.

Specifically, they investigated the feasibility of implementing one particular approach to the self-reference problem in a concrete setting (specifically, Botworld) where all the details could be checked. The approach selected was model polymorphism. Instead of requiring proof that shows an action is safe for all future use cases, model polymorphism only requires an action to be proven safe for an arbitrary number of steps (or subsequent actions) that is kept abstracted from the proof system.
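Schematically (our gloss, not notation from the paper), the shift is from an unbounded safety obligation to a parametric one:

$$\vdash \mathrm{Safe}^{\infty}(a) \quad\leadsto\quad \vdash \mathrm{Safe}^{\,n}(a)\ \text{ for a free variable } n.$$

That is, instead of proving that action $a$ is safe over an unbounded future, which runs directly into the self-reference obstacle above, the agent proves that $a$ is safe for $n$ further steps, where $n$ is never instantiated inside the proof system.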

Kumar notes that the overall goal was ultimately “to get a sense of the gap between the theory and a working implementation and to sharpen our understanding of the model polymorphism approach.” This would be accomplished by proving a theorem, in a HOL (higher-order logic) theorem prover, that describes the situation.

To break this down a little: theorem provers are computer programs that assist with the development of mathematical correctness proofs. These mathematical correctness proofs are the highest safety standard in the field, showing that a computer system always produces the correct output (or response) for any given input. Theorem provers create such proofs by using the formal methods of mathematics to prove or disprove the “correctness” of the control algorithms underlying a system. HOL theorem provers, in particular, are a family of interactive theorem proving systems that facilitate the construction of theories in higher-order logic. Higher-order logic, which supports quantification over functions, sets, sets of sets, and more, is more expressive than other logics, allowing the user to write formal statements at a high level of abstraction.
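For instance, the following statement (our illustrative example) quantifies over all functions $f$, something first-order logic cannot express directly; it says that every strictly increasing function on the natural numbers grows at least as fast as the identity:

$$\forall f : \mathbb{N} \to \mathbb{N}.\ \big(\forall n.\ f(n) < f(n+1)\big) \Rightarrow \forall n.\ n \le f(n)$$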

In retrospect, Kumar states that trying to prove a theorem about multiple steps of self-reflection in a HOL theorem prover was a massive undertaking. Nonetheless, he asserts that the team took several strides forward when it comes to grappling with the self-referential problem, noting that they built “a lot of the requisite infrastructure and got a better sense of what it would take to prove it and what it would take to build a prototype agent based on model polymorphism.”

Kumar added that MIRI’s (the Machine Intelligence Research Institute’s) Logical Inductors could also offer a satisfying version of formal self-referential reasoning and, consequently, provide a solution to the self-referential problem.

If you haven’t read it yet, find Part 1 here.

]]>
The Unavoidable Problem of Self-Improvement in AI: An Interview with Ramana Kumar, Part 1
https://futureoflife.org/ai/the-unavoidable-problem-of-self-improvement-in-ai-an-interview-with-ramana-kumar-part-1/ (19 March 2019)

Today’s AI systems may seem like intellectual powerhouses that are able to defeat their human counterparts at a wide variety of tasks. However, the intellectual capacity of today’s most advanced AI agents is, in truth, narrow and limited. Take, for example, AlphaGo. Although it may be the world champion of the board game Go, this is essentially the only task that the system excels at.

Of course, there’s also AlphaZero. This algorithm has mastered a host of different games, from chess and shogi (Japanese chess) to Go. Consequently, it is far more capable and dynamic than many contemporary AI agents; however, AlphaZero doesn’t have the ability to easily apply its intelligence to any problem. It can’t move unfettered from one task to another the way that a human can.

The same thing can be said about all other current AI systems — their cognitive abilities are limited and don’t extend far beyond the specific task they were created for. That’s why Artificial General Intelligence (AGI) is the long-term goal of many researchers.

Widely regarded as the “holy grail” of AI research, AGI systems are artificially intelligent agents that have a broad range of problem-solving capabilities, allowing them to tackle challenges that weren’t considered during their design phase. Unlike traditional AI systems, which focus on one specific skill, AGI systems would be able to efficiently tackle virtually any problem that they encounter, completing a wide range of tasks.

If the technology is ever realized, it could benefit humanity in innumerable ways. Marshall Burke, an economist at Stanford University, predicts that AGI systems would ultimately be able to create large-scale coordination mechanisms to help alleviate (and perhaps even eradicate) some of our most pressing problems, such as hunger and poverty. However, before society can reap the benefits of these AGI systems, Ramana Kumar, an AGI safety researcher at DeepMind, notes that AI designers will eventually need to address the self-improvement problem.

Self-Improvement Meets AGI

Early forms of self-improvement already exist in current AI systems. “There is a kind of self-improvement that happens during normal machine learning,” Kumar explains; “namely, the system improves in its ability to perform a task or suite of tasks well during its training process.”

However, Kumar asserts that he would distinguish this form of machine learning from true self-improvement because the system can’t fundamentally change its own design to become something new. In order for a dramatic improvement to occur — one that encompasses new skills, tools, or the creation of more advanced AI agents — current AI systems need a human to provide them with new code and a new training algorithm, among other things.

Yet, it is theoretically possible to create an AI system that is capable of true self-improvement, and Kumar states that such a self-improving machine is one of the more plausible pathways to AGI.

Researchers think that self-improving machines could ultimately lead to AGI because of a process that is referred to as “recursive self-improvement.” The basic idea is that, as an AI system continues to use recursive self-improvement to make itself smarter, it will get increasingly better at making itself smarter. This will quickly lead to an exponential growth in its intelligence and, as a result, could eventually lead to AGI.
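As a toy numerical illustration of this feedback loop (ours, with made-up numbers and a deliberately simplistic growth law):

```python
# Toy model of recursive self-improvement: each round's improvement is
# proportional to the system's current capability, so growth compounds.
# Purely illustrative; the numbers and growth law are assumptions.

def self_improvement_trajectory(capability: float, gain: float, rounds: int):
    """Return capability after each round, where a system with
    capability c improves itself by gain * c per round."""
    trajectory = [capability]
    for _ in range(rounds):
        capability += gain * capability   # more capable systems improve faster
        trajectory.append(capability)
    return trajectory

# Ten rounds at a 50% gain per round yield roughly 58x the starting capability.
print(self_improvement_trajectory(1.0, 0.5, 10))
```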

Kumar says that this scenario is entirely plausible, explaining that, “for this to work, we need a couple of mostly uncontroversial assumptions: that such highly competent agents exist in theory, and that they can be found by a sequence of local improvements.” Recursive self-improvement is thus a concept at the heart of a number of theories on how we can get from today’s moderately smart machines to superintelligent AGI. However, Kumar clarifies that this isn’t the only potential pathway to superintelligent AI.

Humans could discover how to build highly competent AGI systems through a variety of methods. This might happen “by scaling up existing machine learning methods, for example, with faster hardware. Or it could happen by making incremental research progress in representation learning, transfer learning, model-based reinforcement learning, or some other direction. For example, we might make enough progress in brain scanning and emulation to copy and speed up the intelligence of a particular human,” Kumar explains.

Yet, he is also quick to clarify that the capacity for self-improvement is innate to AGI. “Even if iterated self-improvement is not necessary to develop highly competent artificial agents in the first place, explicit self-improvement will still be possible for those agents,” Kumar said.

As such, although researchers may discover a pathway to AGI that doesn’t involve recursive self-improvement, it’s still a property of artificial intelligence that is in need of serious research.

Safety in Self-Improving AI

When systems start to modify themselves, we have to be able to trust that all their modifications are safe. This means that we need to know something about all possible modifications. But how can we ensure that a modification is safe if no one can predict ahead of time what the modification will be?  

Kumar notes that there are two obvious solutions to this problem. The first option is to restrict a system’s ability to produce other AI agents. However, as Kumar succinctly puts it, “We do not want to solve the safe self-improvement problem by forbidding self-improvement!”

The second option, then, is to permit only limited forms of self-improvement that have been deemed sufficiently safe, such as software updates or processor and memory upgrades. Yet, Kumar explains that vetting these forms of self-improvement as safe or unsafe is still exceedingly complicated. In fact, he says that preventing the construction of one specific kind of modification is so complex that it will “require such a deep understanding of what self-improvement involves that it will likely be enough to solve the full safe self-improvement problem.”

And notably, even if new advancements do permit only limited forms of self-improvement, Kumar states that this isn’t the path to take, as it sidesteps the core problem with self-improvement that we want to solve. “We want to build an agent that can build another AI agent whose capabilities are so great that we cannot, in advance, directly reason about its safety…We want to delegate some of the reasoning about safety and to be able to trust that the parent does that reasoning correctly,” he asserts.

Ultimately, this is an extremely complex problem that is still in its most nascent stages. As a result, much of the current work is focused on testing a variety of technical solutions and seeing where headway can be made. “There is still quite a lot of conceptual confusion about these issues, so some of the most useful work involves trying different concepts in various settings and seeing whether the results are coherent,” Kumar explains.

Regardless of what the ultimate solution is, Kumar asserts that successfully overcoming the problem of self-improvement depends on AI researchers working closely together. “The key is to make assumptions explicit, and, for the sake of explaining it to others, to be clear about the connection to the real-world safe AI problems we ultimately care about.”

Read Part 2 here.

How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer
https://futureoflife.org/recent-news/how-to-create-ai-that-can-safely-navigate-our-world-andre-platzer/ (13 December 2018)

Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized how we navigate, how we track and ship goods, and how we regulate traffic.

However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.

This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.

Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point, regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, one of the most impactful uses of artificial intelligence encompasses technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.

Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era in safety, reducing crashes by 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.

This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.

What Does it Mean to Have a Safe System?

Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that CPS are making reliable and safe decisions.

To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes that we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario that it confronts. In this sense, “CPS are at their best if they’re not just running very simple [algorithms], but if they’re running much more sophisticated and advanced systems,” Platzer notes. However, when CPS utilize advanced autonomy, because they are so complex, it becomes far more difficult to prove that they are making systematically sound choices.

In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”

The ultimate goal, then, is to find equilibrium between flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”

How to Make a System Safe

The first step in navigating this issue is to determine how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by this central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they have the conviction that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?

The answer is an automated theorem prover, which is a computer program that assists with the development of rigorous mathematical correctness proofs.

When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
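To give a flavor of what such a statement looks like in this domain (a textbook-style example of the kind used in hybrid-systems verification, not a formula quoted by Platzer): a car at position $x$ with velocity $v$ and braking power $b$, approaching an obstacle at position $m$, satisfies

$$x + \frac{v^2}{2b} \le m \;\rightarrow\; \big[\, x' = v,\ v' = -b \ \&\ v \ge 0 \,\big]\,(x \le m).$$

Read: if the car’s stopping distance $v^2/2b$ currently fits in the space remaining before the obstacle, then along every evolution of the braking dynamics the car never passes $m$. A proof for a full controller shows that every control choice preserves this invariant.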

After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.

Eventually, Platzer hopes that this will culminate in technology that allows CPS to recover from situations where the expected outcome didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.

The more complex such seamless transitions are, the more difficult they are to implement. But they are the ultimate amalgamation of safety and flexibility or, in other words, the ultimate combination of AI and safety proof technology.

Creating the Tech of Tomorrow

To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond in rigor what anyone else is doing for the analysis of cyber-physical systems.”

The KeYmaera X prover, which was created by Platzer and his team, is a tool that allows users to easily and reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.

More technically, KeYmaera X is a hybrid systems theorem prover that analyzes the control program and the physical behavior of the controlled system together, in order to provide both efficient computation and the necessary support for sophisticated safety proof techniques. Ultimately, this work builds off of a previous iteration of the technology known as KeYmaera. However, Platzer states that, in order to optimize the tool and make it as simple as possible, the team essentially “started from scratch.”

Emphasizing just how dramatic these most recent changes are, Platzer notes that, in the previous prover, the correctness of the statements was dependent on some 66,000 lines of code. Notably, each of these 66,000 lines was critical to the correctness of the verdict. According to Platzer, this poses a problem, as it’s exceedingly difficult to ensure that all of the lines are implemented correctly. Although the latest iteration of KeYmaera is ultimately just as large as the previous version, in KeYmaera X, the part of the prover that is responsible for verifying correctness is a mere 2,000 lines of code.

This allows the team to evaluate the safety of cyber-physical systems more reliably than ever before. “We identified this microkernel, this really minuscule part of the system that was responsible for the correctness of the answers, so now we have a much better chance of making sure that we haven’t accidentally snuck any mistakes into the reasoning engines,” Platzer said. Simultaneously, he notes that it enables users to do much more aggressive automation in their analysis. Platzer explains, “If you have a small part of the system that’s responsible for the correctness, then you can do much more liberal automation. It can be much more courageous because there’s an entire safety net underneath it.”
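The architectural idea at work here is often called an LCF-style kernel, and can be sketched as follows (a schematic in Python; KeYmaera X itself is written in Scala, and its kernel implements the far richer rules of differential dynamic logic):

```python
# LCF-style kernel sketch: Theorem values can only be manufactured by the
# small trusted core below, so untrusted automation may search for proofs
# however it likes yet can never fabricate an unjustified Theorem.

_KERNEL_TOKEN = object()  # private capability held only by kernel rules

class Theorem:
    def __init__(self, conclusion: str, _token=None):
        if _token is not _KERNEL_TOKEN:
            raise PermissionError("Theorems may only be built by kernel rules")
        self.conclusion = conclusion

def reflexivity(term: str) -> Theorem:
    """Kernel rule: for any term t, conclude t = t."""
    return Theorem(f"{term} = {term}", _token=_KERNEL_TOKEN)

def modus_ponens(implication: Theorem, antecedent: Theorem) -> Theorem:
    """Kernel rule: from 'A -> B' and 'A', conclude 'B'."""
    a, arrow, b = implication.conclusion.partition(" -> ")
    if not arrow or a != antecedent.conclusion:
        raise ValueError("modus ponens does not apply")
    return Theorem(b, _token=_KERNEL_TOKEN)
```

Only the kernel must be trusted: a bug anywhere else can make the prover slow or unhelpful, but it cannot mint a theorem the kernel did not sanction, which is exactly the safety net Platzer describes.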

For the next stage of his research, Platzer is going to begin integrating multiple mathematical models that could potentially describe reality into a CPS. To explain these next steps, Platzer returns once more to self-driving cars: “If you’re following another driver, you can’t know if the driver is currently looking for a parking spot, trying to get somewhere quickly, or about to change lanes. So, in principle, under those circumstances, it’s a good idea to have multiple possible models and comply with the ones that may be the best possible explanation of reality.”

Ultimately, the goal is to allow the CPS to increase their flexibility and complexity by switching between these multiple models as they become more or less likely explanations of reality. “The world is a complicated place,” Platzer explains, “so the safety analysis of the world will also have to be a complicated one.”

]]>
Cognitive Biases and AI Value Alignment: An Interview with Owain Evans
https://futureoflife.org/recent-news/cognitive-biases-ai-value-alignment-owain-evans/ (8 October 2018)

At the core of AI safety lies the value alignment problem: how can we teach artificial intelligence systems to act in accordance with human goals and values?

Many researchers interact with AI systems to teach them human values, using techniques like inverse reinforcement learning (IRL). In theory, with IRL, an AI system can learn what humans value and how to best assist them by observing human behavior and receiving human feedback.
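In its simplest form, the idea can be sketched as follows (a toy version with one-shot choices and a linear reward model; illustrative only, not the specific algorithms used in the research discussed below):

```python
import numpy as np

# Toy inverse reinforcement learning: observe which option a human picks from
# each menu, then fit reward weights that make those picks likely under a
# softmax choice model. The menus, features, and data below are made up.

def infer_reward_weights(menus, choices, lr=0.1, steps=2000):
    """menus: list of (n_options, n_features) arrays; choices: picked indices."""
    w = np.zeros(menus[0].shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for feats, pick in zip(menus, choices):
            p = np.exp(feats @ w)
            p /= p.sum()
            grad += feats[pick] - p @ feats   # gradient of the log-likelihood
        w += lr * grad / len(menus)
    return w

# Options described by (tasty, healthy) features; the human picks healthy twice.
menus = [np.array([[1.0, 0.0], [0.0, 1.0]]),
         np.array([[1.0, 0.2], [0.3, 1.0]])]
print(infer_reward_weights(menus, choices=[1, 1]))  # 'healthy' weight dominates
```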

But human behavior doesn’t always reflect human values, and human feedback is often biased. We say we want healthy food when we’re relaxed, but then we demand greasy food when we’re stressed. Not only do we often fail to live according to our values, but many of our values contradict each other. We value getting eight hours of sleep, for example, but we regularly sleep less because we also value working hard, caring for our children, and maintaining healthy relationships.

AI systems may be able to learn a lot by observing humans, but because of our inconsistencies, some researchers worry that systems trained with IRL will be fundamentally unable to distinguish between value-aligned and misaligned behavior. This could become especially dangerous as AI systems become more powerful: inferring the wrong values or goals from observing humans could lead these systems to adopt harmful behavior.

Distinguishing Biases and Values

Owain Evans, a researcher at the Future of Humanity Institute, and Andreas Stuhlmüller, president of the research non-profit Ought, have explored the limitations of IRL in teaching human values to AI systems. In particular, their research exposes how cognitive biases make it difficult for AIs to learn human preferences through interactive learning.

Evans elaborates: “We want an agent to pursue some set of goals, and we want that set of goals to coincide with human goals. The question then is, if the agent just gets to watch humans and try to work out their goals from their behavior, how much are biases a problem there?”

In some cases, AIs will be able to understand patterns of common biases. Evans and Stuhlmüller discuss the psychological literature on biases in their paper, Learning the Preferences of Ignorant, Inconsistent Agents, and in their online book, agentmodels.org. An example of a common pattern discussed in agentmodels.org is “time inconsistency.” Time inconsistency is the idea that people’s values and goals change depending on when you ask them. In other words, “there is an inconsistency between what you prefer your future self to do and what your future self prefers to do.”

Examples of time inconsistency are everywhere. For one, most people value waking up early and exercising if you ask them before bed. But come morning, when it’s cold and dark out and they didn’t get those eight hours of sleep, they often value the comfort of their sheets and the virtues of relaxation. From waking up early to avoiding alcohol, eating healthy, and saving money, humans tend to expect more from their future selves than their future selves are willing to do.
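This reversal falls straight out of the hyperbolic discounting models used in this literature; here is a toy version with made-up numbers:

```python
# Hyperbolic discounting: a reward's felt value shrinks with delay as
# 1/(1 + k*t), so a smaller-sooner reward can overtake a larger-later one
# as it draws near -- the bedtime-versus-morning reversal described above.

def discounted_value(reward: float, delay_hours: float, k: float = 1.0) -> float:
    return reward / (1.0 + k * delay_hours)

comfort, workout = 5.0, 8.0   # made-up utilities for sleeping in vs. exercising
# Judged at bedtime, eight hours before the alarm (workout one hour after it):
print(discounted_value(comfort, 8.0) < discounted_value(workout, 9.0))  # True
# Judged at the alarm itself:
print(discounted_value(comfort, 0.0) < discounted_value(workout, 1.0))  # False
```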

With systematic, predictable patterns like time inconsistency, IRL could make progress with AI systems. But often our biases aren’t so clear. According to Evans, deciphering which actions coincide with someone’s values and which actions spring from biases is difficult or even impossible in general.

“Suppose you promised to clean the house but you get a last minute offer to party with a friend and you can’t resist,” he suggests. “Is this a bias, or your value of living for the moment? This is a problem for using only inverse reinforcement learning to train an AI — how would it decide what are biases and values?”

Learning the “Correct” Values

Despite this conundrum, understanding human values and preferences is essential for AI systems, and developers have a very practical interest in training their machines to learn these preferences.

Already today, popular websites use AI to learn human preferences. With YouTube and Amazon, for instance, machine-learning algorithms observe your behavior and predict what you will want next. But while these recommendations are often useful, they have unintended consequences.

Consider the case of Zeynep Tufekci, an associate professor at the School of Information and Library Science at the University of North Carolina. After watching videos of Trump rallies to learn more about his voter appeal, Tufekci began seeing white nationalist propaganda and Holocaust denial videos on her “autoplay” queue. She soon realized that YouTube’s algorithm, optimized to keep users engaged, predictably suggests more extreme content as users watch more videos. This led her to call the website “The Great Radicalizer.”

This value misalignment in YouTube algorithms foreshadows the dangers of interactive learning with more advanced AI systems. Instead of optimizing advanced AI systems to appeal to our short-term desires and our attraction to extremes, designers must be able to optimize them to understand our deeper values and enhance our lives.

Evans suggests that we will want AI systems that can reason through our decisions better than humans can, understand when we are making biased decisions, and “help us better pursue our long-term preferences.” However, this will entail that AIs suggest things that seem bad to humans at first blush.

One can imagine an AI system suggesting a brilliant, counterintuitive modification to a business plan, and the human just finds it ridiculous. Or maybe an AI recommends a slightly longer, stress-free driving route to a first date, but the anxious driver takes the faster route anyway, unconvinced.

To help humans understand AIs in these scenarios, Evans and Stuhlmüller have researched how AI systems could reason in ways that are comprehensible to humans and can ultimately improve upon human reasoning.

One method (invented by Paul Christiano) is called “amplification,” where humans use AIs to help them think more deeply about decisions. Evans explains: “You want a system that does exactly the same kind of thinking that we would, but it’s able to do it faster, more efficiently, maybe more reliably. But it should be a kind of thinking that if you broke it down into small steps, humans could understand and follow.”

A second concept is called “factored cognition” – the idea of breaking sophisticated tasks into small, understandable steps. According to Evans, it’s not clear how generally factored cognition can succeed. Sometimes humans can break down their reasoning into small steps, but often we rely on intuition, which is much more difficult to break down.
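Schematically, factored cognition looks like a recursive decomposition (a sketch under our own simplifying assumptions; `decompose` and `solve` are hypothetical stand-ins for whatever humans or models perform each small step):

```python
# Factored cognition, schematically: answer a hard question by splitting it
# into sub-questions, answering those recursively, and combining the results.

def factored_answer(question, decompose, solve, depth=0, max_depth=3):
    subquestions = decompose(question)
    if not subquestions or depth >= max_depth:
        return solve(question)   # small enough to answer in one step
    answers = [factored_answer(q, decompose, solve, depth + 1, max_depth)
               for q in subquestions]
    # Combine the sub-answers into an answer to the original question.
    return solve(f"{question} [given: {'; '.join(answers)}]")
```

The open question Evans raises is exactly whether a workable `decompose` step exists for the kinds of reasoning humans do intuitively.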

Specifying the Problem

Evans and Stuhlmüller have started a research project on amplification and factored cognition, but they haven’t solved the problem of human biases in interactive learning – rather, they’ve set out to precisely lay out these complex issues for other researchers.

“It’s more about showing this problem in a more precise way than people had done previously,” says Evans. “We ended up getting interesting results, but one of our results in a sense is realizing that this is very difficult, and understanding why it’s difficult.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich
https://futureoflife.org/recent-news/making-ai-safe-in-an-unpredictable-world-an-interview-with-thomas-g-dietterich/ (17 September 2018)

Our AI systems work remarkably well in closed worlds. That’s because these environments contain a set number of variables, making the worlds perfectly known and perfectly predictable. In these micro environments, machines only encounter objects that are familiar to them. As a result, they always know how they should act and respond. Unfortunately, these same systems quickly become confused when they are deployed in the real world, as many objects aren’t familiar to them. This is a bit of a problem because, when an AI system becomes confused, the results can be deadly.

Consider, for example, a self-driving car that encounters a novel object. Should it speed up, or should it slow down? Or consider an autonomous weapon system that sees an anomaly. Should it attack, or should it power down? Each of these examples involves life-and-death decisions, and they reveal why, if we are to deploy advanced AI systems in real-world environments, we must be confident that they will behave correctly when they encounter unfamiliar objects.

Thomas G. Dietterich, Emeritus Professor of Computer Science at Oregon State University, explains that solving this identification problem begins with ensuring that our AI systems aren’t too confident — that they recognize when they encounter a foreign object and don’t misidentify it as something that they are acquainted with. To achieve this, Dietterich asserts that we must move away from (or, at least, greatly modify) the discriminative training methods that currently dominate AI research.

However, to do that, we must first address the “open category problem.”

Understanding the Open Category Problem

When driving down the road, we can encounter a near-infinite number of anomalies. Perhaps a violent storm will arise, and hail will start to fall. Perhaps our vision will become impeded by smoke or excessive fog. Although these encounters may be unexpected, the human brain is able to easily analyze new information and decide on the appropriate course of action — we will recognize a newspaper drifting across the road and, instead of abruptly slamming on the brakes, continue on our way.

Because of the way that they are programmed, our computer systems aren’t able to do the same.

“The way we use machine learning to create AI systems and software these days generally uses something called ‘discriminative training,’” Dietterich explains, “which implicitly assumes that the world consists of only, say, a thousand different kinds of objects.” This means that, if a machine encounters a novel object, it will assume that it must be one of the thousand things that it was trained on. As a result, such systems misclassify all foreign objects.

This is the “open category problem” that Dietterich and his team are attempting to solve. Specifically, they are trying to ensure that our machines don’t assume that they have encountered every possible object, but are, instead, able to reliably detect — and ultimately respond to — new categories of alien objects.

Dietterich notes that, from a practical standpoint, this means creating an anomaly detection algorithm that assigns an anomaly score to each object detected by the AI system. That score must be compared against a set threshold and, if the anomaly score exceeds the threshold, the system will need to raise an alarm. Dietterich states that, in response to this alarm, the AI system should take a pre-determined safety action. For example, a self-driving car that detects an anomaly might slow down and pull off to the side of the road.
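In code, the mechanism Dietterich describes is little more than a guarded dispatch (a schematic sketch; `anomaly_score`, `classify`, and `safety_action` are stand-ins for the trained detector, the discriminative classifier, and the pre-determined fallback):

```python
# Schematic of the anomaly-alarm mechanism: score each detected object,
# raise an alarm above the threshold, and fall back to a safe behavior.

def perceive(obj, anomaly_score, threshold, classify, safety_action):
    if anomaly_score(obj) > threshold:
        return safety_action(obj)   # alarm raised: e.g., slow down, pull over
    return classify(obj)            # object appears to be a known category
```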

Creating a Theoretical Guarantee of Safety

There are two challenges to making this method work. First, Dietterich asserts that we need good anomaly detection algorithms. Previously, in order to determine what algorithms work well, the team compared the performance of eight state-of-the-art anomaly detection algorithms on a large collection of benchmark problems.

The second challenge is to set the alarm threshold so that the AI system is guaranteed to detect a desired fraction of the alien objects, such as 99%. Dietterich says that formulating a reliable setting for this threshold is one of the most challenging research problems because there are, potentially, infinite kinds of alien objects. “The problem is that we can’t have labeled training data for all of the aliens. If we had such data, we would simply train the discriminative classifier on that labeled data,” Dietterich says.

To circumvent this labeling issue, the team assumes that the discriminative classifier has access to a representative sample of “query objects” that reflect the larger statistical population. Such a sample could, for example, be obtained by collecting data from cars driving on highways around the world. This sample will include some fraction of unknown objects, and the remaining objects belong to known object categories.

Notably, the data in the sample is not labeled. Instead, the AI system is given an estimate of the fraction of aliens in the sample. And by combining the information in the sample with the labeled training data that was employed to train the discriminative classifier, the team’s new algorithm can choose a good alarm threshold. If the estimated fraction of aliens is known to be an over-estimate of the true fraction, then the chosen threshold is guaranteed to detect the target percentage of aliens (e.g., 99%).
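A rough sketch of the idea (our reconstruction, using plain empirical distribution functions and assuming higher scores mean more anomalous; the published algorithm adds finite-sample corrections to turn this into the stated guarantee):

```python
import numpy as np

def choose_alarm_threshold(nominal_scores, mixture_scores, alien_frac,
                           target_detection=0.99):
    """Recover the alien score distribution from an unlabeled mixture with a
    known (over-)estimated alien fraction, then pick the largest threshold
    that still flags `target_detection` of the aliens."""
    nominal = np.sort(np.asarray(nominal_scores, dtype=float))
    mixture = np.sort(np.asarray(mixture_scores, dtype=float))
    grid = np.concatenate([nominal, mixture])
    grid.sort()
    f_nom = np.searchsorted(nominal, grid, side="right") / nominal.size
    f_mix = np.searchsorted(mixture, grid, side="right") / mixture.size
    # The mixture CDF decomposes as F_mix = (1 - a) * F_nom + a * F_alien,
    # so the alien CDF can be estimated by inverting that identity.
    f_alien = np.clip((f_mix - (1 - alien_frac) * f_nom) / alien_frac, 0.0, 1.0)
    # Scores above the threshold raise the alarm; keep thresholds that let
    # at most (1 - target_detection) of the aliens slip underneath.
    candidates = grid[f_alien <= 1.0 - target_detection]
    return candidates.max() if candidates.size else grid.min() - 1.0
```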

Ultimately, the above is the first method that can give a theoretical guarantee of safety for detecting alien objects, and a paper reporting the results was presented at ICML 2018. “We are able to guarantee, with high probability, that we can find 99% of all of these new objects,” Dietterich says.

In the next stage of their research, Dietterich and his team plan to begin testing their algorithm in a more complex setting. Thus far, they’ve been looking primarily at classification, where the system looks at an image and classifies it. Next, they plan to move to controlling an agent, like a robot or a self-driving car. “At each point in time, in order to decide what action to choose, our system will do a ‘look ahead search’ based on a learned model of the behavior of the agent and its environment. If the look ahead arrives at a state that is rated as ‘alien’ by our method, then this indicates that the agent is about to enter a part of the state space where it is not competent to choose correct actions,” Dietterich says. In response, as previously mentioned, the agent should execute a series of safety actions and request human assistance.

But what does this safety action actually consist of?

Responding to Aliens

Dietterich notes that, once something is identified as an anomaly and the alarm is sounded, the nature of this fall back system will depend on the machine in question, like whether the AI system is in a self-driving car or autonomous weapon.

To explain how these secondary systems operate, Dietterich turns to self-driving cars. “In the Google car, if the computers lose power, then there’s a backup system that automatically slows the car down and pulls it over to the side of the road.” However, Dietterich clarifies that stopping isn’t always the best course of action. One may assume that a car should come to a halt if an unidentified object crosses its path; however, if the unidentified object happens to be a blanket of snow on a particularly icy day, hitting the brakes gets more complicated. The system would need to factor in the icy roads, any cars that may be driving behind, and whether these cars can brake in time to avoid a rear-end collision.

But if we can’t predict every eventuality, how can we expect to program an AI system so that it behaves correctly and in a way that is safe?

Unfortunately, there’s no easy answer; however, Dietterich clarifies that there are some general best practices: “There’s no universal solution to the safety problem, but obviously there are some actions that are safer than others. Generally speaking, removing energy from the system is a good idea,” he says. Ultimately, Dietterich asserts that all the work related to programming safe AI really boils down to determining how we want our machines to behave under specific scenarios, and he argues that we need to rearticulate how we characterize this problem, and focus on accounting for all the factors, if we are to develop a sound approach.

Dietterich notes that “when we look at these problems, they tend to get lumped under a classification of ‘ethical decision making,’ but what they really are is problems that are incredibly complex. They depend tremendously on the context in which they are operating, the human beings, the other innovations, the other automated systems, and so on. The challenge is correctly describing how we want the system to behave and then ensuring that our implementations actually comply with those requirements.” And he concludes, “the big risk in the future of AI is the same as the big risk in any software system, which is that we build the wrong system, and so it does the wrong thing. Arthur C. Clarke in 2001: A Space Odyssey had it exactly right. The HAL 9000 didn’t ‘go rogue;’ it was just doing what it had been programmed to do.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity
https://futureoflife.org/recent-news/governing-ai-an-inside-look-at-the-quest-to-regulate-artificial-intelligence/ (30 August 2018)

Finance, education, medicine, programming, the arts — artificial intelligence is set to disrupt nearly every sector of our society. Governments and policy experts have started to realize that, in order to prepare for this future, in order to minimize the risks and ensure that AI benefits humanity, we need to start planning for the arrival of advanced AI systems today.

Although we are still in the early moments of this movement, the landscape looks promising. Several nations and independent firms have already started to strategize and develop policies for the governance of AI. Last year, the UAE appointed the world’s first Minister of Artificial Intelligence, and Germany took smaller, but similar, steps in 2017, when the Ethics Commission at the German Ministry of Transport and Digital Infrastructure developed the world’s first set of regulatory guidelines for automated and connected driving.

This work is notable; however, these efforts have yet to coalesce into a larger governance framework that extends beyond national boundaries. Nick Bostrom’s Strategic Artificial Intelligence Research Center seeks to assist in resolving this issue by understanding, and ultimately shaping, the strategic landscape of long-term AI development on a global scale.

Developing a Global Strategy: Where We Are Today

The Strategic Artificial Intelligence Research Center was founded in 2015 with the knowledge that, to truly circumvent the threats posed by AI, the world needs a concerted effort focused on tackling unsolved problems related to AI policy and development. The Governance of AI Program (GovAI), co-directed by Bostrom and Allan Dafoe, is the primary research program that has evolved from this center. Its central mission, as articulated by the directors, is to “examine the political, economic, military, governance, and ethical dimensions of how humanity can best navigate the transition to such advanced AI systems.” In this respect, the program is focused on strategy — on shaping the social, political, and governmental systems that influence AI research and development — as opposed to focusing on the technical hurdles that must be overcome in order to create and program safe AI.

To develop a sound AI strategy, the program works with social scientists, politicians, corporate leaders, and artificial intelligence/machine learning engineers to address questions of how we should approach the challenge of governing artificial intelligence. In a recent 80,000 Hours podcast with Rob Wiblin, Dafoe outlined how the team’s research shapes up from a practical standpoint, asserting that the work focuses on answering questions that fall under three primary categories:

  • The Technical Landscape: This category seeks to answer all the questions that are related to research trends in the field of AI with the aim of understanding what future technological trajectories are plausible and how these trajectories affect the challenges of governing advanced AI systems.
  • AI Politics: This category focuses on questions that are related to the dynamics of different groups, corporations, and governments pursuing their own interests in relation to AI, and it seeks to understand what risks might arise as a result and how we may be able to mitigate these risks.
  • AI Governance: This category examines positive visions of a future in which humanity coordinates to govern advanced AI in a safe and robust manner. This raises questions such as how this framework should operate and what values we would want to encode in a governance regime.

The above categories provide a clearer way of understanding the various objectives of those invested in researching AI governance and strategy; however, these categories are fairly large in scope. To help elucidate the work they are performing, Jade Leung, a researcher with GovAI and a DPhil candidate in International Relations at the University of Oxford, outlined some of the specific workstreams that the team is currently pursuing.

One of the most intriguing areas of research is the Chinese AI Strategy workstream. This line of research examines things like China’s AI capabilities vis-à-vis other countries, official documentation regarding China’s AI policy, and the various power dynamics at play in the nation with an aim of understanding, as Leung summarizes, “China’s ambition to become an AI superpower and the state of Chinese thinking on safety, cooperation, and AGI.” Ultimately, GovAI seeks to outline the key features of China’s AI strategy in order to understand one of the most important actors in AI governance. The program published Deciphering China’s AI Dream in March of 2018, a report that analyzes new features of China’s national AI strategy, and plans to build upon this research in the near future.

Another workstream is Firm-Government Cooperation, which examines the role that private firms play in relation to the development of advanced AI and how these players are likely to interact with national governments. In a recent talk at EA Global San Francisco, Leung focused on how private industry is already playing a significant role in AI development and why, when considering how to govern AI, private players must be included in strategy considerations as a vital part of the equation. The description of the talk succinctly summarizes the key focal areas, noting that “private firms are the only prominent actors that have expressed ambitions to develop AGI, and lead at the cutting edge of advanced AI research. It is therefore critical to consider how these private firms should be involved in the future of AI governance.”

Other work that Leung highlighted includes modeling technology race dynamics and analyzing the distribution of AI talent and hardware globally.

The Road Ahead

When asked how much confidence she has that AI researchers will ultimately coalesce and be successful in their attempts to shape the landscape of long-term AI development internationally, Leung was cautious with her response, noting that far more hands are needed. “There is certainly a greater need for more researchers to be tackling these questions. As a research area as well as an area of policy action, long-term safe and robust AI governance remains a neglected mission,” she said.

Additionally, Leung noted that, at this juncture, although some concrete research is already underway, a lot of the work is focused on framing issues related to AI governance and, in so doing, revealing the various avenues in need of research. As a result, the team doesn’t yet have concrete recommendations for specific actions governing bodies should commit to, as further foundational analysis is needed. “We don’t have sufficiently robust and concrete policy recommendations for the near term as it stands, given the degrees of uncertainty around this problem,” she said.

However, both Leung and Dafoe are optimistic and assert that this information gap will likely change — and rapidly. Researchers across disciplines are increasingly becoming aware of the significance of this topic, and as more individuals begin researching and participating in this community, the various avenues of research will become more focused. “In two years, we’ll probably have a much more substantial research community. But today, we’re just figuring out what are the most important and tractable problems and how we can best recruit to work on those problems,” Dafoe told Wiblin.

The assurances that a more robust community will likely form soon are encouraging; however, questions remain regarding whether this community will come together with enough time to develop a solid governance framework. As Dafoe notes, we have never witnessed an intelligence explosion before, so we have no examples to look to for guidance when attempting to develop projections and timelines regarding when we will have advanced AI systems.

Ultimately, the lack of projections is precisely why we must significantly invest in AI strategy research in the immediate future. As Bostrom notes in Superintelligence: Paths, Dangers, Strategies, AI is not simply a disruptive technology; it is likely the most disruptive technology humanity will ever encounter: “[Superintelligence] is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.”

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

Edit: The title of the article has been changed to reflect the fact that this is not about regulating AI.
