{"id":39362,"date":"2018-12-13T00:00:00","date_gmt":"2018-12-13T00:00:00","guid":{"rendered":"https:\/\/futureoflife.org\/uncategorized\/how-to-create-ai-that-can-safely-navigate-our-world-andre-platzer\/"},"modified":"2022-06-22T08:41:13","modified_gmt":"2022-06-22T08:41:13","slug":"how-to-create-ai-that-can-safely-navigate-our-world-andre-platzer","status":"publish","type":"post","link":"https:\/\/futureoflife.org\/recent-news\/how-to-create-ai-that-can-safely-navigate-our-world-andre-platzer\/","title":{"rendered":"How to Create AI That Can Safely Navigate Our World — An Interview With Andre Platzer"},"content":{"rendered":"
Over the last few decades, the unprecedented pace of technological progress has allowed us to upgrade and modernize much of our infrastructure and solve many long-standing logistical problems. For example, Babylon Health’s AI-driven smartphone app is helping assess and prioritize 1.2 million patients in North London, electronic transfers allow us to instantly send money nearly anywhere in the world, and, over the last 20 years, GPS has revolutionized how we navigate, how we track and ship goods, and how we regulate traffic.

However, exponential growth comes with its own set of hurdles that must be navigated. The foremost issue is that it’s exceedingly difficult to predict how various technologies will evolve. As a result, it becomes challenging to plan for the future and ensure that the necessary safety features are in place.

This uncertainty is particularly worrisome when it comes to technologies that could pose existential challenges — artificial intelligence, for example.

Yet, despite the unpredictable nature of tomorrow’s AI, certain challenges are foreseeable. Case in point: regardless of the developmental path that AI agents ultimately take, these systems will need to be capable of making intelligent decisions that allow them to move seamlessly and safely through our physical world. Indeed, some of the most impactful applications of artificial intelligence involve technologies like autonomous vehicles, robotic surgeons, user-aware smart grids, and aircraft control systems — all of which combine advanced decision-making processes with the physics of motion.

Such systems are known as cyber-physical systems (CPS). The next generation of advanced CPS could lead us into a new era of safety, reducing crashes by 90% and saving the world’s nations hundreds of billions of dollars a year — but only if such systems are themselves implemented correctly.

This is where Andre Platzer, Associate Professor of Computer Science at Carnegie Mellon University, comes in. Platzer’s research is dedicated to ensuring that CPS benefit humanity and don’t cause harm. Practically speaking, this means ensuring that the systems are flexible, reliable, and predictable.

What Does it Mean to Have a Safe System?

Cyber-physical systems have been around, in one form or another, for quite some time. Air traffic control systems, for example, have long relied on CPS-type technology for collision avoidance, traffic management, and a host of other decision-making tasks. However, Platzer notes that as CPS continue to advance, and as they are increasingly required to integrate more complicated automation and learning technologies, it becomes far more difficult to ensure that they are making reliable and safe decisions.

To better clarify the nature of the problem, Platzer turns to self-driving vehicles. In advanced systems like these, he notes, we need to ensure that the technology is sophisticated enough to be flexible, as it has to be able to safely respond to any scenario it confronts. In this sense, “CPS are at their best if they’re not just running very simple [controllers], but if they’re running much more sophisticated and advanced systems,” Platzer notes.
However, when CPS utilize advanced autonomy, their very complexity makes it far more difficult to prove that they are making systematically sound choices.

In this respect, the more sophisticated the system becomes, the more we are forced to sacrifice some of the predictability and, consequently, the safety of the system. As Platzer articulates, “the simplicity that gives you predictability on the safety side is somewhat at odds with the flexibility that you need to have on the artificial intelligence side.”

The ultimate goal, then, is to find an equilibrium between flexibility and predictability — between the advanced learning technology and the proof of safety — to ensure that CPS can execute their tasks both safely and effectively. Platzer describes this overall objective as a kind of balancing act, noting that, “with cyber-physical systems, in order to make that sophistication feasible and scalable, it’s also important to keep the system as simple as possible.”

How to Make a System Safe

The first step in navigating this issue is determining how researchers can verify that a CPS is truly safe. In this respect, Platzer notes that his research is driven by one central question: if scientists have a mathematical model for the behavior of something like a self-driving car or an aircraft, and if they are convinced that all the behaviors of the controller are safe, how do they go about proving that this is actually the case?

The answer is an automated theorem prover: a computer program that assists with the development of rigorous mathematical correctness proofs.

When it comes to CPS, the highest safety standard is such a mathematical correctness proof, which shows that the system always produces the correct output for any given input. It does this by using the formal methods of mathematics to prove or disprove the correctness of the control algorithms underlying a system.
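To make the shape of such a proof obligation concrete: Platzer’s work is built on differential dynamic logic (dL), in which a safety claim reads “if the precondition holds, then the safety condition holds after every run of the hybrid system.” Below is a minimal, illustrative braking-car specification in dL notation, rendered in LaTeX; the model and the variable names x, v, a, b, m are expository choices of ours, not taken from the article.

```latex
% Toy dL safety specification (illustrative, not from the article):
% a car at position x with speed v must never pass position m, given
% braking power b. The loop (ctrl; plant)* repeats control and physics.
\[
  \underbrace{v^2 \le 2b\,(m - x) \;\wedge\; b > 0}_{\text{precondition}}
  \;\rightarrow\;
  \big[\,\big(a := -b;\ \{x' = v,\ v' = a \ \&\ v \ge 0\}\big)^{*}\,\big]
  \;\underbrace{x \le m}_{\text{safety condition}}
\]
```

The box modality [α*]φ asserts that φ survives every possible run of the repeated control-then-physics loop, which is exactly the “correct output for any given input” guarantee described above.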
After this proof technology has been identified and created, Platzer asserts that the next step is to use it to augment the capabilities of artificially intelligent learning agents — increasing their complexity while simultaneously verifying their safety.

Eventually, Platzer hopes that this will culminate in technology that allows a CPS to recover from situations where its expectations didn’t turn out to be an accurate model of reality. For example, if a self-driving car assumes another car is speeding up when it is actually slowing down, it needs to be able to quickly correct this error and switch to the correct mathematical model of reality.
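A common way to realize this kind of recovery is runtime model validation, in the spirit of Mitsch and Platzer’s ModelPlex: monitor whether each observed transition is consistent with the model that was proved safe, and fall back to a simple verified controller the moment it is not. The sketch below is a generic, hypothetical illustration of that pattern (all names and constants are invented for exposition), not ModelPlex itself.

```python
# Hypothetical sketch of a model-validation monitor with a safe fallback.
# The verified model assumes the lead car's acceleration stays within
# [A_MIN, A_MAX]; if an observed transition violates that assumption,
# control switches from the learned policy to a verified braking action.

from dataclasses import dataclass

DT = 0.1                   # control cycle in seconds (assumed)
A_MIN, A_MAX = -6.0, 3.0   # accel bounds the safety proof assumed (m/s^2)
B_MAX = 6.0                # our car's verified maximum braking (m/s^2)

@dataclass
class State:
    gap: float       # distance to lead car (m)
    v_ego: float     # our speed (m/s)
    v_lead: float    # lead car's speed (m/s)

def transition_fits_model(prev: State, curr: State) -> bool:
    """True if the observed change in the lead car's speed could have
    been produced by some acceleration within the model's bounds."""
    a_observed = (curr.v_lead - prev.v_lead) / DT
    return A_MIN <= a_observed <= A_MAX

def safe_fallback(curr: State) -> float:
    """Verified fallback: brake as hard as possible (never reverse)."""
    return -B_MAX if curr.v_ego > 0 else 0.0

def control(prev: State, curr: State, learned_policy) -> float:
    """Use the flexible learned policy only while reality matches the
    model the safety proof was carried out for."""
    if transition_fits_model(prev, curr):
        return learned_policy(curr)   # flexible, unverified
    return safe_fallback(curr)        # simple, verified

# Example: the lead car decelerates far harder than the model allowed,
# so the monitor overrides the (overly optimistic) learned controller.
prev = State(gap=30.0, v_ego=20.0, v_lead=20.0)
curr = State(gap=29.5, v_ego=20.0, v_lead=18.0)  # -20 m/s^2: off-model
print(control(prev, curr, learned_policy=lambda s: 1.0))  # -> -6.0
```

The safety argument attaches only to the monitor and the fallback, which stay simple enough to verify, while the learned policy is free to be as sophisticated as it likes; this is Platzer’s balance of flexibility and predictability in miniature.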
The more complex such seamless transitions are, the more difficult they are to implement. But they are the ultimate amalgamation of safety and flexibility or, in other words, the ultimate combination of AI and safety proof technology.

Creating the Tech of Tomorrow

To date, one of the biggest developments to come from Platzer’s research is the KeYmaera X prover, which Platzer characterizes as a “gigantic, quantum leap in terms of the reliability of our safety technology, passing far beyond in rigor what anyone else is doing for the analysis of cyber-physical systems.”

The KeYmaera X prover, created by Platzer and his team, is a tool that allows users to reliably construct mathematical correctness proofs for CPS through an easy-to-use interface.
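To show what such a correctness proof boils down to, here is the central step for the hypothetical braking-car specification sketched earlier: the kind of differential-invariant argument a prover like KeYmaera X mechanizes, hand-worked here for illustration.

```latex
% The loop invariant J is preserved by the braking dynamics
% x' = v, v' = -b (with v >= 0), and it implies the safety condition.
\begin{align*}
  J &\;\equiv\; v^2 \le 2b\,(m - x) \\
  \frac{d}{dt}\big(v^2 - 2b\,(m - x)\big)
    &\;=\; 2v\,v' + 2b\,x' \;=\; 2v(-b) + 2b\,v \;=\; 0 ,
\end{align*}
so $J$ stays constant along the flow, and together with $v \ge 0$ it
yields $m - x \ge v^2 / (2b) \ge 0$, i.e.\ the safety condition $x \le m$.
```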