Autonomous Weapons Archives - Future of Life Institute https://futureoflife.org/category/aws/

Future of Life Institute Statement on the Pope’s G7 AI Speech https://futureoflife.org/aws/future-of-life-institute-statement-on-the-popes-g7-ai-speech/ Tue, 18 Jun 2024 11:42:03 +0000

CAMBRIDGE, MA – Future of Life Institute (FLI) President and Co-Founder Max Tegmark today released the following statement after the Pope gave a speech at the G7 in Italy, raising the alarm about the risks of out-of-control AI development.

“The Future of Life Institute strongly supports the Pope’s call at the G7 for urgent political action to ensure artificial intelligence acts in service of humanity. This includes banning lethal autonomous weapons and ensuring that future AI systems stay under human control. I urge the leaders of the G7 nations to set an example for the rest of the world, enacting standards that keep future powerful AI systems safe, ethical, reliable, and beneficial.”

An introduction to the issue of Lethal Autonomous Weapons https://futureoflife.org/aws/an-introduction-to-the-issue-of-lethal-autonomous-weapons/ Tue, 30 Nov 2021 00:00:00 +0000

In the last few years, there has been a new development in the field of weapons technology.

Some of the most advanced national military programs are beginning to implement artificial intelligence (AI) into their weapons, essentially making them ‘smart’. This means these weapons will soon be making critical decisions by themselves – perhaps even deciding who lives and who dies.

If you’re safe at home, far from the front lines, you may think this does not concern you – but it should.

What are lethal autonomous weapons?

Slaughterbots, also called “lethal autonomous weapons systems” or “killer robots”, are weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without human intervention.

Whereas in the case of unmanned military drones the decision to take life is made remotely by a human operator, in the case of lethal autonomous weapons the decision is made by algorithms alone.

Slaughterbots are pre-programmed to kill a specific “target profile.” The weapon is then deployed into an environment where its AI searches for that “target profile” using sensor data, such as facial recognition.

When the weapon encounters someone the algorithm perceives to match its target profile, it fires and kills.

What’s the problem?

Weapons that use algorithms to kill, rather than human judgement, are immoral and pose a grave threat to national and global security.

  1. Immoral: Algorithms are incapable of comprehending the value of human life, and so should never be empowered to decide who lives and who dies. Indeed, the United Nations Secretary General António Guterres agrees that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
  2. Threat to Security: Algorithmic decision-making allows weapons to follow the trajectory of software: faster, cheaper, and at greater scale. This will be highly destabilising on both national and international levels because it introduces the threats of proliferation, rapid escalation, unpredictability, and even the potential for weapons of mass destruction.

How soon will they be developed?

Terms like “slaughterbots” and “killer robots” remind people of science fiction movies like The Terminator, which features a self-aware, human-like robot assassin. This fuels the assumption that lethal autonomous weapons belong to the far future.

But that is incorrect.

In reality, weapons which can autonomously select, target, and kill humans are already here.

A 2021 report by the U.N. Panel of Experts on Libya documented the use of a lethal autonomous weapon system hunting down retreating forces. Since then, there have been numerous reports of swarms and other autonomous weapons systems being used on battlefields around the world.

The accelerating rate of these use cases is a clear warning that the time to act is quickly running out.

  • March 2021 – First documented use of a lethal autonomous weapon
  • June 2021 – First documented use of a drone swarm in combat
10 Reasons Why Autonomous Weapons Must be Stopped https://futureoflife.org/aws/10-reasons-why-autonomous-weapons-must-be-stopped/ Sat, 27 Nov 2021 00:00:00 +0000

Lethal autonomous weapons pose a number of severe risks. These risks significantly outweigh any benefits they may provide, even for the world’s most advanced military programs.

In fact, these weapons have been referred to as “the third revolution in warfare” because of their huge potential for negative impact on our society.

So, what are the main risks posed by the development of this new type of weapon?

Safety risks

1 – Unpredictability

Lethal autonomous weapons are dangerously unpredictable in their behaviour. Complex interactions between machine learning-based algorithms and a dynamic operational context make it extremely difficult to predict the behaviour of these weapons in real-world settings. Moreover, the weapons systems are unpredictable by design; they’re programmed to behave unpredictably in order to remain one step ahead of enemy systems.

2 – Escalation

Given the speed and scale at which they are capable of operating, autonomous weapons systems introduce the risk of accidental and rapid conflict escalation. Recent research by RAND found that “the speed of autonomous systems did lead to inadvertent escalation in the wargame” and concluded that “widespread AI and autonomous systems could lead to inadvertent escalation and crisis instability.” The United Nations Institute for Disarmament Research (UNIDIR) has concurred with RAND’s conclusion. Even the United States’ quasi-governmental National Security Commission on AI (NSCAI) acknowledged that “unintended escalations may occur for numerous reasons, including when systems fail to perform as intended, because of challenging and untested complexities of interaction between AI-enabled and autonomous systems on the battlefield, and, more generally, as a result of machines or humans misperceiving signals or actions.” The NSCAI went on to state that “AI-enabled systems will likely increase the pace and automation of warfare across the board, reducing the time and space available for de-escalatory measures.”

3 – Proliferation

Slaughterbots do not require costly or hard-to-obtain raw materials, making them extremely cheap to mass-produce. They’re also safe to transport and hard to detect. Once significant military powers begin manufacturing, these weapons systems are bound to proliferate. They will soon appear on the black market, and then in the hands of terrorists wanting to destabilise nations, dictators oppressing their populace, and/or warlords wishing to perpetrate ethnic cleansing. Indeed, the U.S. National Security Commission on AI has identified reducing the risk of proliferation as a key priority in reducing the strategic risks of AI in the military.

4 – Lowered barriers to conflict

War has traditionally been costly, both in terms of the cost of producing conventional weapons and in terms of costing human lives. Arguably, this has sometimes acted as a disincentive to go to war and, on the flip side, incentivised diplomacy. The rise of cheap, scalable weapons may undermine this norm, thereby lowering the barrier to conflict. The risk of rapid and unintended escalation combined with the proliferation of lethal autonomous weapons would arguably have the same effect.

5 – Mass destruction

Lethal autonomous weapons are extremely scalable. This means that the level of harm you can do using autonomous weapons depends solely on the quantity of Slaughterbots in your arsenal, not on the number of people you have available to operate the weapons. This stands in stark contrast to conventional weapons: a military power cannot do twice as much harm simply by purchasing twice as many guns; it also needs to recruit twice as many soldiers to shoot those guns. A swarm of Slaughterbots, small or large, requires only a single individual to activate it, and then its component Slaughterbots would fire themselves.

The quality of scalability, together with the significant threat of proliferation, gives rise to the threat of mass destruction. The defining characteristic of a weapon of mass destruction is that it can be used by a single person to cause many fatalities directly, and with lethal autonomous weapons a single individual could theoretically activate a swarm of hundreds of Slaughterbots, if not thousands. Proliferation increases the likelihood that large quantities of these weapons will end up in the hands of someone inclined to wreak havoc, and scalability empowers that individual. These considerations have prompted some to classify certain types of autonomous weapons systems, namely Slaughterbots, as weapons of mass destruction.

6 – Selective targeting of groups

Selecting individuals to kill based on sensor data alone, especially through facial recognition or other biometric information, introduces substantial risks of the selective targeting of groups based on perceived age, gender, race, ethnicity or religious dress. Combine that with the risk of proliferation, and autonomous weapons could greatly increase the risk of targeted violence against specific classes of individuals, up to and including ethnic cleansing and genocide. Furthermore, facial recognition software has been shown to amplify bias and increase error rates in the correct identification of individuals from marginalized groups, such as women and people of color. The potential disproportionate effects of lethal autonomous weapons on race and gender are key focus areas of civil society advocacy.

These threats are especially noteworthy given the increased use of facial recognition in policing and in ethnic discrimination, and given that some companies have cited interest in developing lethal systems as a reason not to pledge against the weaponization of facial recognition software.

7 – AI Arms Race

Avoidance of an AI arms race is a foundational guiding principle of ethical artificial intelligence and yet, in the absence of a unified global effort to highlight the risks of lethal autonomous weapons and generate political pressure, an “AI military race has begun.” Arms race dynamics, which favour speed over safety, further compound the inherent risks of unpredictability and escalatory behaviour.

8 – Immoral

Algorithms are incapable of understanding or conceptualizing the value of a human life, and so should never be empowered to decide who lives and who dies. Lethal autonomous weapons represent a violation of that clear moral red line.

9 – Lack of accountability

Delegating the decision to use lethal force to algorithms raises significant questions about who is ultimately responsible and accountable for the use of force by autonomous weapons, particularly given their tendency towards unpredictability. This “accountability gap” is arguably illegal, as “international humanitarian law requires that individuals be held legally responsible for war crimes and grave breaches of the Geneva Conventions. Military commanders or operators could be found guilty if they deployed a fully autonomous weapon with the intent to commit a crime. It would, however, be legally challenging and arguably unfair to hold an operator responsible for the unforeseeable actions of an autonomous robot.”

10 – Violation of international humanitarian law

International humanitarian law (IHL) sets out the principles of distinction and proportionality. The principle of distinction establishes the obligation of parties in armed conflict to distinguish between civilian and military targets, and to direct their operations only against military objectives. The principle of proportionality prohibits attacks in conflict which expose civilian populations to harm that is excessive when compared to the expected military advantage gained.

It has been noted that “fully autonomous weapons would face significant obstacles to complying with the principles of distinction and proportionality.” For example, these systems would lack the human judgment necessary to determine whether expected civilian harm outweighs anticipated military advantage in ever-changing and unforeseen combat situations.

Further, it has been argued that autonomous weapons that target humans would violate the Martens Clause, a provision of IHL that establishes a moral baseline for judging emerging technologies. These systems would violate the dictates of public conscience and “undermine the principles of humanity because they would be unable to apply compassion or human judgment to decisions to use force.”

Real-Life Technologies that Prove Autonomous Weapons are Already Here https://futureoflife.org/aws/real-life-technologies-that-prove-autonomous-weapons-are-already-here/ Mon, 22 Nov 2021 00:00:00 +0000

Lethal autonomous weapons have been in development for years. In fact, we warned the world about this back in 2017.

Unfortunately, Slaughterbots are now here.

In this article, we have collected three examples of real-life autonomous weapons which exist today, and which have either already been deployed in military operations or will be deployed in the near future.

1 – STM Kargu-2

In the spring of 2020, an autonomous drone strike reportedly took place in Libya. As far as we know, this was the first documented case of an autonomous weapon being used in a real military operation.

According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya’s Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”

One of the Kargu-2 drones used in the attack was downed and recovered for inspection, which allowed us to learn the following details about its functionality:

The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it’s a Slaughterbot.

Link: Source

2 – Jaeger-C

In November 2021, a new form of autonomous weapon got its first military contract:

Australian robot vehicle maker GaardTech announced a contract Thursday to supply its Jaeger-C uncrewed combat vehicle to the Australian Army for demonstrations in 2022.

The wheeled Jaeger-C is a small machine with a low profile designed to attack from ambush. In some ways, it might be seen as a mobile robotic mine. This is especially true because the makers note it can be remote-controlled or operate “autonomously with image analysis and trained models linked to robotic actions,” according to a report in Overt Defense.

The weapon has two modes of operation: Chariot mode, for engaging human targets, and Goliath mode, for engaging vehicles. Here is how the two modes work:

In Chariot mode, the robot engages targets with an undisclosed weapon, which is likely to be a 7.62 mm medium machine gun. It might also be something like the 6.5mm sniper rifle in a Special Purpose Unmanned Rifle pod recently seen mounted on a quadruped robot. In Goliath mode, the robot carries out a kamikaze attack on a vehicle. This is named after the Goliath Tracked Mine deployed by German forces in WWII. Known as the ‘beetle tank,’ these little tracked vehicles were less than five feet long but carried a hundred-pound explosive charge and were used against tanks and fortifications.

When it identifies an armoured vehicle target, such as a tank, the Jaeger-C will enter ‘Goliath mode’ and roll at up to 50 mph towards the vehicle. When it is close enough, the on-board explosive will automatically detonate, causing critical damage to the target.

It may be possible for a tank to react in time and destroy a fast-approaching Jaeger-C, but this would become very difficult if multiple attack robots were swarming the target vehicle at once.

The Jaeger-C carries an armor-piercing shaped charge; the size is unspecified, but it’s likely to be at least comparable to the 20-pound warhead on the FGM-148 Javelin and has the huge advantage of attacking the belly of the tank, where the armor is thinnest.

No human operator needs to be present – the Jaeger-C is able to perform all of these functions, including recognising a target and deciding whether or not to detonate, by itself.

Link: Source

3 – US Air Force MQ-9 Reaper

In December 2020, Forbes ran this headline:

U.S. To Equip MQ-9 Reaper Drones With Artificial Intelligence

This caused a bit of a stir in the existential risk community.

The Pentagon’s Joint Artificial Intelligence Center has awarded a $93.3 million contract to General Atomics Aeronautical Systems Inc (GA-ASI), makers of the MQ-9 Reaper, to equip the drone with new AI technology. The aim is for the Reaper to be able to carry out autonomous flight, decide where to direct its battery of sensors, and to recognize objects on the ground.

In September, the Air Force announced that General Atomics had flown a Reaper fitted with a new device known as an Agile Condor pod under its wing for the first time. Agile Condor, which has been in development by the Air Force Research Laboratory for some years, is effectively a flying supercomputer – ‘high-performance embedded computing’ – optimized for artificial intelligence applications. Built by SRC Inc, it packs the maximum computing capacity into the minimum space, with the lowest possible power requirements. Its modular architecture is built around machine learning (suggesting a lot of GPUs or other processors optimized for parallel processing), and the makers anticipate upgrades to neuromorphic computing hardware, which mimics the human brain.

In short, this new ‘smart’ drone promises to reduce the number of human hours required to review incoming data and make critical decisions.

“Instead of taking hours, sometimes days or even weeks – decisions can now be made in near real-time. If the system detects an anomaly on the ground, warfighters are alerted within minutes, allowing them to investigate and act while it’s still relevant,” according to SRC’s page on Agile Condor.

However, this also opens up the capability for the drone to act autonomously, and make decisions by itself.

It also opens up the possibility of the Reaper operating on its own. An Air Force slide of the Agile Condor concept of operations shows the drone losing both its communications link and GPS navigation at the start of its mission. An existing Reaper would circle in place or fly back to try and re-establish communications; the AI-boosted version uses its AI to navigate using landmarks and find the target area – as well as spotting threats on the ground and changing its flight path to avoid them.

Though it may seem harmless for an autonomous drone to use its new-found intelligence to avoid obstacles, it will be extremely tempting for military personnel to rely more and more on their drones’ autonomous capabilities, especially when there is a communications or GPS disconnection at critical moments. Eventually, they might even delegate the task of deciding whether or not to strike to the drones’ onboard system.

Unfortunately, the Pentagon’s policy on ‘human judgement’ is sufficiently vague to allow it to mean whatever they decide is in their best interests at the time:

When it comes to autonomous weapons, the Pentagon’s stated policy is that a human operator will always make the firing decision. But this policy has some flexibility: it simply demands “appropriate levels of human judgment,” whatever that means.

It seems that more and more critical decisions will be made ‘on the fly’ from now on.

Link: Source

Should algorithms decide who lives and who dies?

We believe that algorithms should not be empowered to decide who lives and who dies. If you agree, we need your help to demonstrate widespread support for this cause. Will you help us to #BanSlaughterbots, and take action against these weapons?

Why support a ban on Autonomous weapons? https://futureoflife.org/aws/why-support-a-ban-on-autonomous-weapons/ Tue, 26 Oct 2021 00:00:00 +0000

Why support a ban on autonomous weapons?

Artificial Intelligence (AI) will soon become the most powerful technology ever created. It can help humanity flourish like never before – if we use it wisely and ethically, drawing a clear red line between acceptable and unacceptable use of AI. This line must ensure that humans retain control over the decision to take a life. Why?

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Lethal Autonomous Weapons https://futureoflife.org/podcast/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/ Thu, 25 Feb 2021 00:00:00 +0000

Top Myths and Facts on Human Control of Autonomous Weapons https://futureoflife.org/aws/top-myths-and-facts-on-human-control-of-autonomous-weapons/ Sat, 10 Oct 2020 00:00:00 +0000

There are a number of myths surrounding the issue of human control over lethal autonomous weapons.

Many people who wish to ensure the safe emergence of autonomous weapons believe that it is necessary to enforce a ‘mandate of human control’ over lethal autonomous weapons in order to mitigate some of the risks associated with this new technology.

This page debunks some of the most common myths about the mandate of human control.

1 – Defense systems

Myth: Mandating human control would outlaw autonomous missile defense systems.

Fact: Missile defense systems and other anti-materiel weapons aren’t lethal AWS systems. Their targets are not humans.

Lethal AWS systems refer to a narrow subset of autonomous weapons systems where the target of the weapon system is a human. Autonomous weapons systems designed to defend against incoming missiles, or other anti-materiel targets, would not be subject to the mandate.

2 – Drone warfare

Myth: Mandating human control would outlaw drone warfare.

Fact: Today’s drones are semi-autonomous. They already keep a human in the loop.

Current drone warfare often employs AI for many of its functions, but human decision-making is maintained across the life cycle of identifying, selecting and engaging a target. The presence of this human-machine interaction is what characterizes the semi-autonomous systems in use today. Mandating human control would not affect current military practice in drone warfare, which keeps a human in the loop.

3 – Military applications

Myth: Mandating human control would outlaw military applications of AI.

Fact: Lethal AWS are a very small subset of military AI applications. They can still be AI powered, but will require humans in the loop of decision making.

AI is already widely used in the military and has many benefits, such as improving the precision, accuracy, speed, situational awareness, and detection and tracking functions of weapons. These functions can help commanders make more informed decisions and minimize civilian casualties. The semi-autonomous weapons systems in drones today are an example of how AI can be used to enhance human decision making. Hence, mandating human control over lethal AWS would not limit the use of AI in weapons systems.

4 – Mutually-Assured Destruction

Myth: A lethal AWS arms race would lead to stable mutually assured destruction.

Fact: An arms race would be destabilizing, lower the threshold for conflict, and would introduce new international security risks.

Many see lethal AWS as conferring a decisive strategic advantage, similar to nuclear weapons, and see the inevitable endpoint of an arms race as one of stable mutually assured destruction. In contrast, however, lethal AWS do not require expensive raw materials or extensive expertise, and will likely be highly scalable, making them a cheap and accessible weapon of mass destruction. Furthermore, lethal AWS have serious weaknesses in terms of the reliability of their performance and their vulnerability to hacking. It is more likely that the endpoint of an arms race would be catastrophically destabilizing.

5 – Exclusiveness

Myth: Only countries at the cutting edge of AI development will have lethal autonomous weapons.

Fact: Small, cheap, scalable lethal AWS would proliferate and be readily accessible to any state or even non-state actors (law enforcement, terrorists, despots).

The ability to easily scale is an inherent property of software. Small, cheap variants of lethal AWS could easily be mass-produced and proliferate not only to any state globally, but also to non-state actors. Hence, lethal AWS could pose a significant national security risk beyond military applications, being used by terrorist groups, as weapons of assassination, or by law enforcement or border patrol.

6 – Replace soldiers

Myth: Lethal AWS will replace soldiers and save lives.

Fact: Human soldiers can stay in the loop while operating weapons remotely from a safe location, as is done with drones in use today.

Semi-autonomous systems already enable the removal of the human from the battlefield, saving soldiers’ lives and allowing drones to be operated remotely. Keeping humans in the loop can also help to save civilian lives, as robust human-machine decision-making can minimize errors and improve situational awareness, thereby helping to reduce non-combatant casualties.

7 – Cheating

Myth: Cheating will make any mandate useless.

Fact: Despite cheating, the bio- & chem-weapon bans have created powerful stigma & prevented large-scale use.

The chemical weapons ban has robust verification and certification mechanisms, but other bans, such as the biological weapons ban, do not have any such mechanism. In both cases, to date, there has not been a large-scale violation of these bans. Arguably, it is not the legal authority of a weapons ban that has prevented large-scale use and cheating, but the powerful stigma that they have created. Like lethal AWS, chemical and biological weapons can be created at scale and without exotic or expensive materials, but this largely hasn’t happened due to the powerful stigma against these weapons.

8 – Pacifists

Myth: Only pacifists support mandating human control.

Fact: Most non-pacifists support bans on biological, chemical, and other inhumane or destabilizing weapons.

Banning the use of inhumane and destabilizing weapons is a fairly non-controversial topic, as few would argue in favor of the development of biological or chemical weapons. While many groups from the pacifist community are in favor of mandating human control, many of the most vocal advocates for human control are those from the technology, science, and military communities. The FLI Pledge in support of banning lethal autonomous weapons has been signed by over 3,000 individuals, many of whom are leading researchers and developers of AI.

9 – Compliant

Myth: Lethal AWS will be reliable, predictable, and always comply with the human commander’s intent.

Fact: Military advantage is contingent on AWS systems behaving autonomously and unpredictably, especially when encountering other AWS.

Lethal autonomous weapons systems are inherently unpredictable, as by definition they are designed with autonomous decision-making authority to respond to unforeseen and rapidly evolving environments. This is especially relevant in the context of two adversarial weapon systems, where unpredictability is desired to avoid being defeated by the other system. Thus, unpredictability is encouraged, as it confers a strategic advantage.

10 – Defenseless

Myth: Adopters of human control can’t defend against lethal AWS.

Fact: The best anti-drone technology isn’t drones, just as the best defense against chemical weapons isn’t chemical weapons.

The defense systems required for lethal AWS are likely to be different technologies than lethal AWS. Furthermore, if a technology is developed using AI to defend against lethal AWS, much like missile defense systems, it would not be subject to any requirement for human control as the target of the weapon is another weapon as opposed to a human target.

Infographic

You can view all of these myths and facts as an infographic, available to view and download here.

FLI’s Position on Lethal Autonomous Weapons https://futureoflife.org/aws/flis-position-on-lethal-autonomous-weapons/ Fri, 05 Jun 2020 00:00:00 +0000

Described as the third revolution in warfare after gunpowder and nuclear weapons, lethal autonomous weapons (AWS) are weapon systems that can identify, select and engage a target without meaningful human control. Many semi-autonomous weapons in use today rely on autonomy for certain parts of their system but have a communication link to a human who will approve or make decisions. In contrast, a fully autonomous system could be deployed without any established communication network and would independently respond to a changing environment and decide how to achieve its pre-programmed goals. It would have an increased range and would not be subject to communication jamming. Autonomy is present in many military applications that do not raise concerns, such as take-off, landing, and refuelling of aircraft; ground collision avoidance systems; bomb disposal; and missile defence systems. The ethical, political and legal debate underway concerns autonomy in the use of force and the decision to take a human life.

Lethal AWS may create a paradigm shift in how we wage war. This revolution will be one of software: with advances in technologies such as facial recognition and computer vision, autonomous navigation in congested environments, and cooperative autonomy or swarming, these systems can be used in a variety of assets, from tanks and ships to small commercial drones. They would allow highly lethal systems to be deployed in the battlefield that cannot be controlled or recalled once launched. Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters like age, gender, ethnicity or political leaning (if such information were available). Because lethal AWS would greatly decrease personnel costs and could be obtained cheaply (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapon of mass destruction.

Some believe that lethal AWS could make war more humane and reduce civilian casualties by being more precise and taking more soldiers off the battlefield. Others worry about accidental escalation and global instability, and the risk of these weapons falling into the hands of non-state actors. Over 4,500 AI and robotics researchers, 250 organizations, 30 nations and the Secretary General of the UN have called for a legally-binding treaty banning lethal AWS. They have been met with resistance from countries developing lethal AWS, which fear the loss of strategic superiority.

There is an important conversation underway about how to shape the development of this technology and where to draw the line in the use of lethal autonomy. This will set a precedent for future discussions around the governance of AI.

An Early Test for AI Arms Race Avoidance & Value Alignment

The goal of AI governance is to ensure that increasingly powerful systems are safe and aligned with human values.

When thinking about the long-term future, it is important not only to craft the vision for how existential risk can be mitigated, but also to define the appropriate policy precedents that create a dependent path towards the desired long-term end-state. A pressing issue in shaping a positive long-term future is ensuring that increasingly powerful artificial intelligence is safe and aligned with human values.

Legal & Ethical Precedent

The development of safe and aligned artificial intelligence in the long-term requires near-term investments in capital, human resources, and policy precedents. While there have been increases in investment into AI safety, especially for “weak” AI, it remains a grossly underfunded area, especially in contrast to the amount of human and financial capital directed towards increasing the power of AI systems. From a policy perspective, the safety risks of artificial intelligence have only recently begun to be appreciated and incorporated into mainstream thinking.

In recent years, there has been concrete progress in the development of ethical principles on AI. Starting with the Asilomar AI Principles, subsequent multi-stakeholder efforts have built on this foundation, including the OECD Principles on AI and the IEEE’s Ethically Aligned Design, with varying degrees of emphasis on AI safety. A recent paper by the Berkman Klein Center surveyed the landscape of multistakeholder efforts on AI principle development and detailed remarkable convergence around eight key themes: safety and security, accountability, human control, responsibility, privacy, transparency and explainability, fairness, and promotion of human values. The development of consensus principles that AI should be ethical is a welcome first step. However, much like technical research, principles alone are insufficient in their robustness and capacity to adequately govern artificial intelligence. Ensuring a future where AI is safe and beneficial to humanity will require us to move beyond soft law and develop governance mechanisms that ensure the correct policy precedents are set in the near term to steer AI in a direction that is beneficial to humanity.

Lethal autonomous weapons systems, which can be highly scalable, represent a new category of weapons of mass destruction.

It is in this context that the governance of lethal autonomous weapons systems (lethal AWS) emerges as a high priority policy issue related to AI, both in terms of the importance for human-centric design of AI and society’s capacity to mitigate arms race dynamics for AI. Beyond the implications for AI governance, lethal autonomous weapons represent a nascent catastrophic risk. Highly scalable embodiments of lethal autonomous weapons (i.e. small & inexpensive autonomous drones) represent a new category of weapons of mass destruction.

The Importance of Human Control

Lethal autonomous weapons systems refer to weapons or weapons systems that identify, select, and engage targets without meaningful human control. How to define “meaningful” control remains a topic of discussion, and other characterizations include “human-machine interaction,” “sufficient control,” and “human in the loop,” but central to all of these characterizations is the belief that human decision making must be encompassed in the decision to take a human life.

We believe there are many acceptable and beneficial uses of AI in the military, such as its use in missile defense systems, supporting and enhancing human decision making, and increasing capacity for accuracy and discrimination of legitimate targets which has the potential to decrease non-combatant casualties. However these applications would not meet the criteria of being a lethal autonomous weapon system, as these applications either have a non-human target (i.e. incoming missile) or are reliant on robust human-machine interaction (i.e. retain human control). Furthermore, to our knowledge, all of the systems currently in use in drone warfare require a human in the loop, and hence are also exempt.

If the global community establishes a norm that it is appropriate to remove humans from the decision to take a human life, and to cede that moral authority to an algorithm-enabled weapon, it becomes difficult to envision how more subtle issues surrounding human responsibility for algorithmic decision making, such as the use of AI in the judicial system or in medical care, can be resolved. Condoning the removal of human responsibility, accountability and moral agency from the decision to take a human life arguably sets a dire precedent for the cause of human-centric design of more powerful AI systems of the future.

Lethal autonomous weapons systems can identify, select, and engage targets without meaningful human control.

Furthermore, there are substantial societal, technical and ethical risks to lethal autonomous weapons that extend beyond the moral precedent of removing human control over the decision to enact lethal harm. Firstly, such weapons systems run the risk of unreliability, as it is difficult to envision any training set that can approximate the dynamic and unclear context of war. The issue of unreliability is compounded in a future where lethal autonomous weapons systems interact with the lethal autonomous weapons systems of an opposing force, since a weapon system will intentionally be designed to behave unpredictably in order to defeat the AI-enabled counter-measures of an opposing enemy. Fully autonomous systems also pose unique risks of unintentional escalation, as the systems will make decisions faster than human speed, reducing the time allowed for intervention in an escalatory dynamic. Perhaps most concerning is the fact that such weapons systems would not require sophisticated or expensive supply chains that would only be accessible to leading military powers. Small lethal autonomous weapons could be produced cheaply and at scale, and it has been argued by Stuart Russell and others that they would represent a new class of weapons of mass destruction. Such a class of lethal autonomous weapon would be deeply destabilizing due to its risk of proliferation and incentives for competition, as these systems could be produced by and for any state, as well as for non-military actors such as law enforcement or even terrorist groups.

Establishing International Governance

In terms of governance, the International Committee of the Red Cross (ICRC) has noted that there are already limits to autonomy in the use of force under existing International Humanitarian Law (IHL), or the “Law of War,” but notable gaps remain in defining the level of human control required for an operator to exercise the context-specific judgments that IHL requires. Hence, new law is needed, and the prospective governance of lethal autonomous weapons may be an early test of the global community’s ability to coordinate on shared commitments for the development of trustworthy, responsible, and beneficial AI. Such an achievement would go a long way toward avoiding dangerous arms race dynamics between near-peer adversarial nations in AI-related technology. If nation-states cannot develop a global governance system that de-escalates and avoids such an arms race in lethal autonomous weapons, then it is nearly impossible to see how a reckless race towards AGI, with a winner-take-all dynamic, is avoided.

To be clear: it is FLI’s opinion and that of many others in the AI community, including 247 organizations, 4,500 researchers, 30 nations, and the Secretary General of the UN, that the ideal outcome for humanity is a legally-binding treaty banning lethal AWS. This ban would be the output of multilateral negotiations by nation-states and would be inclusive of a critical mass of countries leading the development of artificial intelligence. A legally-binding ban treaty would both set a powerful norm to deescalate the arms race and set a clear precedent that humans must retain meaningful control over the decision to enact lethal harm. Such a treaty would ideally include a clear enforcement mechanism, but other historical examples, such as the Biological Weapons Convention, have been net-positive without such mechanisms.

However, FLI also recognizes that such a treaty may not be adopted internationally, especially by countries leading the development of these weapons systems, the number of whom is increasing. The United Nations, through the Convention on Certain Conventional Weapons (CCW), has been discussing the issue of lethal autonomous weapons since 2014, and those negotiations have made little progress beyond the “guiding principles” stage of governance, likely due in part to the requirement of unanimity for developing new law. Hence, while we recognize the benefits of states meeting regularly to discuss the issue, it seems that the most this forum can deliver is incremental progress, as it is unlikely to yield a new protocol that ensures meaningful human control in weapons systems. Therefore urgent action to stigmatize lethal autonomous weapons is needed, and we must also consider supplemental paths to ensure meaningful human control over these weapons systems.

In the absence of governance on lethal autonomous weapons, it is likely that there will be an unchecked arms race between adversarial nation-states.

In the prospective absence of a legally binding treaty, there is a dangerous alternative future in which few, or no, norms or agreements on the governance of lethal autonomous weapons are developed in time to prevent or mitigate their use in battle. In this scenario, it is likely that there will be an unchecked arms race between adversarial nation-states and the setting of a disastrous precedent for the cause of human-centric design of AI.

Thankfully, there is a far-ranging, undeveloped continuum of policy options between the two poles of no effective governance of lethal AWS at all at one end and an outright ban on the other end. Furthermore, these intermediary options could be used to ensure human control in the near term, while helping to generate the political will for an eventual treaty. Such intermediaries could include the development of new law or agreement on the level of human control required for weapons systems at the national level or in other international fora outside of the CCW, establishing international agreement on the limits to autonomy under the law of war, similar to the Montreux process, weapons reviews, or political declarations. Since such policy actions that rest on the continuum between no governance at all and a ban might be necessary, we would be remiss to ignore them entirely, as they may be able to play a key role in supplementing efforts towards an eventual ban. Furthermore, we see an urgent need to expand the fora for discussion of the risks and legality of lethal autonomous weapons at both the national and international level to continue to include militaries, but also AI researchers, the private sector, national security experts, and advocacy groups within civil society, to name a few.

There is an urgent need to develop policies that provide some meaningful governance mechanisms for lethal autonomous weapons before it is too late. Once lethal AWS are integrated into military strategy or, worse, mass-produced and proliferated, the opportunity for preventative governance of the worst risks they pose will likely have passed.

Therefore, FLI is open to working with all stakeholders in efforts to develop norms and governance frameworks that ensure meaningful human control and minimize the worst risks associated with lethal AWS. We do so while still maintaining the position that the most beneficial outcome is an outright, legally enforceable ban on lethal AWS.

FLI Podcast: Why Ban Lethal Autonomous Weapons? https://futureoflife.org/podcast/fli-podcast-why-ban-lethal-autonomous-weapons/ Tue, 02 Apr 2019 00:00:00 +0000

Autonomous Weapons Open Letter: Global Health Community https://futureoflife.org/open-letter/medical-lethal-autonomous-weapons-open-letter/ Wed, 13 Mar 2019 00:00:00 +0000

Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact Dr. Emilia Javorsky.

Lethal Autonomous Weapons: An Open Letter from the Global Health Community

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.

In contrast to semi-autonomous weapons that require human oversight to ensure that each target is validated as ethically and legally legitimate, such fully autonomous weapons select and engage targets without human intervention, representing complete automation of lethal harm. This ability to selectively and anonymously target groups of people without human oversight would carry dire humanitarian consequences and be highly destabilizing. By nature of being cheap and easy to mass produce, lethal autonomous weapons can fall into the hands of terrorists and despots, lower the barriers to armed conflict, and become weapons of mass destruction enabling very few to kill very many. Furthermore, autonomous weapons are morally abhorrent, as we should never cede the decision to take a human life to algorithms. As healthcare professionals, we believe that breakthroughs in science have tremendous potential to benefit society and should not be used to automate harm. We therefore call for an international ban on lethal autonomous weapons.

There’s a similar letter available to AI & Robotics Researchers here.

Add your signature

Podcast: Six Experts Explain the Killer Robots Debate https://futureoflife.org/podcast/podcast-six-experts-explain-the-killer-robots-debate/ Tue, 31 Jul 2018 00:00:00 +0000

About the LAWS Pledge https://futureoflife.org/aws/laws-pledge/ Wed, 18 Jul 2018 00:00:00 +0000

About the Lethal Autonomous Weapons Systems (LAWS) Pledge

LAWS Pledge

Sign the pledge here.

LAWS Frequently Asked Questions

What sort of weapons systems do “LAWS” refer to? Won’t militaries without LAWS be at a disadvantage against adversaries who develop them? Won’t LAWS save lives by having robots die rather than soldiers, and minimizing collateral damage? And more.

Why Did Others Sign the Pledge?

Artificial Intelligence is a complex technology that could fail in grave and subtle ways. Humanity will be better served if this technology is deliberately developed for civilian purposes first, and militaries exhibit restraint in its use until its properties and failure modes are deeply understood.

Those favoring development of autonomous lethal weapons fantasize about precisely targeted strikes against enemy combatants — “bad guys” by their definition — while sparing uninvolved civilians. But once a technology exists, it eventually falls into the hands of “rogue” actors; and indeed the “rogues” may turn out to include those who sponsored the development in the first place.

Lethal autonomous weapons will make it far easier for war criminals to escape prosecution.

The Robotics Council of the Brazilian Computer Society (CE-R SBC) would like to state that we are against all forms of lethal autonomous weapons, A.I. killer robots, or any other form of robotic or autonomous machine where the decision to take a human life is delegated to the machine. Killer robots should be completely banned from our planet.

Autonomous weapons are a threat to every human being and every form of life. In most cases there will be no practical defense against them. We must pledge not to create them and to enact an international treaty prohibiting their development.

It would be reckless for international governments to ignore the need for a binding Treaty agreement on the regulation of autonomous lethal weapons. The urgency of this requirement is increasing quickly.

WeRobotics believes that the future of robotics and artificial intelligence technologies must be driven by a core ethical commitment to improving human and ecological well-being above all. Autonomous weapons systems threaten both human life and the stability of planetary society and ecology by shifting control over the fundamental decisions of life and death to algorithmic processes that may likely be immune to ethical judgment and human control. As we help to build a future in which robotics and artificial intelligence are applied to building wealth and solving problems for all people, we must insist that autonomous weapons remain off limits to all countries, based on commonly agreed upon global ethical standards.

Lucid believes AI to be one of the world’s greatest assets in solving global problems in all industries. We see the possibilities of AI-for-good everywhere. Uses of AI for weaponry pits country against country, rather than using AI to help unite Humanity as otherwise possible and needed. Lucid will not allow use of any AI technology it creates for weaponry.

Press Release for LAWS Pledge

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions associated with LAWS, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain –  which could become destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the UN’s Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS. At the most recent meeting in April, twenty-six countries announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

Lethal Autonomous Weapons Pledge https://futureoflife.org/open-letter/lethal-autonomous-weapons-pledge/ Wed, 06 Jun 2018 00:00:00 +0000

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

Independent from this pledge, 30 countries in the United Nations have explicitly endorsed the call for a ban on lethal autonomous weapons systems: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, El Salvador, Ghana, Guatemala, Holy See, Iraq, Jordan, Mexico, Morocco, Namibia, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, Zimbabwe.

 

Add your signature

To express your support, please add your name below:


For more information about what is at stake and about ongoing efforts at the UN, please visit autonomousweapons.org and stopkillerrobots.org.

If you have questions about this letter, please contact FLI.

]]>
Killer robots: World’s top AI and robotics companies urge United Nations to ban lethal autonomous weapons https://futureoflife.org/ai/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/ Sun, 20 Aug 2017 00:00:00 +0000 https://futureoflife.org/uncategorized/killer-robots-worlds-top-ai-robotics-companies-urge-united-nations-ban-lethal-autonomous-weapons/ Press release from Faculty of Engineering at UNSW, Sydney, Australia.

Open letter from the leaders of top robotics & AI companies is launched at the world’s biggest artificial intelligence conference, as the UN delays until later this year its meeting to discuss the robot arms race

An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries urges the United Nations to urgently address the challenge of lethal autonomous weapons (often called ‘killer robots’) and ban their use internationally.

A key organiser of the letter, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, released it at the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, the world’s pre-eminent gathering of top experts in artificial intelligence (AI) and robotics. Walsh is a member of the IJCAI 2017 conference committee.

The open letter is the first time that AI and robotics companies have taken a joint stance on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons. Of these, 19 have already called for an outright ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter states. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” it states, concluding with an urgent plea for the UN “to find a way to protect us all from these dangers.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (USA)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robots (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Their companies employ tens of thousands of researchers, roboticists and engineers, are worth billions of dollars and cover the globe from North to South, East to West: Australia, Canada, China, Czech Republic, Denmark, Estonia, Finland, France, Germany, Iceland, India, Ireland, Italy, Japan, Mexico, Netherlands, Norway, Poland, Russia, Singapore, South Africa, Spain, Switzerland, UK, United Arab Emirates and USA.

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons. The 2015 letter was signed by thousands of researchers in AI and robotics working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky, among others.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” said Walsh. “It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

“Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for such a ban. This helped push this issue up the agenda at the United Nations and begin formal talks. I am hopeful that this new letter, adding the support of the AI and robotics industry, will add urgency to the discussions at the UN that should have started today.”

“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” said Ryan Gariepy, founder & CTO of Clearpath Robotics, who was the first to sign.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he added. “The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

Yoshua Bengio, founder of Element AI and a leading ‘deep learning’ expert, said: “I signed the open letter because the use of AI in autonomous weapons hurts my sense of ethics, would be likely to lead to a very dangerous escalation, because it would hurt the further development of AI’s good applications, and because it is a matter that needs to be handled by the international community, similarly to what has been done in the past for some other morally wrong weapons (biological, chemical, nuclear).”

Stuart Russell, founder and Vice-President of Bayesian Logic, agreed: “Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”

DOWNLOADS AVAILABLE FOR MEDIA USE

  • Portraits: Photos of Toby Walsh with UNSW’s Baxter Collaborative Robot, made by Rethink Robotics (a U.S. company founded by Australian Rodney Brooks). Credit: Grant Turner/UNSW.
  • Killer robots: Images of autonomous weapon systems currently in use or being developed.
  • The 2017 Open Letter: An open letter signed by 116 founders of robotics and artificial intelligence companies from 26 countries.


BACKGROUND

The International Joint Conference on Artificial Intelligence (IJCAI) is the world’s leading conference on artificial intelligence. It has been held every two years since 1969, and annually since 2015. It attracts around 2,000 of the best researchers working in AI from around the world. IJCAI 2017 is currently being held in Melbourne, Australia.

A news conference will be held at 11am on Monday 21 August 2017 to open the IJCAI 2017 conference in Banquet Room 201 of the Melbourne Exhibition and Conference Centre, where we will answer questions on the technical, legal and social challenges posed by autonomy, especially in areas like the battlefield, and on the open letter. Address: 1 Convention Centre Pl, South Wharf VIC 3006.

To obtain a press pass to attend the IJCAI 2017 conference in Melbourne, please contact Vesna Sabljakovic-Fritz, IJCAI executive secretary, on sablja@dbai.tuwien.ac.at

Two years ago, at IJCAI 2015, more than 1,000 AI researchers released an open letter calling for a ban on lethal autonomous weapons. Signatories to this letter have now grown to over 20,000.

As part of Melbourne’s Festival of Artificial Intelligence, there will be a public panel on Wednesday 23 August, 5.30 to 7.00pm, entitled ‘Killer robots: The end of war?’. The panel features Stuart Russell, Ugo Pagallo and Toby Walsh. This is part of AI Lounge, a conversation about artificial intelligence open to the public and media every night from 21 to 25 August 2017 (see http://tinyurl.com/ailounge).

Toby Walsh’s new book, It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots, just published by Black Inc, covers the arguments for and against lethal autonomous weapons in detail.

]]>
Autonomous Weapons Open Letter: AI & Robotics Researchers https://futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/ Tue, 09 Feb 2016 00:00:00 +0000 https://futureoflife.org/uncategorized/open-letter-autonomous-weapons-ai-robotics/ This open letter was announced on July 28 at the opening of the IJCAI 2015 conference.
Journalists who wish to see the press release may contact Toby Walsh.
Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact Max Tegmark.


Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Add your signature

]]>