Lethal Autonomous Weapon Systems (LAWS) in War and Laws of War: Are Killer Robots Compatible with International Humanitarian Law?

By Laure de Rochegonde, PhD candidate at Sciences Po – CERI

“Terminator will not parade on July 14,” said Florence Parly, the French Minister of the Armed Forces, on April 5, 2019, as she presented the French strategy on artificial intelligence and defense. This was her way of announcing France’s decision not to develop “killer robots”, also known as “lethal autonomous weapon systems” (LAWS), which the US Department of Defense defined in 2012 as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator”. LAWS do not exist yet, but they are being developed as a result of the advances in artificial intelligence and robotics brought about by the fourth industrial revolution.

In this respect, some consider LAWS a new revolution in warfare, just like gunpowder and nuclear weapons before them. From a military point of view, they offer obvious operational, security, and economic advantages: LAWS could be faster, more resilient, and better coordinated than soldiers on the battlefield, giving the armies that field them a decisive tactical edge. Beyond the battlefield, automated weapon systems could cut costs and free human capacities for other tasks[1]. From this perspective, states that refuse to develop these weapons expose themselves to a technological and strategic lag that would be very difficult to make up[2].

However, LAWS raise many ethical and legal concerns. At the first Paris Peace Forum in November 2018, UN Secretary-General António Guterres called on states to “ban these politically unacceptable and morally repugnant weapons”. The emergence of LAWS has given rise to a wide-ranging debate, initiated by several NGOs that have been grouped together since 2013 in the Campaign to Stop Killer Robots. Opponents of LAWS denounce the risks these weapons pose both to international peace and security and to civilians in armed conflicts[3]. One of the main criticisms leveled at LAWS is their alleged incompatibility with the laws of war. The latter, which provide armed conflicts with a normative framework, rest on the standards enshrined in the four Geneva Conventions and their two Additional Protocols of 1977. According to their critics, killer robots would violate several of these principles.

Outlawing LAWS?

Under Article 36 of the First Additional Protocol to the Geneva Conventions, states have a duty to determine whether new weapons are compatible with international humanitarian law. This is why a group of governmental experts has met annually since 2014 to “explore and agree on possible recommendations and options related to emerging technologies in the field of LAWS”. The work of this group has already led all states to recognize that the laws of war apply to killer robots.

However, applicability does not mean compatibility. To be compatible with international humanitarian law, and in particular with jus in bello, which aims to limit the suffering caused by war, autonomous weapon systems must respect three principles: precaution, distinction, and proportionality[4]. Under the First Additional Protocol to the Geneva Conventions, weapons that cannot discriminate, that is, strike only military objectives, are illegal, and deploying them would breach the duty of precaution. In this respect, LAWS, which have been described as “weapons of indiscriminate lethality”, raise obvious safety issues.

In the same vein, the principle of distinction requires differentiating between combatants and civilians. Yet in contemporary conflicts the boundary between these two categories is often porous: applying the principle of distinction then depends on the context, and it is far from certain that LAWS will be endowed with the discernment needed to grasp it.

Finally, the principle of proportionality is even more difficult to satisfy, since it involves judging whether collateral damage would be excessive in relation to the expected military advantage. This requires a case-by-case assessment of the strategic and political context, of which a killer robot would hardly be aware.

Overall, respect for the laws of war rests on deliberative reasoning, for which humans seem better equipped than machines. As the computer scientist Noel Sharkey, founder of the International Committee on Robot Arms Control, explains, “in a war with non-uniformed combatants, knowing who to kill would have to be based on situational awareness and on having human understanding of other people’s intentions and their likely behavior”.

An accountability gap

In addition, LAWS raise significant accountability issues, since it is difficult to say who, among the designer, the developer, the manufacturer, and the military commander, bears responsibility for an autonomous weapon’s behavior. The lawyer Julien Ancelin speaks of a “vague responsibility”[5]. Yet the establishment of accountability is a prerequisite of jus in bello: actors who violate international humanitarian law must be able to be held responsible, tried, and, if need be, convicted. As Christof Heyns, the United Nations Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, put it: “if the nature of a weapon makes it impossible to establish responsibility regarding the consequences of its use, this weapon should be judged abominable, and its use declared unlawful”. On this view, killer robots appear hardly compatible with international humanitarian law.

In the fog of war

However, individuals as well as organizations, and states in particular, are themselves prone to falling short of international humanitarian law, whether knowingly or not, as a result of what Carl von Clausewitz famously called the “fog of war”. Some therefore reply to the criticism of LAWS that it is enough for them to be no more fallible than humans (something the Arkin test is designed to assess[6]) for their use to be desirable. Killer robots could then be deployed provided they comply with the laws of war at least as well as humans would in similar circumstances.

According to the philosopher Bradley Jay Strawser, there is even a moral imperative to use such technology, both for security reasons (it minimizes the risks to soldiers) and for humanitarian reasons (it could respect international humanitarian law better than humans do)[7]. The absence of emotion could thus make LAWS less prone to using excessive force under the sway of a feeling, be it fear for their own protection or the desire to avenge a fallen brother in arms. Killer robots could even encourage soldiers to respect international humanitarian law more strictly: by recording actions through their sensors, they would provide effective oversight of human conduct on the battlefield.

Of course, the functioning of an information-based weapon system depends on its programming, and humans, who are far from systematically complying with the laws of war, could program killer robots to commit abuses. To be sure, the malicious use of a weapon is not in itself proof of non-compliance with the laws of war, since, as Hugo Grotius pointed out in 1625, “a right does not at once cease to exist in case it is to some extent abused by evil men. Pirates also sail the sea, arms are carried also by brigands”. Nevertheless, unlike soldiers, LAWS would lack the alleged natural inhibition against killing, and the human capacity to hold fire out of compassion. The absence of emotion is thus a double-edged sword: a killer robot is devoid of the human emotions that can cause crimes, but also of those that can prevent them[8].

The weight of public opinion

Finally, the Martens clause, enshrined in the Hague Convention on the Laws and Customs of War on Land of 1899, states that when a technology is not covered by a specific convention, it remains subject to other international standards, including the dictates of the public conscience. In this regard, Human Rights Watch published in January the results of an opinion poll conducted by Ipsos in 26 countries, according to which 61% of respondents were opposed to the use of LAWS.

The debate on the compatibility of killer robots with the laws of war is therefore far from settled. It is worth recalling, finally, that the original purpose of international humanitarian law was to “humanize war”. Will the emergence of a dehumanized means of combat, and the gradual withdrawal of soldiers from the battlefield, paradoxically make war less inhuman?


[1] V. Boulanin and M. Verbruggen, “Mapping the Development of Autonomy in Weapon Systems”, SIPRI, November 2017.

[2] N. Leys, “Autonomous Weapon Systems and International Crises”, Strategic Studies Quarterly, Spring 2018.

[3] D. Garcia, “Lethal Artificial Intelligence and Change: The Future of International Peace and Security”, International Studies Review, Vol. 20, 2018.

[4] K. Anderson, D. Reisner, M. Waxman, “Adapting the Law of Armed Conflict to Autonomous Weapons Systems”, International Law Studies, Vol. 90, 2014.

[5] J. Ancelin, “Les systèmes d’armes létaux autonomes (SALA) : Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, La Revue des droits de l’homme, 2016.

[6] R. C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems”, Journal of Military Ethics, Vol. 9, No. 4, 2010.

[7] B. J. Strawser, “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles”, Journal of Military Ethics, Vol. 9, No. 4, 2010.

[8] J.-B. Jeangène Vilmer, “Terminator Ethics : Faut-il interdire les « robots tueurs » ?”, Politique étrangère, Winter 2014.
