Regulating Autonomous Weapons Systems

By Stuart Casey-Maslen | 5 July 2021

With the advance of algorithmic technologies, the international legal regulation of fully autonomous weapons systems has risen up the international security and humanitarian agenda in recent years. But calls for the outright prohibition of such systems seem both premature and destined to fail. Premature, because we do not yet know for sure whether autonomous weapons would be more or less protective of life, especially with respect to civilians in situations of armed conflict. And destined to fail, because most major military powers are already hell-bent on fielding them, despite occasional public denials.

While weapons systems incorporating features of automaticity of action and reaction have existed for many years, the ever-increasing sophistication and phenomenal speed of decision-making by machine mean that, both as a means of warfare and in law enforcement, autonomous weapons systems may well become commonplace in years to come. Often confused with drones—where a human operator remotely pilots an aircraft (or other vehicle) and decides when and upon whom to fire—autonomous weapons systems are of a different order, being capable of targeting and/or firing on persons and objects independently of human direction.

There is justified apprehension about delegating the ability to use lethal or less-lethal force to a computer algorithm. But despite widespread humanitarian, technological, security, ethical, and legal concerns about the consequences that may ensue, as Human Rights Watch observed in its 2020 report, “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control”, China, Israel, Russia, South Korea, the United Kingdom, and the United States have already invested heavily in developing a range of autonomous weapons systems. Australia, Turkey, and other countries are also making investments. Indeed, in March 2021 a United Nations (UN) Security Council Panel of Experts on Libya reported that a Turkish-produced loitering munition, the STM Kargu “kamikaze” aircraft, had autonomously detected and attacked General Khalifa Haftar’s forces inside Libya in 2020. Seemingly, an “arms race in autonomy” (in Michael Klare’s words) is already underway.

The call by Human Rights Watch and other organisations in the Campaign to Stop Killer Robots for a total prohibition is supported by many governments, but their combined efforts may not prevent the deployment of fully autonomous systems. For once even a few military powers have developed and fielded them, the likelihood is that others will seek to follow.

The Campaign has focused its efforts on advocating the negotiation of a dedicated Protocol to the Convention on Certain Conventional Weapons (CCW). The CCW, which has attracted 125 States Parties, has six annexed protocols, dealing with anti-personnel mines, incendiary weapons, and blinding laser weapons, among others. But outlawing a potent new military technology was always going to be a hard sell in a forum dominated by the major arms producers and exporters, and where consensus is the pervasive tradition (if not necessarily the legal rule). After all, the only weapons comprehensively prohibited under CCW auspices are blinding laser weapons: Protocol IV was adopted pre-emptively in 1995 after an effective campaign by Human Rights Watch and the International Committee of the Red Cross (ICRC). Not since then has agreement been secured within the CCW for a total prohibition on any weapon. And even if it were, the CCW governs means of warfare, not any corresponding use in law enforcement.

Moreover, consensus among international lawyers that a total prohibition is either necessary or desirable has yet to emerge. In its General Comment 36 on the right to life, the UN Human Rights Committee, which oversees the 1966 International Covenant on Civil and Political Rights, to which 173 of 197 States are party, affirmed that “the development of autonomous weapon systems lacking in human compassion and judgment raises difficult legal and ethical questions concerning the right to life, including questions relating to legal responsibility for their use”. The Committee did not declare such systems unlawful, although it did call upon States Parties to refrain from deploying them until their compatibility with international humanitarian law (IHL) and the right to life had been confirmed.

Therein lies a dilemma. Is it to be argued that IHL does not regulate autonomous weapons systems at all (and perhaps that the principles of humanity and the dictates of public conscience of the Martens Clause demand their prohibition)? Or rather that IHL already regulates such systems and indeed renders them unlawful? Both arguments are tricky to sustain and do not enjoy general support. In a May 2021 position paper detailing its institutional stance, the ICRC called for the use of autonomous weapon systems to target human beings to be “ruled out” through the imposition of a prohibition on autonomous systems that are “designed or used to apply force against persons”. Where such systems are used against objects that may contain people (or where people may be nearby), the organisation recommends that limits be set on the types of target and on the duration, geographical scope, and scale of use, combined with requirements for human-machine interaction to “ensure effective human supervision, and timely intervention and deactivation”.

And what if—though admittedly it’s a big “if”—machines prove to be more respectful of life and limb, more precautionary, and more accurate in their dispensation of force? Do ethical concerns about computers possessing the power of life and death trump the humanitarian imperative? For many, the answer appears to be a resounding yes. In a paper submitted in 2018 to the States Parties to the CCW, the ICRC noted the loss of human dignity entailed by the lack of human agency in lethal decision-making. The contrary argument, of course, is that respect for humanitarian principles should prioritise the preservation of human life over the identity of the agent. After all, human respect for the lives of other humans in warfare has always been pitiful. Computer algorithms do not feel fear, harbour no venomous hatred or desire for revenge, and hold no contempt for others who are different. Is a future of increasingly widespread autonomous use of force inherently and necessarily a dystopian one? In truth, we do not yet know.

Stuart Casey-Maslen is Honorary Professor at the University of Pretoria in South Africa, where he teaches international human rights and humanitarian law, disarmament law, jus ad bellum, and the protection of civilians. He holds a doctorate in international humanitarian law and master’s degrees in forensic ballistics and in international human rights law. Professor Casey-Maslen is a member of the Toda International Research Advisory Council.