New Technologies, Security and Peace By Denise Garcia  |  24 October, 2021

UN Role in Restraining the Dark Side of Emerging Technology


In May 2021, the United Nations Security Council met for the first time to discuss the role of emerging technologies, such as Artificial Intelligence (AI), in peace and security. In the following month, the Security Council met to discuss how to keep peace in cyberspace, also for the first time, ushering emerging technologies to the highest level of diplomatic efforts at the United Nations (UN).

According to the United Nations Charter, the Council has custodianship of decisions on peace, security, the protection of civilians, and the use of force in international relations. The focus on emerging technologies and cyberspace as a novel domain of international relations marks a noteworthy evolution of the UN's role in promoting much-needed norms of common behaviour in areas not anticipated by the Charter's drafters. In 2020, member states spent $1 trillion to restore networks that had been breached or to combat malicious uses. The need to create a global cooperative framework on cyberspace, where states can pool their capacities and assist those who lack them, is critical.

UN Secretary-General António Guterres has played a compelling role. He created a High-Level Panel on Digital Cooperation that met in 2018-2019. In March 2019, he made a staunch call for a prohibition of autonomous weapons: 'machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.'

Based upon the Panel's recommendations, and after consultations with communities ranging from academia and the private sector to governments and civil society, Guterres launched a Road Map for Digital Cooperation in time for the UN's 75th anniversary. The Road Map aims to bridge the digital divide between developing and developed countries, curb the spread of misinformation, protect critical digital infrastructure, and protect people's dignity. It also seeks to restrain the weaponization of emerging technologies and to direct such technologies solely towards the common good of humanity.

The proactive and preventive action-oriented role that the UN Secretary-General has assumed firmly places the United Nations at the centre of global action on emerging technologies. For Guterres, four significant threats to global security today can endanger the future of humanity: mounting geopolitical tensions, the climate crisis, global mistrust, and the dark side of technology, used to commit abuses and crimes, spread hate and misinformation, and oppress people in a growing number of countries. Technological advancement is fast outpacing diplomatic efforts, and the world is not prepared for the impact of the Fourth Industrial Revolution.

In the high-level segment of the UN General Assembly in September 2021, Guterres launched the Common Agenda, a comprehensive path forward to tackle the four main threats to humanity, utilising the already existing platform of the UN Sustainable Development Goals. The Common Agenda results from a two-year-long crowd-sourcing consultation with thousands of people worldwide and represents a pivot towards protecting future generations and including youth. Admittedly, there is much work ahead to carry out the Road Map for Digital Cooperation, especially in the areas of misinformation, the spread of hate, and the digital divide between poor and rich countries. However, one area within emerging technologies has come a long way in the last five years: the first formal meeting of the Group of Governmental Experts (GGE) related to emerging technologies in the area of lethal autonomous weapons systems was held in Geneva from 13 to 17 November 2017.

The GGE was established within the remit of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW), an International Humanitarian Law (IHL) instrument setting limits on what is lawful and unlawful during war and conflict. In the past, the CCW has preventively prohibited blinding laser weapons. The initial discussions on the moral, legal and ethical implications of autonomous weapons were brought to the Human Rights Council in 2013. A year later, France and Germany decided to start discussions within the CCW, which led to the formation of the GGE. Since then, the GGE has produced a set of ten principles on autonomous weapons that affirm the applicability of IHL to all new systems and hold that human responsibility cannot be delegated to machines.

Most states view this outcome as far from sufficient to meet the challenges posed by autonomous systems. The UN Secretary-General calls on states to set limits on autonomous weapons, motivated by the belief that the use of these systems will profoundly transform war and drive humanity into a morally indefensible realm.

In May 2021, the International Committee of the Red Cross (ICRC) – the guardian of IHL – published its position on the matter and definitively called upon states to negotiate a new legally binding instrument on autonomous weapons. This new position promises to be a momentous milestone because the ICRC has commanding clout: all states have ratified the central IHL conventions, known as the Geneva Conventions, and the ICRC holds an authoritative place in any discussion on weapons use and control. The ICRC's view is that the use of such systems poses a significant risk to civilians and combatants, a risk compounded by the irregular and variable nature of outputs and outcomes generated by AI-enabled algorithms, which may not be able to comply with IHL. At the end of the day, who lives or who dies should not be relegated to sensor data and unpredictable software programming.

Three types of limits should therefore constitute a new international treaty: first, limits restricting targets to military objects, such as incoming missiles, and to situations where civilians are not present; second, limits on the duration and geographical scale of targeting, so that a human can exercise oversight even when a machine-learning algorithm has supplied the target; third, a requirement of human control and supervision that allows for timely intervention.

Are the UN member states on board, and what are the prospects of a new international treaty regulating all aspects of the development and deployment of autonomous weapons? The answer to the first part of the question is promising: most member states, along with AI scientists and civil society, would like to see comprehensive new international law on human-machine interaction, potentially combining prohibitions and regulations, which would be a relatively new form of global governance for a pressing international security issue. Such a treaty would be innovative and would not fit the moulds of existing disarmament and arms control regimes. It concerns how humans remain in charge of overseeing the deployment of new technologies in existing systems. It must also be future-proof, remaining relevant in the face of new technological innovations.

The CCW is a forum that includes the leading producers of these technologies – Australia, India, Israel, Japan, Republic of Korea, Turkey, United Kingdom, and the United States – and these states continue to act as stumbling blocks to progress. The deliberations at the UN have nonetheless come a long way, and these countries now appear to be a minority. Continuing the logic of militarizing yet another new technology, AI, is seen by most as a betrayal of efforts to address the other pressing challenges facing humanity, as highlighted by the UN Secretary-General.

AI has the potential to be the emerging technology that helps tackle diseases and contributes to solving the climate crisis; it should not be weaponized as nuclear technology was. All in all, the UN has a central role to play in all aspects of emerging technologies, and in serving as the forum that denies a few countries their march towards amplifying the dark side of technology.

Denise Garcia is a professor at Northeastern University, Boston. She is the author of the forthcoming book When A.I. Kills: Ensuring the Common Good in the Age of Military Artificial Intelligence and a member of the Toda Peace Institute International Research Advisory Council. She is also Vice-Chair of the International Committee for Robot Arms Control.