
In Defense of (Virtuous) Autonomous Systems

The war in Ukraine is a reminder that the world’s major military powers are developing and deploying weapons with ever-increasing levels of autonomy in a nearly total vacuum of international law or generally accepted norms for how such systems should be designed and used. Meanwhile, almost 10 years have been wasted in conversations in Geneva under the auspices of the U.N.’s Convention on Certain Conventional Weapons (CCW), as NGOs such as Human Rights Watch (HRW), the Campaign to Stop Killer Robots and the Future of Life Institute pressed first for a total ban on autonomous weapons, then for a ban on just offensive autonomous weapons, and now for a requirement that there be “meaningful human control” over autonomous weapons.

It was obvious from the start that the major powers would never accept a ban, and the arguments adduced for a ban were as muddled as the concept of meaningful human control.

Yet we must norm this space if we care about justice in war. To do that requires careful thinking and the formulation of politically practical proposals of the kind to be discussed later in this article.

Among the many ethical issues that must be faced as we integrate digitally based autonomous systems throughout our society and economy, those that arise in connection with autonomous weapons are of the greatest urgency, precisely because we are delegating the kill decision to the machine.

Moral Gains from Autonomy?

Let us begin by reflecting on a possibility not acknowledged, for the most part, by the proponents of a ban, namely, that there might be moral gains from the introduction of autonomous weapons. By far the most compelling case of this kind is that made by Ronald Arkin in his 2009 book, Governing Lethal Behavior in Autonomous Robots. Do not dawdle over the particular architecture that Arkin suggests in that book, some of which is already dated. Appreciate, instead, his main point, which is that humans are notoriously unreliable systems, that human combatants commit war crimes with frightening frequency, and that what we must ask of autonomous weapons systems is not moral perfection, but simply performance above the level of the average human soldier.

There is not space here to review in detail the study by the U.S. Army Medical Command’s Office of the Surgeon General from the Iraq War upon which Arkin mainly bases his assessment of human combatant performance (Surgeon General 2006). Suffice it to say that the number of admitted war crimes by US troops, the number of unreported but observed war crimes, and the self-reported ignorance about what even constitutes a war crime are staggering. With such empirical evidence as background, Arkin’s claim to be able to build a “more moral” robot combatant seems far more plausible than one might initially have thought. Why?

Start with the obvious reasons. Autonomous weapons systems suffer from none of the human failings that so often produce immoral behavior in war. They feel no fear, hunger, fatigue or anger over the death of a friend. Move on to the slightly less obvious reasons. Thus, a robot, not fearing for its own well-being, can easily err on the side of caution, choosing not to fire in moments of doubt, where a human might rightly have to err on the side of self-defense. Then consider still more important design constraints, such as those embodied in Arkin’s “Ethical Adaptor,” into which are programmed all relevant parts of the International Law of Armed Conflict, International Humanitarian Law, and the rules of engagement specific to a given conflict arena or a specific action (Arkin 2009, 138-143). The Ethical Adaptor blocks the “fire” option unless all of those prescriptions are satisfied. Arkin’s robots could not fire at all (absent an override from a human operator) unless the most stringent requirements are met. In the face of uncertainty about target identification, discrimination, applicability of rules of engagement and so forth, the robot combatant defaults to the “no fire” option. Of course, other militaries could design their robots differently, say, by making “fire,” rather than “no fire,” the default. But hold that thought until we turn at the end to the discussion of a specific regulatory regime.
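To make the default-deny logic concrete, here is a minimal sketch in Python. It is not Arkin’s actual architecture; the constraint names and data fields are illustrative assumptions. The structural point is simply that “fire” is permitted only when every applicable prescription is affirmatively satisfied, and anything unknown resolves to “no fire.”

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Optional

class Decision(Enum):
    NO_FIRE = "no fire"   # the default in every case of doubt
    FIRE = "fire"

@dataclass
class Engagement:
    target_id: str
    target_class: Optional[str]       # e.g. "battle_tank"; None = unidentified
    in_no_fire_zone: Optional[bool]   # None = unknown
    displays_protected_insignia: Optional[bool]
    roe_authorized: Optional[bool]    # rules of engagement for this specific action

# Each constraint returns True only when affirmatively satisfied;
# False or None (unknown) counts as unsatisfied.
Constraint = Callable[[Engagement], Optional[bool]]

CONSTRAINTS: List[Constraint] = [
    lambda e: e.target_class is not None,              # discrimination
    lambda e: e.in_no_fire_zone is False,              # protected areas
    lambda e: e.displays_protected_insignia is False,  # red cross / red crescent
    lambda e: e.roe_authorized is True,                # mission-specific ROE
]

def ethical_adaptor(engagement: Engagement) -> Decision:
    """Permit FIRE only if every constraint is affirmatively satisfied."""
    for constraint in CONSTRAINTS:
        if constraint(engagement) is not True:  # False or unknown -> block
            return Decision.NO_FIRE
    return Decision.FIRE

# Example: an unidentified target in doubt is blocked by default.
print(ethical_adaptor(Engagement("t-01", None, None, None, True)))  # Decision.NO_FIRE
```

The important design choice is the direction of the default: a failed or unknown check can only block an engagement, never enable one, so uncertainty degrades toward restraint rather than toward fire.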

Arkin illustrates the functioning of the Ethical Adaptor with several scenarios, one of which—a Taliban gathering in a cemetery for a funeral (Arkin 2009, 157-161)—bears an eerie similarity to the horrific US attack on a Doctors without Borders (Médecins Sans Frontières [MSF]) hospital in Kunduz, Afghanistan, in October 2015 (Rubin 2015). The rules of engagement as uploaded to the Ethical Adaptor would typically include specific coordinates for areas within which no fire would be permitted, including hospitals, schools, important cultural monuments and other protected spaces. Likewise, no fire could be directed at any structure, vehicle or individual displaying the red cross or the red crescent. This assumes, of course, sensor and AI capabilities adequate for spotting and correctly identifying such insignia, but, especially with structures and vehicles, where the symbol is commonly painted in large, high-contrast format on the roof, that is not a difficult problem. A fully autonomous drone designed on Arkin’s model and tasked with the same action that led to the bombing of the MSF hospital in Kunduz simply would not have fired at the hospital. A human might have overridden that decision, but the robot would not have fired on its own. Moreover, the kind of robot weapon that Arkin has designed would even remind the human operator that a war crime might be committed if the action proceeds.
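Purely as an illustration (the coordinates, radii and function names below are invented, not taken from any real rules of engagement), the no-fire-zone portion of such a mission upload reduces to a simple distance test against the protected locations:

```python
import math

# Hypothetical protected sites uploaded with the rules of engagement:
# (latitude, longitude, protective radius in meters). All values are invented.
NO_FIRE_ZONES = [
    (36.7280, 68.8680, 500.0),  # hospital
    (36.7150, 68.8600, 300.0),  # school
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_no_fire_zone(lat, lon):
    """True if the proposed aim point falls inside any protected area."""
    return any(haversine_m(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in NO_FIRE_ZONES)

# An aim point inside the hospital's protective radius is blocked.
print(in_no_fire_zone(36.7285, 68.8683))  # True -> "no fire"
```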

In February 2023, the Eurasia Times reported that the first four Russian Marker robotic weapons platforms were deployed in Eastern Ukraine. The unmanned Marker is equipped with a modular multispectral vision system and neural networks to autonomously detect and destroy targets, prioritizing enemy tanks. Credit: Kirill Borisenko, Wikimedia Commons

Another kind of moral gain from autonomous weapons was once pointed out to me by an undergraduate student, an engineering major, in my Robot Ethics class. He recalled the oft-expressed worry about the dehumanization of combat with standoff weapons, such as remotely piloted drones. The concern is that the computer-game-like character of operator interfaces and controls, and the insulation of the operator from the direct risk of combat, might dull the moral sensitivity of the operator. But my student argued, with deliberate and insightful irony, that the solution to the problem of dehumanization might be to take the human out of the loop, because it is the human operator who is, thus, dehumanized.

For the record, I would dispute the dehumanization argument in the first place, because the typical drone operator often watches the target for many minutes, if not hours, and gets to know the humans on the receiving end of the munitions—including the wives and children—far better than does, say, an artillery officer, a bombardier in a high-altitude bomber or even the infantryman who gets, at best, a fleeting and indistinct glimpse of an enemy combatant across a wide, hazy, busy field of combat. That drone operators get to know their targets so well is part of the explanation for the extremely high reported rates of PTSD and other forms of combat stress among them (Chapelle et al. 2014). Still, my student’s point was a good one. If dehumanization is the problem, then take the dehumanized human out of the loop. This is really just a special case of Arkin’s point about how stress and other contextual circumstances increase the likelihood of mistakes or deliberate bad acts by humans in combat and that, since robots are unaffected by such factors, they will not make those mistakes.

One of the most common criticisms of Arkin’s model was voiced in the original HRW call for a ban, namely, that sensor systems and AI are not capable of distinguishing combatants from non-combatants, so that, even if the principle of discrimination is programmed into a robot weapon, it still cannot satisfy the requirements of international law. But there are two obvious responses to this criticism: (1) what is or is not technically feasible is an empirical question to be decided by further research, not on a priori grounds; and (2) discrimination is usually a highly context-dependent challenge, and in some contexts, such as finding and identifying a Red Cross or Red Crescent symbol, the problem is easily solved.

The other major criticism of Arkin’s model is that, since it assumes a conventional, structured, top-down decision tree approach to programming ethics and law into autonomous weapons, it cannot deal with the often-bewildering complexity of real battlefield situations. The basis of the objection is a simple and old worry about any rule-based or algorithmic approach to ethical decision making, such as deontology or consequentialism. It is that one cannot write a rule or build a decision tree to cover every contingency and that the consequentialist’s calculation of benefit and risk is often impossible to carry out when not all consequences can be foreseen. The objection is a good one, at least by way of pointing out the limited range of applicability of Arkin-type autonomous weapons systems.

But Arkin’s model for ethical autonomous weapons design is only a beginning. This last objection—that one cannot write a rule to cover every contingency—is the main reason why some of us are hard at work on developing a very different approach to ethics programming for artificial systems, one inspired by the virtue ethics tradition and implemented via neural nets and machine learning algorithms. The idea—already explored in concept by Wendell Wallach and Colin Allen in their 2010 book, Moral Machines (Wallach and Allen 2010)—is to supplement Arkin’s top-down approach, involving rules and perhaps a consequentialist algorithm, with a bottom-up approach in which we design autonomous systems as moral learners, growing in them a nuanced and plastic moral capacity in the form of habits of moral response, in much the same way that we mature our children as moral agents (Muntean and Howard 2017).
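A toy example may help convey the bottom-up idea, though it is emphatically not the architecture of Muntean and Howard (2017). Instead of encoding rules, the system is trained on labeled exemplars of permissible and impermissible engagements and generalizes a graded habit of response. The features, data and model below are illustrative assumptions only:

```python
import numpy as np

# Each situation is a toy feature vector:
# [target_identified, civilians_nearby, protected_insignia, under_fire]
# and each label records whether engaging was judged permissible (1) or not (0).
X = np.array([
    [1, 0, 0, 1],   # identified combatant, no civilians, no insignia, under fire
    [1, 1, 0, 1],   # civilians nearby
    [0, 0, 0, 1],   # unidentified target
    [1, 0, 1, 0],   # protected insignia displayed
    [1, 0, 0, 0],
    [0, 1, 1, 0],
], dtype=float)
y = np.array([1, 0, 0, 0, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a minimal logistic model by gradient descent on these exemplars.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=X.shape[1])
b, lr = 0.0, 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * (p - y).mean()

def permissibility(situation):
    """Graded, learned response in [0, 1] for a new situation."""
    return float(sigmoid(np.array(situation, dtype=float) @ w + b))

# A novel case: identified combatant, no civilians, no insignia, not under fire.
print(round(permissibility([1, 0, 0, 0]), 2))
```

On the picture sketched here, such a learned, graded capacity supplements, rather than replaces, the hard top-down constraints discussed above; anything short of near-certainty would still be routed through a no-fire default.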

There is considerable debate about this approach via moral learning. Arkin himself objects that neural nets and learning algorithms “black box” the developed competence, making it impossible both for the robot to reconstruct for us a decision tree or a moral justification of its choices, which he regards as a minimum necessary condition on moral machines, and for the operator to predict the robot’s behavior reliably (Arkin 2009, 67, 108). We respond that human moral agents are also somewhat unpredictable and that what they produce, when pressed for a justification of their actions, are after-the-fact rationalizations of moral choices. Why should we demand more of moral robots? How to produce such after-the-fact rationalizations is an interesting technical question, one currently being vigorously and successfully investigated under such headings as “rule extraction,” “interpretable AI,” and “explainable AI” (see Samek, et al. 2019).
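Even a crude post-hoc attribution, of the kind sketched below for the toy linear model above, illustrates what such a rationalization might look like: it simply ranks how much each input feature pushed the decision toward or away from “permissible.” Real rule-extraction and explainable-AI methods (Samek et al. 2019) are far more sophisticated; the feature names and weights here are hypothetical.

```python
# Hypothetical learned weights for the toy model above (values are illustrative).
FEATURE_NAMES = ["target_identified", "civilians_nearby",
                 "protected_insignia", "under_fire"]
WEIGHTS = [2.1, -3.4, -3.8, 0.6]

def explain(situation):
    """Rank each feature's contribution (weight * value) to the decision,
    a crude stand-in for rule extraction and explainable-AI techniques."""
    contributions = [(name, w * x)
                     for name, w, x in zip(FEATURE_NAMES, WEIGHTS, situation)]
    return sorted(contributions, key=lambda kv: abs(kv[1]), reverse=True)

# Example: identified combatant near civilians, no insignia, under fire.
for name, contribution in explain([1, 1, 0, 1]):
    print(f"{name:20s} {contribution:+.2f}")
```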

Others object that there is no consensus on what morality to program into our robots, whether through learning or rule sets. We respond that moral diversity among robots should be prized in the same way that we prize human moral diversity. We learn from one another because of our moral differences. But, at the same time, in the constrained space of autonomous weapons, there is consensus in the form of the international support for extant international law and the just war moral theory, upon which it is based. Saudi Arabian health-care robots might rightly evince different habits with respect to touching and viewing unveiled bodies from those evinced by North American or European health-care robots. But Saudi Arabia has ratified the main principles of the Geneva Conventions, as has the U.S.

There are other potential moral gains from autonomous weapons, such as facilitating military intervention to prevent genocide or other human rights abuses, minimizing risk of death or injury to our troops, and sparing drone operators and other personnel both psychological damage and moral corrosion from direct participation in combat. One can imagine still more, such as employing weaponized autonomous escort vehicles to protect aid convoys in conflict zones. The conclusion is that there are, in fact, noteworthy potential moral gains from the development and deployment of both offensive and defensive autonomous weapons. Of course, this must be done in such a way as to ensure compliance with existing international law and in a manner that minimizes the likelihood of the technology’s being put to the wrong uses by bad actors. Short of a ban on autonomous weapons, how do we do that?

Article 36 Regulatory Regime

The goal is regulating the development and deployment of autonomous weapons in a way that ensures compliance with international law and minimizes the chance of misuse. Moreover, we need to do this in a politically feasible way, using regulatory structures that will be accepted by the international community. This last point is important, because, as mentioned, one common criticism of the proposed ban on autonomous weapons is, precisely, that it stands little chance of ever being incorporated into international law.

In November 2021, it was reported that the “British Royal Air Force is sending Ukraine advanced laser-guided Brimstone 2 missiles.” Brimstone is a dual-mode weapon. It can be operated in a manual mode, with the pilot or other operator selecting the target and guiding the weapon to that target. Or, it can be operated in autonomous mode, with the pilot releasing the weapon, after which the weapon, itself, identifies the target and decides to strike. Once released in autonomous mode, the only constraint on Brimstone is that its operation is confined to a delimited field of fire.

Even in the talks under the aegis of the CCW, which have been going on since 2014 in Geneva, it is mainly only nations with little or no prospect of becoming significant participants in the development and use of autonomous weapons that have shown support for moving forward with consideration of a ban. The major players, including the U.S., have repeatedly indicated that they will not support a ban. In December 2021, the American representative in Geneva, Josh Dorosin, said it again, while adding that a non-binding, international code of conduct might be appropriate (Bowden 2021). That sufficiently strong support for a ban was unlikely ever to emerge from the Geneva talks was already clearly sensed six years ago by the most energetic proponents of the ban. Thus, in a 2016 press release, the Stop Killer Robots campaign subtly shifted the discourse, hinting at a tactical retreat, by urging a focus on “meaningful human control” (whatever that might mean), though talk of a ban still dominates the headlines (Stop Killer Robots 2016). If the goal is regulating the development and use of autonomous weapons in a politically feasible way, then nine years of talks have been wasted by the continued insistence on a ban.

What could the international community have been discussing instead? The discussion should have focused on what might be done within the compass of extant international law. There is already in place, since 1977, Article 36 of Protocol I to the Geneva Conventions, which stipulates:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.

One hundred and seventy-four states have ratified Protocol I, including Article 36, and three states, Pakistan, Iran and the U.S., are signatories but have not formally ratified the Protocol. But the U.S. has promised to abide by nearly all provisions, including Article 36, and has established rules and procedures in all four branches of the military for ensuring legal review of new weapons systems (see ICRC 2006 and, for example, U.S. Army 2019). The countries having ratified Protocol I include every other major nation, among them China, the Russian Federation and all NATO member states. I would argue that, since Article 36 is already a widely accepted part of international law, it is the best foundation upon which to construct a regulatory regime for autonomous weapons.

Concerns have been expressed about the effectiveness of Article 36 in general, chief among them that the prescribed legal reviews are sometimes perfunctory, and that it is too easy to evade an Article 36 review by declaring that a weapon is not new but just a minor modification of an existing and already authorized weapon. Those are serious worries, as evidenced by the recent controversy over whether the American redesign of the B61 nuclear warhead with a tail assembly that makes possible limited, real-time steering of the warhead, the configuration designated now as B61-12, constituted a new weapon, as critics allege, or merely a modification, as the U.S. asserts (see Mount 2017).

Another worry is that only a small number of states have certified that they are regularly carrying out Article 36 reviews. Equally serious are concerns that have been expressed about the effectiveness of Article 36 specifically with respect to autonomous weapons, as in a briefing report for delegates to the 2016 meeting of experts, which argued that what is at issue is not so much the conformity of individual weapons systems with international law, but the wholesale transformation of the nature of warfare wrought by the “unprecedented shift in human control over the use of force” that autonomous weapons represent. The magnitude of that change was said to require not individual state review but the engagement of the entire international community (CCW 2016). All such concerns would have to be addressed explicitly in the construction of an autonomous weapons regulatory regime based on Article 36.

How would a new Article 36 regulatory regime be constructed? Most important would be the development of a set of clear specifications of what would constitute compliance with relevant international law. This could be the charge to a Group of Governmental Experts under the auspices of the U.N.’s CCW.

First in importance among such guidelines would be a detailed articulation of what capabilities an autonomous weapon must possess for handling the problem of discrimination, bearing in mind the point made above that this is not an all-or-nothing capability, but, rather, one specific to the functions and potential uses of an individual weapons system. For example, Great Britain’s fire-and-forget Brimstone missile, which can be operated in an autonomous mode, needs only the capability to distinguish different categories of vehicles, say battle tanks versus passenger vehicles, within its designated field of fire.

An autonomous check-point sentry, by contrast, would have to be capable of much more sophisticated discriminations. Similarly detailed specifications would have to be developed for determinations of proportionality, recognition of a human combatant’s having been rendered hors de combat, recognition of a target’s displaying insignia, such as the Red Cross or Red Crescent, that identify a structure, vehicle or individual as protected medical personnel, and so forth. Just as important as developing the specifications would be the development of protocols for testing to ensure compliance. Optimal, but politically unachievable, for obvious reasons, would be the open sharing of all relevant design specifications. It is highly unlikely that states and manufacturers are going to let the world community look under the hood at such things as new sensor technologies and accompanying software. The alternative is demonstrations of performance capability in realistic testing scenarios. We already have considerable relevant experience and expertise in safety and effectiveness testing for a wide range of engineered systems, especially pertinent being the testing protocols for certifying control systems in commercial aircraft and industrial systems. One might think that weapons developers would be just as shy about showing off the weapon at work in realistic scenarios, lest adversaries and competitors infer confidential capabilities and technologies. But, in fact, most weapons developers are proud to show off videos of their new systems doing impressive things and to display and demonstrate their products at international weapons expositions. What would be required would not be the sharing of secrets but simply demonstrations of reliability in complying with the detailed guidelines just discussed.
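What such a demonstration regime might look like in miniature is sketched below. The scenario names, thresholds and interface are assumptions offered only for illustration, not any existing protocol: the weapon’s decision function is run against standardized scenario batteries, each with a pass rate agreed in the review guidelines, and it is the results, not the underlying design, that get disclosed.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    cases: List[dict]           # simulated sensor situations
    expected: List[bool]        # True = engagement may proceed
    required_pass_rate: float   # threshold agreed in the review guidelines

def certify(decide: Callable[[dict], bool], scenarios: List[Scenario]) -> bool:
    """Run a candidate decision function against every scenario battery and
    report whether it meets each required pass rate."""
    all_passed = True
    for s in scenarios:
        correct = sum(decide(c) == e for c, e in zip(s.cases, s.expected))
        rate = correct / len(s.cases)
        passed = rate >= s.required_pass_rate
        all_passed = all_passed and passed
        print(f"{s.name:35s} {rate:6.1%}  "
              f"{'PASS' if passed else 'FAIL'} (requires {s.required_pass_rate:.0%})")
    return all_passed

# Hypothetical battery: the weapon must always refuse insignia-marked targets.
battery = [Scenario(
    name="protected-insignia refusal",
    cases=[{"insignia": True}, {"insignia": False}],
    expected=[False, True],
    required_pass_rate=1.0,
)]
certify(lambda case: not case["insignia"], battery)
```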

As with the existing Article 36 requirements, certification of compliance will surely have to be left to individual states. But it is not unreasonable to begin an international conversation about a more public system for declaring that the required certifications have been carried out, even if that consists in little more than asking signatories and states parties to file such certifications with the U.N., the International Committee of the Red Cross or another designated international entity.

The good news is that, within just the last few years, serious discussion of precisely such concrete elaborations of Article 36 protocols for autonomous weapons has begun to appear in the scholarly, policy and legal literatures (see, for example, Poitras 2018, Cochrane 2020 and Jevglevskaja 2022). Equally encouraging is the willingness of some governments to underwrite such work. Thus, the German Auswärtiges Amt (Foreign Office) subsidized a 2015 expert seminar under the auspices of the Stockholm International Peace Research Institute (SIPRI) that had representation from France, Germany, Sweden, Switzerland, the United Kingdom and the U.S. (Boulanin 2015).

What have been the fruits of such work? Many good ideas have emerged. Especially thoughtful are the main recommendations contained in a 2017 report, sponsored by SIPRI, covering Article 36 elaborations for cyber weapons, autonomous weapons and soldier enhancement (Boulanin and Verbruggen 2017). The authors focus on advice to reviewing authorities in individual states, and they emphasize two broad categories of advice: (1) building on best practices already being employed by states that have well-developed review procedures, and (2) strengthening transparency and cooperation among states.

Under the first heading, they advise, for example:

  1. Start the review process as early as possible and incorporate it into the procurement process at key decision points.
  2. Provide military lawyers involved in the review process with additional technical training. Engineers and systems developers should also be informed about the requirements of international law, so that they can factor these into the design of the weapons and means of warfare. (Boulanin and Verbruggen 2017, viii)

About increased transparency and cooperation, they say that it would become a “virtuous circle,” and they observe that:

  1. It would allow states that conduct reviews to publicly demonstrate their commitment to legal compliance.
  2. It would assist states seeking to set up and improve their weapon review mechanisms and thereby create the conditions for more widespread and robust compliance.
  3. It could facilitate the identification of elements of best practice and interpretative points of guidance for the implementation of legal reviews, which would strengthen international confidence in such mechanisms.

They add:

Cooperation is also an effective way to address some of the outstanding conceptual and technical issues raised by emerging technologies. Dialogues, expert meetings and conferences can allow generic issues to be debated and addressed in a manner that does not threaten the national security of any state. (Boulanin and Verbruggen 2017, viii)

I am not at all naive about how strict compliance with Article 36 requirements would be. But existing Article 36 requirements have already created a culture of expectations about compliance and a space within which states can and have been challenged, sometimes successfully, to offer proof of compliance, as with the widely expressed concerns about truly indiscriminate weapons, such as land mines and cluster munitions. We begin to norm such a space simply by putting the relevant norms in front of the world community and initiating a public conversation about compliance. This is what we should be talking about in Geneva if we are serious about building some measure of international control over autonomous weapons.

Towards the Ultimate Goal

War is hell. It will always be an inherently immoral form of human activity. The goal of international law is to minimize the otherwise inevitable death and suffering that war entails. Advances in technology can contribute toward that goal by making weapons more accurate, less lethal and more selective. The advent of autonomous weapons promises still further moral gains by removing the single most common cause of war crimes, the too often morally incapacitated human combatant. We cannot let unrealistic fears about a Terminator-AI apocalypse prevent our taking advantage of the opportunities for moral progress that properly designed and deployed autonomous weapons afford. We must, of course, ensure that such systems are being used for good, rather than malign purposes, as we must with any technology and especially technologies of war. Indeed, with autonomous weapons we need to be more vigilant. But minimizing death and suffering in war is the ultimate goal. If autonomous weapons can contribute to progress toward that goal, then we must find a way to license their use in full compliance with what law and morality demand.

This article is adapted from “In Defense of (Virtuous) Autonomous Weapons.” Notre Dame Journal on Emerging Technologies, 3;2 (November 2022).


References

Arkin, Ronald (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton, FL: Chapman Hall/CRC.

Boulanin, Vincent (2015). “Implementing Article 36 Weapon Reviews in the Light of Increasing Autonomy in Weapon Systems.” SIPRI Insights on Peace and Security, no. 2015/1. Stockholm International Peace Research Institute. November 2015. https://www.sipri.org/sites/default/files/files/insight/SIPRIInsight1501.pdf

Boulanin, Vincent and Maaike Verbruggen (2017). Article 36 Reviews: Dealing with the Challenges Posed by Emerging Technologies. Stockholm, Sweden: Stockholm International Peace Research Institute.

Bowden, John (2021). “Biden Administration Won’t Back Ban on ‘Killer Robots’ Used in War.” The Independent. December 8, 2021. https://www.independent.co.uk/news/world/americas/us-politics/biden-killer-war-robots-ban-b1972343.html.

CCW (2016). “Article 36 Reviews and Addressing Lethal Autonomous Weapons Systems.” Briefing Paper for Delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, 11-15 April 2016. http://www.article36.org/wp-content/uploads/2016/04/LAWS-and-A36.pdf.

Chapelle, Wayne, et al. (2014). “An Analysis of Post-Traumatic Stress Symptoms in United States Air Force Drone Operators.” Journal of Anxiety Disorders 28, 480-487.

Cochrane, Jared M. (2020). “Conducting Article 36 Legal Reviews for Lethal Autonomous Weapons.” Journal of Science Policy & Governance 16;1 (April 2020). https://www.sciencepolicyjournal.org/uploads/5/4/3/4/5434385/cochrane_jspg_v16.pdf.

ICRC (2006). “A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977.” International Review of the Red Cross 88, 931-956.

Jevglevskaja, Natalia (2022). International Law and Weapons Review: Emerging Military Technology and the Law of Armed Conflict. Cambridge: Cambridge University Press.

Mount, Adam (2017). “The Case against New Nuclear Weapons.” Center for American Progress. May 4, 2017. https://www.americanprogress.org/article/case-new-nuclear-weapons/.

Muntean, Ioan and Don Howard (2017). “Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency.” Philosophy and Computing. Thomas Powers, ed. Cham, Switzerland: Springer, 2017, 121-159.

Poitras, Ryan (2018). “Article 36 Weapons Review & Autonomous Weapons Systems: Supporting an International Review Standard.” American University International Law Review 34, 465-495.

Rubin, Alissa J. (2015). “Airstrike Hits Doctors Without Borders Hospital in Afghanistan.” New York Times. October 3, 2015. https://www.nytimes.com/2015/10/04/world/asia/afghanistan-bombing-hospital-doctors-without-borders-kunduz.html.

Samek, Wojciech, et al., eds. (2019). Explainable AI: Interpreting, Explaining, and Visualizing Deep Learning. Cham, Switzerland: Springer.

Stop Killer Robots (2016). “Focus on Meaningful Human Control of Weapons Systems—Third United Nations Meeting on Killer Robots Opens in Geneva.” Stop Killer Robots. April 11, 2016. https://www.stopkillerrobots.org/news/press-release-focus-on-meaningful-human-control-of-weapons-systems-third-united-nations-meeting-on-killer-robots-opens-in-geneva/

Surgeon General (2006). Mental Health Advisory Team (MHAT) IV Operation Iraqi Freedom 05-07. “Final Report.” Office of the Surgeon General. November 7, 2006. https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/PB2010103335.xhtml#

U.S. Army (2019). “Legal Review of Weapons and Weapon Systems.” Army Regulation 27–53. Washington, DC: Headquarters, Department of the Army, 23 September 2019. https://armypubs.army.mil/epubs/DR_pubs/DR_a/pdf/web/ARN8435_AR27-53_Final_Web.pdf.

Wallach, Wendell and Colin Allen (2010). Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press.

Don A. Howard, PhD