CONTENTS
I. INTRODUCTION ..........................................................................105
II. THE DEPENDENCE OF ARTIFICIAL INTELLIGENCE ON NUMERICALLY
QUANTIFIABLE GOALS ............................................................... 106
A. Background ......................................................................... 106
B. Objective Performance Standards ....................................... 107
C. Objective Versus Subjective AI? ........................................... 107
D. How Decisions Are Delegated to Machines......................... 110
E. Numerically Quantifiable Goals to Attain Rational Objective
Standards .............................................................................. 111
III. MILITARY APPLICATIONS OF LETHAL ARTIFICIAL INTELLIGENCE
AND THE CHALLENGE OF SUBJECTIVE IHL STANDARDS ............... 113
A. Example One: Quantifying Distinction ............................... 114
B. Example Two: Quantifying Unnecessary Suffering ........... 116
IV. CONCLUSION .............................................................................. 121
* U.S. Marine Corps; Stockton Center for the Study of International Law, U.S. Naval War College.

I. INTRODUCTION
A. Background1
1 This section summarizes relevant points from Alan L. Schuller, At the Crossroads of
Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with
International Humanitarian Law, 8 HARV. NAT’L SEC. J. 379 (2017).
There are endless ways to define, describe, and evaluate AI. In the
context of examining AI-enhanced weapon systems and whether they
can comply with the laws of war, however, we are concerned with the
effects produced by the weapon. The processes by which the system
arrives at a given outcome are of less concern. Further, holding AI to
the arguably low standards sometimes demonstrated by humans is
widely considered insufficient. As such, it should be relatively
uncontroversial that an AI-enabled weapon must be “evaluated based
upon how well it performs to rational and objective standards.”3
Setting a rational standard means we must articulate an ideal standard
against which the system's performance can be measured. In this
context, an ideal standard would fall somewhere between human
performance and perfection. An objective goal indicates measurable,
fact-based standards, as opposed to subjective personal opinion or
judgment. In the context of an AI-enhanced weapon, however, one
must pause to consider more carefully the difference between these
modes of analysis.
2 Id. at 416.
3 Id. at 401.
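By way of illustration only, consider the following sketch, written in Python with invented names and figures, of what such a rational and objective standard might look like once reduced to a measurable benchmark: a numeric threshold set above a measured human baseline but below perfection.

```python
# Illustrative sketch only: the function name, the baseline, and the
# threshold are assumptions invented for this example, not figures drawn
# from any fielded system or legal instrument.

def meets_objective_standard(system_accuracy: float,
                             human_baseline: float = 0.92,
                             required_accuracy: float = 0.99) -> bool:
    """A rational, objective standard reduced to a measurable benchmark.

    The threshold is deliberately set above the measured human baseline
    but below perfection (1.0), reflecting the idea that the ideal
    standard lies somewhere between human performance and a flawless
    system.
    """
    assert human_baseline < required_accuracy < 1.0
    return system_accuracy >= required_accuracy


# A candidate system that correctly classifies lawful targets in 97% of
# test scenarios fails this (hypothetical) standard; one at 99.5% passes.
print(meets_objective_standard(0.97))   # False
print(meets_objective_standard(0.995))  # True
```

The particular numbers are beside the point; what matters is that, once a standard is expressed in this form, compliance becomes a question of measurement rather than of opinion.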
4 Legal reviews of new weapons are conducted as a function of customary international law
5 See Moral Machine, MASS. INST. OF TECH., http://moralmachine.mit.edu [perma.cc/3SB8-TBXT] (last visited June 21, 2018).
6 API, supra note 4, at art. 51(5)(b); see also JEAN-MARIE HENCKAERTS & LOUISE DOSWALD-BECK, CUSTOMARY INTERNATIONAL HUMANITARIAN LAW (2005).
9 See Interview with Machine Learning Experts, OpenAI, San Francisco, CA (Jan. 26,
2018).
10 It is also worth mentioning that all of the hypotheticals in this paper relate to lethal
weapon systems. Without question, AI presents opportunities to ensure our national
security in non-lethal contexts such as intelligence analysis and information operations,
but those contexts are beyond the scope of this paper.
11 Jack Nicas, How YouTube Drives People to the Internet’s Darkest Corners, WALL ST. J.
12 API, supra note 4, at art. 51(5)(b) (attacks on targets that would produce a “concrete and
direct military advantage,” and are not otherwise unlawful, are not prohibited); API, supra
note 4, at art. 52(2) (targets are persons and objects “which by their nature, location,
purpose, or use make an effective contribution to military action” and whose destruction or
neutralization “offers a definite military advantage.”).
13 Id. at art. 52(2) (“Attacks shall be limited strictly to military objectives.”); HENCKAERTS & DOSWALD-BECK, supra note 6.
14 API, supra note 4, at art. 51(5)(b); HENCKAERTS & DOSWALD-BECK, supra note 6, at R.14.
15 API, supra note 4, at art. 35(2)–(3); Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J. 226, ¶ 78 (July 8); HENCKAERTS & DOSWALD-BECK, supra note 6, at R.70.
16 API, supra note 4, at art. 52(2) (“Attacks shall be limited strictly to military objectives.”);
17 API, supra note 4, at art. 57; HENCKAERTS & DOSWALD-BECK, supra note 6, at R.15–21.
In the land context, the same principle might theoretically apply, but
current technology appears unsuited to making such fine-grained
distinctions on its own. The challenge of AI applying the concept of
unnecessary suffering is different in kind, however, because the
principle is less amenable to numerically quantifiable rational goals.
The prohibition against causing unnecessary suffering makes
unlawful the use in armed conflict of weapons that by their nature
cause unnecessary suffering and the use of lawful weapons in a
manner that is intended to cause unnecessary suffering.18
Importantly, there is no simple objective test to determine whether
the use of a weapon would constitute unnecessary suffering.19 The first
aspect of the principle is addressed during legal review of proposed
weapons and is contextually specific to the system being evaluated.
The contours of specific prohibitions established by customary law
and treaty are beyond the scope of this article. As such, this discussion
will generally focus on the second prong regarding employment of
weapons already deemed not per se unlawful.
In theory, the goal of preventing unnecessary suffering is a noble
one. Most would agree in principle that no more suffering should be
caused to combatants than is necessary to obtain military victory. In
application, however, the principle is riddled with subjective
judgment. Consider two simple examples. First, if a soldier bayonets
an enemy, does she cause unnecessary suffering if she twists and turns
the bayonet after stabbing the enemy? Some might argue that the
additional pain caused by twisting the weapon is simply unnecessary
as the victim has already been wounded and is most likely out of the
fight. On the other hand, the soldier is permitted under the laws of
war to continue attacking until the enemy is dead, assuming the
enemy does not surrender or is not clearly hors de combat. The action
of twisting the blade will help ensure a fatal wound to the enemy, and,
as such, she is arguably well within the law to twist and turn and stab
again until the enemy is dead. By way of a second example, consider
the use of an artillery barrage of high explosive rounds mixed with
white phosphorus shells, commonly referred to as an HE/WP fire
mission. Some would argue that the combination of these rounds
creates unnecessary suffering because the white phosphorus causes
horribly painful burns on enemy soldiers who are exposed to the
18 API, supra note 4, at art. 35(2)–(3); Advisory Opinion, supra note 15, at ¶ 78;
19 GARY D. SOLIS, THE LAW OF ARMED CONFLICT 271–72 (Cambridge Univ. Press 2010).
barrage, and that the soldiers could be killed more humanely by using
only HE rounds or other less painful weapons. On the other hand, the
employment of HE/WP is highly effective at destroying enemy fuel
depots because after the HE rounds pierce fuel containers, the WP
rounds light the exposed fuel on fire and destroy the target far more
efficiently than one type of round alone. The suffering caused to
attendant enemy soldiers is arguably an unfortunate byproduct of a
completely lawful attack. In sum, when one of the core functions of
the military is to kill human beings during armed conflict, any
discussion about what suffering is unnecessary is fraught with
subjectivity.
These are merely two very simple examples that have plausible
arguments on both sides. We might spend hours debating them
without arriving at consensus regarding whether they violate the
principle of preventing unnecessary suffering. The standard is
inherently subjective and in application nearly impossible to divorce
from one’s biases, be they cultural, institutional, or otherwise. As
such, application of this principle may prove highly problematic for an
AI because the principle is less amenable to description in numerically
quantifiable terms. In other words, it is quite difficult to approximate
using rational objective goals that are defined by numerical standards.
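A rough sketch, again in Python and again using invented names and scales, may help illustrate the asymmetry: a proportionality-style comparison can at least be crudely approximated once its inputs are estimated numerically, whereas the unnecessary suffering inquiry supplies no agreed scale, weighting, or threshold to encode.

```python
# Illustrative sketch only. The inputs, scales, and the crude comparison
# below are assumptions made for purposes of contrast; they are not offered
# as an accurate statement of how proportionality is actually assessed.

def proportionality_screen(expected_civilian_harm: float,
                           anticipated_military_advantage: float) -> bool:
    """Crudely approximable: both inputs can in principle be estimated
    numerically (e.g., collateral damage estimates, target-value scores),
    so the rule can be reduced, however imperfectly, to a comparison."""
    return expected_civilian_harm <= anticipated_military_advantage


def unnecessary_suffering_screen(weapon_effects: dict) -> bool:
    """Not approximable without subjective judgment: the law supplies no
    agreed unit of 'suffering,' no exchange rate between suffering and
    military necessity, and no threshold at which suffering becomes
    'unnecessary.' Any numbers inserted here would simply encode the
    programmer's own cultural or institutional biases."""
    raise NotImplementedError("no objective, numerically quantifiable test exists")
```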
Since machines are not capable of forming intent, a more detailed
inquiry into the matter starts with how AI systems could be deployed
by humans in a manner intended to cause unnecessary suffering. The
evaluation would hinge on parsing out that suffering which is a
byproduct of defeating the enemy from that suffering which is
excessive, and thus, unnecessary to secure victory. One could of
course conjure up AI systems that would violate this principle.
Suppose that a military developed a “Pain Bot” that was designed to
learn how to kill enemy soldiers as slowly as possible without allowing
itself to be captured or destroyed. Of course, this would be per se
illegal, and it would not be created by any country that was dedicated
to the rule of law and IHL principles. Such a sophomoric example
aside, the analysis becomes complex.
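To make that point concrete, consider the following sketch, in Python with invented field names and weights: the machine forms no intent of its own, but the objective a human specifies for it can embed one, and the difference between a lawful objective and the Pain Bot's may be as small as the sign on a single term.

```python
from dataclasses import dataclass

# Illustrative sketch only: the Engagement fields, the 0.1 weight, and both
# objective functions are invented for this example.

@dataclass
class Engagement:
    target_neutralized: bool       # did the engagement defeat the target?
    time_to_incapacitation: float  # seconds from first effect to incapacitation

def lawful_objective(e: Engagement) -> float:
    # Rewards defeating the target and penalizes prolonging the engagement;
    # any suffering is treated as a byproduct, not a goal.
    return float(e.target_neutralized) - 0.1 * e.time_to_incapacitation

def pain_bot_objective(e: Engagement) -> float:
    # Flipping one sign rewards prolonging incapacitation. The unlawful
    # "intent" lives in this human-chosen term, not in the machine that
    # optimizes it.
    return float(e.target_neutralized) + 0.1 * e.time_to_incapacitation
```

And as the literature on faulty reward functions cautions, even an objective specified in good faith can produce behavior its designers never intended.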
Suppose instead that a country developed a robot for deployment
in urban combat called “Surrender Bot.” It is equipped with a high-
power laser that can cut through an enemy soldier’s body armor. The
country intends to deploy the robot in close quarters as the first
system to enter enemy-held buildings. One of its objectives in doing
so is to kill as few enemy soldiers as necessary to achieve
the most efficient military victory possible, thus encouraging post-
conflict reconciliation. Most notably, the robot is equipped with AI
20 Other options could certainly include non-lethal means of incapacitating the enemy but
21 See Protocol on Blinding Laser Weapons, opened for signature Oct. 13, 1995, 1380
22 See Jack Clark & Dario Amodei, Faulty Reward Functions in the Wild, OPENAI (Dec. 21,
2016), https://blog.openai.com/faulty-reward-functions/ (“Reinforcement learning
algorithms can break in surprising, counterintuitive ways.”) [http://perma.cc/4T3L-P8J3].
IV. CONCLUSION
23 Proportionality under IHL means that the anticipated loss of civilian life and damage to
property incidental to attacks must not be “excessive in relation to the concrete and direct
military advantage anticipated.” API, supra note 4, at art. 51(5)(b); HENCKAERTS &
DOSWALD-BECK, supra note 6, at R.14.