RETHINKING MEANINGFUL HUMAN CONTROL
SHIN-SHIN HUA*
ABSTRACT
AI’s potential to revolutionize warfare has been compared to the advent of the
nuclear bomb. Machine learning technology, in particular, is paving the way for
future automation of life-or-death decisions in armed conflict.
But because these systems are constantly “learning,” it is difficult to predict
what they will do or understand why they do it. Many therefore argue that they
should be prohibited under international humanitarian law (IHL) because
they cannot be subject to meaningful human control.
Yet in a machine learning paradigm, human control may become unnecessary
or even detrimental to IHL compliance. To leverage the potential of this
technology to minimize casualties in conflict, unthinking adherence to the
principle of “the more control, the better” should be abandoned.
Instead, this Article seeks to define prophylactic measures that ensure
machine learning weapons can comply with IHL rules. Further, it explains
how the unique capabilities of machine learning weapons can facilitate a more
robust application of the fundamental IHL principle of military necessity.
I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
II. OVERVIEW OF THE TECHNOLOGY . . . . . . . . . . . . . . . . . . . . . . . . 121
A. Defining Autonomous Weapons Systems . . . . . . . . . . . . . . . 121
1. Human-Machine Interactions . . . . . . . . . . . . . . . . 122
2. The Task Performed . . . . . . . . . . . . . . . . . . . . . . . 122
B. Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
1. Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2. Reinforcement Learning . . . . . . . . . . . . . . . . . . . . 125
3. Legally Relevant Attributes of Machine Learning
Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
I. INTRODUCTION
Machine learning is the buzzword of our age. Instead of relying on
pre-programming, these systems can “learn” how to do a task through
training, use, and user feedback.1 Having revolutionized fields from
medicine to finance, machine learning is propelling a new artificial
intelligence (AI) arms race among the world’s major military powers to
deploy these technologies in warfare.2 Indeed, the rise of military AI
has been compared to the advent of the nuclear bomb.3
1. STEPHAN DE SPIEGELEIRE, MATTHIJS MAAS & TIM SWEIJS, ARTIFICIAL INTELLIGENCE AND THE
FUTURE OF DEFENSE: STRATEGIC IMPLICATIONS FOR SMALL- AND MEDIUM-SIZED FORCE PROVIDERS 35–
39 (2017).
2. America v China-The Battle for Digital Supremacy, THE ECONOMIST (Mar. 15, 2018), https://
www.economist.com/leaders/2018/03/15/the-battle-for-digital-supremacy; Karla Lant, China,
Russia and the US Are in an Artificial Intelligence Arms Race, FUTURISM (Sept. 12, 2017), https://
futurism.com/china-russia-and-the-us-are-in-an-artificial-intelligence-arms-race.
3. Tom Simonite, AI Could Revolutionize War as Much as Nukes, WIRED (July 19, 2017), https://
www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes/.
13. Id. But see infra Section III.D.3.d (discussing Schuller’s alternative theory).
14. See infra Section III.D (discussing the doctrine of “meaningful human control”). But see
Ashley Deeks, Noam Lubell & Daragh Murray, Machine Learning, Artificial Intelligence, and the Use of
Force by States, 10 J. NAT’L SECURITY L. & POL’Y 1 (2019); Matthias, supra note 11; Alan L. Schuller,
At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with
International Humanitarian Law, 8 HARV. NAT’L SECURITY J. 379 (2017).
15. But see infra Section III.D.3.d (discussing Schuller’s approach).
16. Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the
Protection of Victims of International Armed Conflicts (Protocol I), art. 57, June 8 1977, 1125
U.N.T.S. 3 (1977), https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/7c4d08d9b287a42141256739003
e636b/f6c8b9fee14a77fdc125641e0052b079 [hereinafter Additional Protocol I].
17. HEATHER M. ROFF & RICHARD MOYES, “MEANINGFUL HUMAN CONTROL, ARTIFICIAL
INTELLIGENCE AND AUTONOMOUS WEAPONS”: BRIEFING PAPER PREPARED FOR THE INFORMAL MEETING
OF EXPERTS ON LETHAL AUTONOMOUS WEAPONS SYSTEMS, 4–5 (April 2016), http://www.article36.
org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf.
18. Peter Margulies, Making Autonomous Weapons Accountable: Command Responsibility for
Computer-Lethal Force in Armed Conflicts, in RESEARCH HANDBOOK ON REMOTE WARFARE 405–42, 433–
34 (Jens David Ohlin ed., 2017).
force, notably in the targeting cycle.”22 This definition has two key elements: (1) the balance of human-machine control (“partial or full
replacement of a human”); and (2) the function being carried out by
the AWS (“tasks governed by IHL . . . notably in the targeting cycle”).
These two aspects are defined in further detail below.
1. Human-Machine Interactions
Autonomous systems can be categorized according to the distribution of control between human and machine.
B. Machine Learning
A machine’s control system governs its decision-making process.
Control systems can be categorized based on their capacity to govern
their own behavior and deal with environmental uncertainties.32
Automatic systems, for example, rely on a series of pre-programmed
“if-then” rules that prescribe how the system should react to a given situation.
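To make the contrast concrete, the sketch below shows what such a pre-programmed controller might look like in code. It is a minimal, hypothetical illustration in Python: the sensor fields, thresholds, and the “engage”/“hold” actions are invented for the example and do not describe any real weapon system.

```python
# Minimal sketch of an "automatic" (rule-based) controller.
# All thresholds and sensor fields are hypothetical, for illustration only.

def automatic_controller(sensor_reading: dict) -> str:
    """Applies fixed, pre-programmed if-then rules to a sensor reading."""
    if sensor_reading["object_class"] != "military_vehicle":
        return "hold"      # rule 1: only engage objects classified as military vehicles
    if sensor_reading["civilians_nearby"] > 0:
        return "hold"      # rule 2: never engage when civilians are detected nearby
    if sensor_reading["confidence"] < 0.95:
        return "hold"      # rule 3: require high classification confidence
    return "engage"        # every rule satisfied

# Example use: the behavior is fully determined by the rules above, so it can be
# audited line by line, but it cannot adapt to situations the rules did not anticipate.
print(automatic_controller(
    {"object_class": "military_vehicle", "civilians_nearby": 0, "confidence": 0.97}
))
```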
1. Deep Learning
Deep learning is a type of representation learning, a method that denotes systems that can “learn how to learn.”38 These systems can work from raw data, extracting representations (features) that are useful to their specific machine learning tasks.39 They do this through deep neural networks, which are networks of hardware and software inspired by the human brain.40
The key advantage of deep learning compared to older types of machine learning is that it does not require manual feature engineering, which involves the refinement of each raw dataset before it can be used for learning.
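As an illustration of this difference, the sketch below defines a small deep neural network that maps raw pixel values directly to an output, with no hand-crafted features. It is a generic example using the PyTorch library; the layer sizes and the task (distinguishing two arbitrary image classes) are assumptions made for the illustration, not a description of any system discussed in this Article.

```python
# Minimal deep learning sketch: the network consumes raw pixels and learns
# its own internal representations (features) during training.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learns low-level features from raw pixels
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # learns higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 2),                    # two output classes (arbitrary choice)
)

# One illustrative training step on random "images"; no manual feature engineering occurs.
images = torch.randn(32, 1, 28, 28)              # batch of raw 28x28 inputs
labels = torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                                  # gradients adjust the network's internal weights
```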
2. Reinforcement Learning
Reinforcement learning technology merges the training and application phases of a machine learning system, which are distinct in traditional neural networks. A reinforcement learning system trains within its operating
environment by pursuing various alternative action routes in a trial-
and-error fashion, using the results to continuously hone its own
parameters.44 A machine that can learn “on the job” is far better at
adapting to uncertain surroundings.45
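A toy example of this trial-and-error loop is sketched below using tabular Q-learning in a tiny grid world. The environment, rewards, and parameters are invented for illustration; the point is only that the system refines its own action values from the outcomes of its actions rather than from pre-programmed rules.

```python
# Tiny reinforcement learning sketch: tabular Q-learning in a 5-cell corridor.
# The agent starts at cell 0 and is rewarded for reaching cell 4.
import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != 4:                    # episode ends at the goal cell
        # Explore occasionally, otherwise exploit current value estimates.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == 4 else 0.0
        # Trial-and-error update: hone the value estimate from the observed outcome.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states)})  # learned policy
```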
A recent example of reinforcement learning is AlphaGo Zero, a sys-
tem designed by the AI company DeepMind. AlphaGo Zero was trained
to play Go, a game considered far more difficult than chess for
machines to master due to the enormous number of possible moves.46
While its predecessor AlphaGo first trained on thousands of human
amateur and professional games, AlphaGo Zero was able to skip this
step and learn simply by playing games against itself. In doing so, it
swiftly and dramatically exceeded human playing capabilities.47
AlphaGo Zero demonstrated the great potential of reinforcement
learning for use in future AWSs. First, reinforcement learning has the
potential to greatly surpass human abilities in carrying out the kind of
complex problem-solving required to wage war.48 Second, reinforcement learning systems can generate novel solutions unconstrained by human preconceptions.
41. Id.
42. Matthias, supra note 11, at 179.
43. Mittelstadt et al., supra note 10, at 6.
44. Matthias, supra note 11, at 179.
45. Id.
46. David Silver et al., AlphaZero: Shedding new light on chess, shogi, and Go, DEEPMIND BLOG (Dec.
6, 2018), https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-
shogi-and-go, (last visited Aug. 19, 2019).
47. David Silver & Demis Hassabis, AlphaGo Zero: Starting from Scratch, DEEPMIND BLOG (Oct. 18,
2017), https://deepmind.com/blog/alphago-zero-learning-scratch/.
48. See Elsa B. Kania, Quest for an AI Revolution in Warfare, THE STRATEGY BRIDGE (June 8, 2017),
https://thestrategybridge.org/the-bridge/2017/6/8/-chinas-quest-for-an-ai-revolution-in-warfare.
56. See, e.g., Pat Host, Deep Learning Analytics Develops DARPA Deep Machine Learning Prototype,
DEFENSE DAILY (Nov. 5, 2016), https://www.defensedaily.com/deep-learning-analytics-develops-
darpa-deep-machine-learning-prototype/advanced-transformational-technology/.
57. SPIEGELEIRE, MAAS, & SWEIJS, supra note 1, at 88–89.
58. BOULANIN & VERBRUGGEN, supra note 25, at 25–26.
59. Schuller, supra note 14, at 410; see also BOULANIN & VERBRUGGEN, supra note 25, at 65–82.
60. See supra Section I.
61. PAUL SCHARRE, AUTONOMOUS WEAPONS AND OPERATIONAL RISK 12 (Feb. 2016), https://s3.
amazonaws.com/files.cnas.org/documents/CNAS_Autonomous-weapons-operational-risk.pdf?
mtime=20160906080515.
62. INT’L COMM. OF THE RED CROSS, AUTONOMOUS WEAPON SYSTEMS: IMPLICATIONS OF
INCREASING AUTONOMY IN THE CRITICAL FUNCTIONS OF WEAPONS 13 (2016), https://www.icrc.org/
en/publication/4283-autonomous-weapons-systems.
63. MICHAEL HOROWITZ, PAUL SCHARRE & CENTER FOR A NEW AMERICAN SECURITY, MEANINGFUL
HUMAN CONTROL IN WEAPON SYSTEMS: A PRIMER 7–8 (2015), http://www.cnas.org/sites/default/
files/publications-pdf/Ethical_Autonomy_Working_Paper_031315.pdf.
64. ICL is just one way in which IHL is enforced. Generally, ICL prosecutions are reserved for
the “most serious crimes of concern to the international community.” Rome Statute of the
International Criminal Court preamble, art. 25(2), July 17, 1998, 2187 U.N.T.S. 90 (entered into
force July 1, 2002). Therefore, not all violations of IHL automatically constitute international
crimes.
65. See, e.g., id. art. 30 (explaining that (1) in order to establish a crime under the Rome
Statute, the requisite mens rea must be present and (2) the general mens rea standard, short of
intent, is knowledge i.e., “awareness that a circumstance exists or a consequence will occur in the ordinary
course of events”).
66. See Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J. 226,
261, 493 (July 8) (dissenting opinion of Weeramantry J.).
67. Id.
68. See, e.g., Hague Convention (IV) Respecting the Laws and Customs of War on Land and Its
Annex: Regulations Concerning the Laws and Customs of War on Land, preamble, 26 Stat. 2277
(“the desire to diminish the evils of war, as far as military requirements permit”).
69. See, e.g., the principle of distinction as set out in Article 48 and 52 of Additional Protocol I,
supra note 16 (defining who is a combatant and a military object that may be permissibly attacked
under IHL); see also the principle of proportionality as embodied inter alia in Article 51(5)(b),
57(2)(a)(iii) and 57(2)(b) of Additional Protocol I, supra note 16.
70. Id. art. 48, art. 52.
71. Additional Protocol I, supra note 16, art. 57(1); JEAN-MARIE HENCKAERTS & LOUISE
DOSWALD-BECK, INT’L COMM. OF THE RED CROSS, CUSTOMARY INTERNATIONAL HUMANITARIAN LAW:
VOLUME I: RULES 51 (2005); CLAUDE PILLOUD ET AL., INT’L COMM. OF THE RED CROSS,
COMMENTARY ON THE ADDITIONAL PROTOCOLS OF 8 JUNE 1977 TO THE GENEVA CONVENTIONS OF 12
AUGUST 1949 ¶ 2191 (1987)
72. Additional Protocol I, supra note 16, art. 57(1).
73. THEO BOUTRUCHE, EXPERT OPINION ON THE MEANING AND SCOPE OF FEASIBLE PRECAUTIONS
UNDER INTERNATIONAL HUMANITARIAN LAW AND RELATED ASSESSMENT OF THE CONDUCT OF THE
PARTIES TO THE GAZA CONFLICT IN THE CONTEXT OF THE OPERATION “PROTECTIVE EDGE” 8 (2015),
https://www.diakonia.se/en/IHL/News-List/eo-on-protective-edge/.
74. TERRY GILL ET AL., ILA STUDY GROUP ’THE CONDUCT OF HOSTILITIES AND INTERNATIONAL
HUMANITARIAN LAW: CHALLENGES OF 21ST CENTURY WARFARE’ - INTERIM REPORT 15 (2014),
https://pure.uva.nl/ws/files/2346971/157905_443635.pdf.
75. COMMENTARY ON THE ADDITIONAL PROTOCOLS OF 8 JUNE 1977 TO THE GENEVA CONVENTIONS
OF 12 AUGUST 1949, supra note 71, ¶ 2191.
76. PROGRAM ON HUMANITARIAN POLICY AND CONFLICT RESOLUTION, COMMENTARY TO THE
HPCR MANUAL ON INTERNATIONAL LAW APPLICABLE TO AIR AND MISSILE WARFARE 124–125 (2010).
77. Int’l Crim. Trib. for the Former Yugoslavia, Final Report to the Prosecutor by the
Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of
Yugoslavia, 39 I.L.M. 1257 (June 13, 2000).
78. Id. ¶ 29.
79. Michael N. Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply
to the Critics, HARV. NAT’L SECURITY J. FEATURES 1, 20 (2013); Markus Wagner, The Dehumanization
of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapon
Systems, 47 VAND. J. TRANSNAT’L L. 1371, 1397 (2014).
80. See, e.g., CCW Protocol (III) on Prohibitions or Restrictions on the Use of Incendiary
Weapons, art. 1(5), Oct. 10, 1980, 1342 U.N.T.S. 71 (entered into force Dec. 2, 1983).
81. TALLINN MANUAL 2.0 ON THE INTERNATIONAL LAW APPLICABLE TO CYBER OPERATIONS
(Michael N. Schmitt ed., 2d ed. 2017) [hereinafter TALLINN MANUAL 2.0].
82. TALLINN MANUAL ON THE INTERNATIONAL LAW APPLICABLE TO CYBER WARFARE (Michael
Schmitt ed., 2013).
83. TALLINN MANUAL 2.0, supra note 81, at 1–12.
89. HOROWITZ, SCHARRE, & CENTER FOR A NEW AMERICAN SECURITY, supra note 63, at 7.
90. Nehal Bhuta, Susanne Beck & Robin Geiss, Present Futures: Concluding Reflections and Open
Questions on Autonomous Weapons Systems, in AUTONOMOUS WEAPONS SYSTEMS LAW, ETHICS, POLICY
347, 375 (Nehal Bhuta et al. eds., 2016).
91. David Akerson, The Illegality of Offensive Lethal Autonomy, in INTERNATIONAL HUMANITARIAN
LAW AND THE CHANGING TECHNOLOGY OF WAR 65, 87 (Dan Saxon ed., 2013).
92. Volume 1: Basic Doctrine, Levels of War, CURTIS E. LEMAY CENTER, https://www.doctrine.af.
mil/Portals/61/documents/Volume_1/V1-D34-Levels-of-War.pdf (last visited Feb. 15, 2019).
93. Id.
94. Id.
95. ROFF & MOYES, supra note 17 at 4–5.
96. Id. at 5.
Roff and Moyes’s concern is that applying MHC only at the higher levels of warfare (i.e., at the strategic and/or operational levels) could progressively dilute the quality of the legal and operational judgments reached.97 This objection rests on the idea that greater physical distance between decision-makers and the battlefield could lead to poorer contextual awareness, for example of the geographic space and time in which the AWS would be used.98 This lack of proximity between decision-makers and the battlefield reality could reach a point where “[the] ability to predict outcomes becomes either non-existent or minimal.”99
a. Big Data
Big data is a key component of military decision-making today.100 Intelligence data can inform each of the steps in the targeting cycle, which typically consist of: (1) setting objectives; (2) developing and prioritizing targets; (3) analyzing capabilities; (4) assigning forces; (5) mission planning and execution; and (6) assessment.101 Each of these steps contains its own feedback loop and attendant time lags, which are exacerbated by the need to process intelligence data.102 Indeed, the amount of data available to inform targeting decisions can overwhelm human analysts.103
But where functions in the targeting cycle are delegated to machine learning systems, legal and operational judgments could improve as they are continuously and seamlessly updated according to realities on the ground. This is because learning systems, by their nature, use the knowledge gained through experience to automatically improve the performance of the system by changing its structure, program, or data.
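A minimal sketch of this kind of continuous updating is given below, using scikit-learn’s incremental (partial_fit) interface so that a model’s parameters are revised each time a new batch of observations arrives. The features and labels are synthetic placeholders; they stand in generically for incoming intelligence data and are not drawn from any real targeting system.

```python
# Sketch of incremental ("online") learning: the model updates its parameters
# with each new batch of data instead of being retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

classes = np.array([0, 1])               # placeholder binary labels
for batch in range(10):                  # each batch stands in for newly collected data
    X = rng.normal(size=(50, 4))         # 4 synthetic features per observation
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes)   # parameters are revised in place
    print(f"after batch {batch}: training accuracy = {model.score(X, y):.2f}")
```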
97. Id.
98. Id.
99. Id.
100. Kimberly Trapp, Great Resources Mean Great Responsibility: A Framework of Analysis for
Assessing Compliance with API Obligations in the Information Age, in INT’L HUMANITARIAN LAW AND THE
CHANGING TECH. OF WAR 159–60 (Dan Saxon ed., 2013).
101. SPIEGELEIRE, MAAS & SWEIJS, supra note 1, at 89.
102. Id.
103. Id.
b. Inscrutability
Even if a Learning AWS operates on relatively limited or sparse data,110 the processes of the most advanced machine learning technologies, such as deep learning and reinforcement learning, might be inscrutable to a human.111
A recent example of such a “black box” AI is NVIDIA’s self-learning and self-driving car.112 The car did not require a single instruction provided by an engineer or programmer.113 It relied instead on a deep learning algorithm that had taught itself to drive by observing human driving behavior.114 The problem with this self-taught driving ability is that it is not entirely clear how the car makes its decisions.115 The system is so complicated that even the engineers who designed it might be unable to identify the reason for any single action, and there is currently no clear way to give the system the ability to explain why it did what it did in every case.116
With automatic weapons systems that follow more basic, “if-then” rules, irregularities in the decision-making process are easier to spot. These could give prior warning that an erroneous targeting decision was about to be made, at which point the human supervisor could override the system. The opacity of machine learning techniques, on the other hand, could make it impossible for a human supervisor to identify process irregularities and pre-empt a malfunctioning targeting decision.
The U.S. Department of Defense (DoD) has identified this “dark secret at the heart of AI” as a key stumbling block in the military use of learning machines.117 The DoD has even initiated an Explainable Artificial Intelligence Program that is developing ways for machine learning systems to provide a rationale for their outputs.118 However, these rationales have severe drawbacks. First, they will generally be simplified, meaning that vital information might be lost in transmission.119 And
110. See, e.g., John Keller, DARPA TRACE program Using Advanced Algorithms, MILITARY
AEROSPACE (July 24, 2015) (discussing DARPA’s ATR system), https://www.militaryaerospace.
com/articles/2015/07/hpec-radar-target-recognition.html.
111. Mittelstadt et al., supra note 10, at 4, 6.
112. Knight, supra note 52.
113. Id.
114. Id.
115. Id.
116. Id.
117. Id.
118. Id.
119. Mittelstadt et al., supra note 10, at 4.
they might take time both to put together and to understand.120 On the
battlefield, these seconds could matter.
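To give a sense of what such machine-generated rationales look like, and why they are simplifications, the sketch below computes an input-gradient “saliency” explanation for a small neural network: the magnitude of the gradient of the output with respect to each input indicates roughly how much that input influenced the decision. This is a generic post-hoc technique illustrated with PyTorch on random data; it is not the method used by the DoD program described above, and the resulting scores compress the network’s full reasoning into a handful of numbers.

```python
# Sketch of a post-hoc "rationale": input-gradient saliency for a small network.
# The saliency scores are a simplified summary of how the model used its inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 6, requires_grad=True)   # one input with six generic features
score = model(x).sum()                      # the model's raw output for this input
score.backward()                            # gradients flow back to the input features

saliency = x.grad.abs().squeeze()           # larger magnitude = more influence on the output
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: influence = {s:.3f}")
```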
Any future Learning AWS will likely make targeting decisions based
on big data in time-critical situations, following inscrutable decision-
making processes. In these circumstances, human supervision or input
becomes practically meaningless. But despite being inscrutable and
ungovernable by human operators during their operation, these
Learning AWSs might still be capable of better IHL compliance than a
human-controlled AWS. This might be due, for example, to their ability
to process and analyze much larger quantities of data more accurately
and swiftly than any human. They should not be prohibited simply
because they cannot be meaningfully supervised by a human operator.
Instead, we should consider why human supervision is necessary in the
first place.
Article will use it to assess the minimum level of human intervention that is required under IHL
rules on precautions.
123. Id. at 433.
124. Id. at 433–34.
125. Id.
126. Id.
127. Id. at 434.
128. Additional Protocol I, supra note 16, art. 57.
129. Margulies, supra note 18, at 434 (emphasis in original).
130. Id.
Although this is a step forward from Roff and Moyes, Margulies still
does not go far enough. For more basic weapons that employ reactive
systems following simple “if-then” rules, it may be logical to require the
possibility of human intervention. Their processes are transparent and
their behaviors are predictable.131 However, with future Learning
AWSs, optimal IHL outcomes may require that there be no possibility
for human override, contrary to Margulies’s standard.
Take the example of reinforcement learning technology again.132
AlphaGo Zero was more powerful than previous versions of AlphaGo
because by training against itself it was unconstrained by “the preconceived notions, rules of thumb, and conventional wisdom upon which most human decision-makers rely.”133 By training purely against a
machine, AlphaGo Zero was able to discover unconventional strategies
and imaginative new moves.134 Similarly, any future Learning AWS that
uses reinforcement learning could make targeting decisions “that
humans may not have considered, or that they considered and rejected
in favor of more intuitively appealing options.”135
Consider a hypothetical in which a future Learning AWS plans to strike target X, leading to ten civilian casualties. If the Learning AWS includes the possibility of human override as advocated by Margulies, a human supervisor would most likely override the Learning AWS and opt instead for target Y, which might be more intuitively “lawful” under IHL but in fact leads to more than ten civilian casualties. Target Y might be a more appealing choice to a human supervisor because, for example, he or she assumes that the factual scenario fits a recurring factual pattern when in fact some of the variables are different. In this scenario, it seems clear that the duty to take constant care would actually require the Learning AWS to be insulated from human control, because this is the best way to ensure that “the civilian population, civilians and civilian objects” are spared.136
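The arithmetic of the hypothetical can be made explicit with a short sketch. The casualty figures below are the invented numbers from the example above (ten for target X, more than ten for target Y), and the “intuitive appeal” score is a hypothetical stand-in for the supervisor’s pattern-matching; the sketch simply shows that overriding on intuition rather than on the estimated outcome produces the worse result under the constant-care objective.

```python
# Worked sketch of the target X / target Y hypothetical.
# All numbers are invented for illustration only.
targets = {
    "X": {"estimated_civilian_casualties": 10, "intuitive_appeal": 0.3},  # the AWS's choice
    "Y": {"estimated_civilian_casualties": 14, "intuitive_appeal": 0.9},  # the human's intuitive choice
}

aws_choice = min(targets, key=lambda t: targets[t]["estimated_civilian_casualties"])
human_choice = max(targets, key=lambda t: targets[t]["intuitive_appeal"])

print(f"Learning AWS selects target {aws_choice} "
      f"({targets[aws_choice]['estimated_civilian_casualties']} estimated casualties)")
print(f"Human override selects target {human_choice} "
      f"({targets[human_choice]['estimated_civilian_casualties']} estimated casualties)")
```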
The duty to take constant care must always be interpreted in light of the purpose set out in Article 57 of Additional Protocol I to the Geneva Conventions of 12 August 1949, which is to “spare the civilian population, civilians and civilian objects.”137 The paradigm of “the more
150. See, e.g., Chantal Grut, The Challenge of Autonomous Lethal Robotics to International
Humanitarian Law, 18 J. CONFLICT & SEC. L. 5, 14–15 (2013); Mary L. Cummings, Automation and
Accountability in Decision Support System Interface Design, 32 J. OF TECH. STUD. 23 (2006).
151. Kathleen Mosier et al., Aircrews and Automation Bias: The Advantages of Teamwork?, 11 INT’L
J. AV. PSYCHOL. 1 (2001).
152. See, e.g., Grut, supra note 150, at 14–15 (discussing the 1988 USS Vincennes incident).
153. Cosima Gretton, The Dangers of AI in Healthcare: Risk homeostasis and automation bias,
TOWARDS DATA SCIENCE (June 24, 2017), https://towardsdatascience.com/the-dangers-of-ai-in-
health-care-risk-homeostasis-and-automation-bias-148477a9080f.
154. See discussion supra Section III.D.3.b.
155. Grut, supra note 150, at 19.
156. Kevin Neslage, Does “Meaningful Human Control” Have Potential for the Regulation of
Autonomous Weapon Systems?, 6 NAT’L SEC. & ARMED CONFLICT L. REV. 151, 173–4 (2015).
157. See discussion supra Section III.D.3.c.
158. Schuller, supra note 14, at 409–13.
159. Schuller, supra note 14, at 408; see also Prosecutor v. Delalić, Case No. IT-96-21-T,
Judgment, ¶ 395 (Int’l Crim. Trib. for the Former Yugoslavia Nov. 16, 1998) (“[N]ecessary and
reasonable measures” are “limited to such measures as are within someone’s power, as no one can
be obliged to perform the impossible.”).
160. Schuller, supra note 14, at 408.
161. Id.
176. Magnus Stensmo & Terrence J. Sejnowski, Learning Decision Theoretic Utilities Through
Reinforcement Learning, in ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 9 1061–1067
(Michael C. Mozer, Michael I. Jordan, & Thomas Petsche eds., 1996), http://dl.acm.org/citation.
cfm?id=2998981.2999130 (last visited Dec. 21, 2018).
177. CUSTOMARY INTERNATIONAL HUMANITARIAN LAW, supra note 71, at 54.
178. Margulies, supra note 18, at 420.
179. Id.
180. Bhuta, Beck & Geiss, supra note 90, at 370.
181. Id. at 375.
IV. CONCLUSION
The future implementation of machine learning techniques such as deep learning and reinforcement learning in AWSs demands a radical rethink of notions of “meaningful human control.” A Learning AWS may in the future comply with IHL just as well without human control as with it, or even better. Or it may require human control to comply with IHL. The point is that a case-by-case analysis is required. Banning these weapons in all instances because they cannot be meaningfully controlled could mean losing a potential instrument for minimizing human suffering in future conflicts.
Instead, the paradigm of “the more human control, the better,” currently favored by many scholars and members of the international community, should be reconsidered. Schuller provides the most workable framework in the context of Learning AWSs by shifting the focus from human control to predictability, i.e., whether a Learning AWS can predictably comply with IHL. This Article argues that this also rightly shifts the focus to the development of prophylactic measures to ensure that machine learning weapons can comply with IHL rules in the first place. In a machine learning paradigm, it is at the stage of design, testing, and verification that human control and human supervision could be most meaningful.
AlphaGo Zero was able to greatly surpass human abilities because it
was not prone to “the preconceived notions, rules of thumb, and conventional wisdom upon which most human decision-makers rely.”182
The AlphaGo Zero experience should inspire a fresh and more
nuanced approach to the application of IHL to these new technologies,
in order to fully leverage their potential to minimize human suffering
in armed conflict.