CHAPTER 3. ROBOTICS AND MILITARY OPERATIONS: POLICY IMPLICATIONS
Tony Battista
this phenomenon, especially in light of developments in autonomous armed systems?
In the current and future security environment, increasingly defined by asymmetric and unpredictable threats, international laws and norms, both new and revised, must grapple with emerging challenges in order to prevent or minimize the loss of life, whether at human hands or by machine. It is clear that nonstate enemy combatants are unlikely to act in accordance with international laws regarding the use of autonomous systems, and we should also question whether certain states would comply. Belligerents that exploit the ambiguities of these emerging autonomous (and disruptive) technologies to gain an edge would make compliance with international norms and regimes even more profoundly complex, not less.
Notwithstanding these enormous challenges, the participants of the 2015 Kingston Conference on International Security (KCIS) rightly acknowledged the need to further the understanding of the legal, ethical, and strategic implications of autonomous and semi-autonomous systems. Robotics is still at a pioneering stage and, as such, we have much to learn and to discover about these systems’ full potential, implications, and lethality. Learn we must, moreover, as the advances are accelerating at an impressive pace. Nuclear weapons were considered unthinkable for future use after Hiroshima and Nagasaki, yet their development continued thereafter at an alarming pace. Unlike the post-nuclear age, however, there is currently no comparable stigma attached to robotics, and the international community has yet to define what even constitutes an autonomous system. Hence, the argument can be made that a clear grasp of the policy implications, and the development of a credible (and enforceable) control regime, will remain elusive for quite some time.
Controversy abounds when trying to demarcate autonomous systems and, more broadly, robots. In the United States, for instance, there are attempts to define robotic systems by making distinctions between the execution and the performance of a machine. Alternatively, some academics base their definitions on a more technical level and argue that a robot is composed of sensors, processors, and tools. The lack of consensus on an internationally accepted definition has hindered the development of laws governing these systems. Several scholars at the conference maintained that a legal assessment of these systems requires examining a particular weapon in a particular context.
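To make that technical definition concrete, the following is a minimal illustrative sketch, not drawn from this chapter or from any actual system, of a robot modeled as sensors feeding a processor that drives tools. All names here are hypothetical, and the example assumes nothing beyond the sensors-processors-tools decomposition itself.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical model of the "sensors, processors, and tools" definition:
# the robot senses its environment, processes the readings, and acts.

@dataclass
class Robot:
    sensors: List[Callable[[], float]]        # each sensor returns a reading
    processor: Callable[[List[float]], str]   # maps readings to a command
    tools: Dict[str, Callable[[], None]]      # command name -> actuator

    def step(self) -> None:
        readings = [sense() for sense in self.sensors]  # 1. sense
        command = self.processor(readings)              # 2. process
        self.tools[command]()                           # 3. act

# A trivial thermostat-like example.
robot = Robot(
    sensors=[lambda: 21.5],  # stubbed temperature sensor
    processor=lambda r: "heat" if r[0] < 20.0 else "idle",
    tools={"heat": lambda: print("heater on"),
           "idle": lambda: print("standing by")},
)
robot.step()  # prints "standing by"
```

On this component view, the definitional controversy reduces to how much of the processor stage is delegated to the machine rather than to a human, which is precisely the boundary the conference participants struggled to fix.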
Moreover, a number of attendees at the conference
took issue with the Human Rights Watch campaign
to prohibit the rise of “killer robots.” Human Rights
Watch contends that these:
systems complies with the Law of Armed Conflict. Arguably, a major point of legal contention revolves around the reliability of these systems: we fear a science-fiction scenario in which robotic systems surpass our own intelligence, or we fear that autonomous systems are not intelligent enough to be reliable when paired with lethal ordnance.
This possibility underlines the threat potential of autonomous systems to the global community at large. Even if autonomous systems are intended for use by allies to undertake surveillance activities or as a force multiplier, potential users of these systems (both friend and foe) will always find unforeseen applications for these devices. To assess thoroughly the threat potential of these devices, as well as how the Canadian Armed Forces (CAF) and its allies and partners should respond, one must consider all the possible ways these systems might be used, rather than simply focus on how we use them now or the manner in which they were intended to be used.
With rapid technological growth come challenges, such as defining robotics and their legal applications in combat. However, this rapid growth also creates opportunities, which we should view as both an evolution and a potential revolution in the security environment. Robotics is unlikely to replace all aspects of human control and oversight in combat; yet it gives us the capabilities we need to wage a smarter form of warfare, including the promise of reducing the risk to our soldiers. Ultimately, technology will advance and war will persist, as we continue to face determined enemies and threats that we have yet to appreciate. Consider the following from a policy implications perspective:
1. The trend toward further development of quasi- or fully autonomous systems for military purposes will continue, and the use of these systems is virtually inevitable. We should think about how to deal with their implications, rather than stick our heads in the sand and pretend this will not happen, or focus all of our energy on preventing their development in the first place.
2. Policymakers are generally not well prepared for tough decisions with long-term, strategic implications. This is even more so in democratic states, which usually operate on cyclical 4- or 5-year horizons. As such, more effort needs to be made to prepare decision makers to think and act on longer-term horizons.
3. To paraphrase George Friedman, war is an old dance now being accompanied by new musical instruments. We must stay in step with these new instruments; otherwise, we may find ourselves not only off the dance floor but under it!
4. For Canada and like-minded allies, there is a need to strengthen the focus on collaboration on innovation, interoperability, and integration (CI2I, or “see eye-to-eye”) regarding new and emerging autonomous systems. The focus has to be broadened beyond the existing American, British, Canadian, Australian, and New Zealand Armies’ Program, which concentrates on the interoperability of autonomous systems. This approach would give Canada a better chance to stay abreast of new technological breakthroughs, mitigate the possibility of adversaries developing and using robotics against us, and provide a better understanding of the implications of autonomous systems. A new international, interdisciplinary Manhattan Project for autonomous systems could be a visionary step, one that would give like-minded states an edge in the development of autonomous systems and mitigate the use of these same systems against them by unscrupulous groups and rogue states.
5. There is a need to invest in effective wargaming with autonomous systems, including broadening our understanding of the implications for command and control (C2). These systems are already being used, in varying degrees of sophistication, at the tactical level in many military operations around the world. If “killer robots” were given the ability to select and engage targets without human intervention, what are the implications for C2 nodes at the various levels of military operations (tactical, operational, strategic/grand strategic, and political)? Perhaps the most complex implication is the danger of the “moral de-skilling” of the human military professional at all levels, and of replacing the human at crucial decision-making nodes that have broad implications for the conduct of military operations (as a means to an end). According to a recent article by Megan Spurrell, fully autonomous weapons are capable of detecting and engaging targets without human intervention. The technology remains in experimental development, but experts warn that it will not take long to transform the next generation of unmanned aerial vehicles (UAVs) into “killer robots.” The Bureau of Investigative Journalism estimates that all that is needed is an algorithm allowing a drone to fire missiles on its own recognized targets, and we are not far from reaching this stage.3 (The sketch following this list illustrates the human-in-the-loop distinction at issue here.)
6. While robotics is no silver bullet for either deterring war or waging it successfully, mitigating surprise by a ruthless adversary is essential. It matters whether we can someday advance technology to a state whereby autonomous systems completely replace and supplant humans, but it is even more important to understand how a determined adversary might use technological advancements to wage war, including the unpleasant possibility of humans becoming robots themselves! Ultimately, and despite the largely unconstrained actions of some barbaric groups, we should continue our efforts to ensure that organized violence remains a tool of last resort. It is one thing for machines to kill each other; it is another for machines to decide how, when, and why to kill human beings.
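To ground the C2 question raised in point 5, the following is a minimal, purely illustrative sketch, not drawn from this chapter or from any fielded system, contrasting an engagement cycle that retains a human decision node with a fully autonomous one. Every name and the authorization interface here are hypothetical assumptions.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: the same detect-decide-engage cycle with and
# without a human decision node. What moves is the locus of authority,
# not the sophistication of the algorithms.

@dataclass
class Track:
    track_id: str
    classified_hostile: bool  # output of some assumed onboard classifier

def human_authorizes(track: Track) -> bool:
    """Stand-in for a human operator at a C2 node reviewing the engagement."""
    answer = input(f"Engage track {track.track_id}? [y/N] ")
    return answer.strip().lower() == "y"

def engage(track: Track) -> None:
    print(f"Engaging {track.track_id}")  # placeholder for weapon release

def human_in_the_loop(tracks: List[Track]) -> None:
    # A human approves every engagement; the machine only recommends.
    for track in tracks:
        if track.classified_hostile and human_authorizes(track):
            engage(track)

def fully_autonomous(tracks: List[Track]) -> None:
    # The "killer robot" case: selection and engagement with no human node.
    for track in tracks:
        if track.classified_hostile:
            engage(track)
```

In the first cycle, every engagement passes through a human decision node that a C2 hierarchy can audit, delay, or veto; in the second, the only residual point of human influence is the classifier built before deployment. That shift of the decision node out of the chain of command is precisely the implication point 5 asks wargaming to explore.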
ENDNOTES - CHAPTER 3
2. Ibid.