Report Part Title: ROBOTICS AND MILITARY OPERATIONS: POLICY IMPLICATIONS
Report Part Author(s): Tony Battista
Report Title: ROBOTICS AND MILITARY OPERATIONS
Report Editor(s): William G. Braun III, Stéfanie von Hlatky, Kim Richard Nossal
Published by: Strategic Studies Institute, US Army War College (2018)
Stable URL: https://www.jstor.org/stable/resrep20100.7
CHAPTER 3. ROBOTICS AND MILITARY OPERATIONS: POLICY IMPLICATIONS

Tony Battista

While predictions in military affairs have always proven challenging, one can identify emerging trends in the military use of autonomous systems that are worthy of serious consideration by the scientific community, military planners and practitioners, scholars, legal experts, and policymakers alike. This chapter aims to discern the policy implications of the use of autonomous systems in future conflict and military operations.
One thing is relatively certain: geopolitics, technology, and war remain inseparable. Technology, geopolitics’ companion, has evolved dramatically: nuclear weapons, satellites, the Global Positioning System (GPS), precision-guided weapons systems, the microchip and nanotechnology, artificial intelligence and robotics, and huge advances in communication technology, including social media—among other wonders and horrors—have changed not only the rules of war but also the circumstances under which war is possible and to what end! Arguably, more than ever, the distinctions between criminal acts, terrorist acts, and war are increasingly blurred. We now live in a high-tech versus low-tech world, often confronted by the dark-age mentality of parasites and chameleons who recognize no legal, moral, or ethical standards to constrain their violent actions. Some nonstate groups—and even self-proclaimed states—have no compunction about dying for their cause; in fact, they plan on dying! So how does one rationalize
this phenomenon, especially in light of developments
in autonomous armed systems?
In the current and future security environment, increasingly defined by asymmetric and unpredictable threats, international laws and norms—both new and revised—must grapple with emerging challenges in order to prevent or minimize the loss of life, whether at human hands or by machine. It is clear that nonstate enemy combatants are unlikely to act in accordance with international laws regarding the use of autonomous systems. We should also question whether certain states would even comply. Belligerents exploiting the ambiguities of these emerging autonomous (and disruptive) technologies to gain an edge would make compliance with international norms and regimes even more profoundly complex, not less.
Notwithstanding these enormous challenges, the 2015 Kingston Conference on International Security (KCIS) participants rightly acknowledged the need to further the understanding of the legal, ethical, and strategic implications of autonomous and semi-autonomous systems. Robotics is still at a pioneer stage and, as such, we have much to learn and discover about these systems’ full potential, implications, and lethality. Moreover, learn we must, as the advances are accelerating at an impressive pace. Nuclear weapons were considered unthinkable for future use after Hiroshima and Nagasaki, yet their development continued thereafter at an alarming pace. Unlike the post-nuclear age, however, there is currently no comparable robotic stigma, and the international community has yet to define what even constitutes an autonomous system. Hence, the argument can be made that the policy implications and the development of a credible (and enforceable) control regime will remain elusive for quite some time.

Controversy abounds when trying to demarcate autonomous systems and, more broadly, robots. For instance, in the United States, there are attempts to define robotic systems by making distinctions between the execution and performance of a machine. Alternatively, some academics base their definitions on a more technical level and argue that a robot is composed of sensors, processors, and tools. The lack of consensus on an internationally accepted definition has hindered the development of laws governing these systems. Several scholars at the conference maintained that, in order to make a legal assessment of these systems, one needs to examine a particular weapon in a particular context.
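To make the sensors-processors-tools definition concrete, the following is a minimal sketch in Python. It is purely illustrative: every class and function name is hypothetical and invented for this example, and it describes no fielded system. Its point is architectural: the degree of autonomy lives in the "processor" stage, which is exactly where a human can be kept in, or removed from, the loop.

```python
# Illustrative sketch of the academic "sensors, processors, and tools"
# decomposition of a robot. All names are hypothetical; the only claim
# is architectural: autonomy is a property of the processing stage.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Observation:
    """Something a sensor reports about the environment."""
    label: str
    confidence: float


def camera_sensor() -> List[Observation]:
    # Sensors: a stand-in for any sensing hardware, returning canned data here.
    return [Observation("vehicle", 0.92), Observation("person", 0.40)]


def process(observations: List[Observation],
            approve: Callable[[Observation], bool]) -> List[Observation]:
    # Processors: the decision stage. Whether a human supplies `approve`,
    # and for which decisions, is precisely where debates over autonomy sit.
    return [o for o in observations if o.confidence > 0.9 and approve(o)]


def effector(decisions: List[Observation]) -> None:
    # Tools: a harmless actuator that merely logs what was decided.
    for d in decisions:
        print(f"acting on: {d.label}")


def human_approval(observation: Observation) -> bool:
    # Human-in-the-loop: in this sketch the human always withholds consent.
    return False


if __name__ == "__main__":
    effector(process(camera_sensor(), approve=human_approval))  # prints nothing
```

On this reading, replacing the `approve` callback with an unconditional `lambda o: True` is all that separates a semi-autonomous architecture from a fully autonomous one, which is why the definitional debates described above concentrate on the processing stage rather than on the sensors or the tools.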
Moreover, a number of attendees at the conference took issue with the Human Rights Watch campaign to prohibit the rise of “killer robots.” Human Rights Watch contends that these:

‘killer robots,’ would be able to select and engage targets without human intervention. Precursors to these weapons, such as armed drones, are being developed and deployed by nations including China, Israel, South Korea, Russia, the United Kingdom, and the United States. It is questionable that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity.1

For these reasons, Human Rights Watch has called for “a preemptive ban on the development, production, and use of fully autonomous weapons.”2 The counter-argument asserts that an arms control approach is not meaningful and is, in fact, counterproductive. Rather than preventing their development, greater efforts should be made to ensure that the use of robotic systems complies with the Law of Armed Conflict. Arguably, a major point of legal contention revolves around the reliability of these systems: we fear either a science-fiction scenario in which robotic systems surpass our own intelligence, or that autonomous systems are not intelligent enough to be reliable when paired with lethal ordnance.
This possibility underlines the threat potential of autonomous systems to the global community at large. Even if autonomous systems are intended for use by allies to undertake surveillance activities or as a force multiplier, potential users of these systems (both friend and foe) will always find unforeseen applications for these devices. To assess thoroughly the threat potential of these devices, as well as how the Canadian Armed Forces (CAF) and its allies and partners should respond, one must consider all the possible ways these systems might be used, rather than simply focus on how we use them now or the manner in which they were intended to be used.
With rapid technological growth come challenges, such as defining robotics and their legal applications in combat. However, this rapid growth also creates opportunities. We should view these opportunities as both an evolution and a potential revolution in the security environment. Robotics is unlikely to replace all aspects of human control and oversight in combat. Yet it gives us the capabilities we need to wage a smarter form of warfare, including the promise of reducing the risk to our soldiers. Ultimately, technology will advance, and war will persist, as we continue to face determined enemies and threats that we have yet to appreciate. Consider the following policy implications:
1. The trend toward further development of quasi- or fully-autonomous systems for military purposes will continue, and the use of these systems is virtually inevitable. We should think about how to deal with their implications, rather than stick our heads in the sand and pretend that it will not happen, or focus all of our energy on preventing their development in the first place.
2. Policymakers are generally not well prepared
for tough decisions with long-term, strategic
implications. This is even more so in democratic
states, which usually have 4- or 5-year cyclical
horizons. As such, more efforts need to be made
to prepare decision makers to think and act on
longer-term horizons.
3. To paraphrase George Friedman, war is an old dance now being accompanied by new musical instruments. We must stay in step with these new instruments; otherwise, we may find ourselves not only off the dance floor but under it!
4. For Canada and like-minded allies, there is a need to strengthen the focus on collaboration on innovation, interoperability, and integration (CI2I, or “see eye-to-eye”) regarding new and emerging autonomous systems. The focus has to be broadened beyond the existing American, British, Canadian, Australian, and New Zealand Armies’ Program, which concentrates on the interoperability of autonomous systems. This approach would give Canada a better chance to stay abreast of new technological breakthroughs, mitigate the possibility of adversaries developing and using robotics against us, and provide a better understanding of the implications of autonomous systems. A new international, interdisciplinary Manhattan Project for autonomous systems could be a visionary step that would give like-minded states an edge in the development of autonomous systems and mitigate the use of these same systems against them by unscrupulous groups and rogue states.
5. There is a need to invest in effective wargaming with autonomous systems, including broadening our understanding of the implications for command and control (C2). These systems are already being used—in varying degrees of sophistication—at the tactical level in many military operations around the world. If “killer robots” were given the ability to select and engage targets without human intervention, what are the implications for C2 nodes at the various levels of military operations (tactical, operational, strategic/grand strategic, and political)? Perhaps the most complex implication is the danger of “moral de-skilling” the human military professional at all levels, and of replacing the human at crucial decision-making nodes that have broad implications for the conduct of military operations (as a means to an end). According to a recent article by Megan Spurrell, fully autonomous weapons are capable of detecting and executing targets without human intervention. The technology remains in experimental development, but experts warn that it will not take long to transform the next generation of unmanned aerial vehicles (UAVs) into “killer robots.” The Bureau of Investigative Journalism estimates that all that is needed is an algorithm allowing a drone to fire missiles at targets it has itself recognized, and we are not far from reaching this stage.3
6. While robotics is no silver bullet for either deterring war or waging it successfully, mitigating surprise by a ruthless adversary is essential. While it matters whether we will someday advance technology to a state in which autonomous systems can completely replace humans, it is even more important to understand how a determined adversary might use technological advancements to wage war, including the unpleasant possibility of humans becoming robots themselves! Ultimately—and despite the very unconstrained actions of some barbaric groups—we should continue our efforts to ensure that organized violence remains a tool of last resort. It is one thing for machines to kill each other; it is another for machines to decide how, when, and why to kill human beings.

In conclusion, Canada and its close allies are urged not only to keep a close eye on emerging technological trends, but also to participate in the development of these technologies and to understand the impact of their use, while continuing to embrace the Law of Armed Conflict. War may not be the best way of solving differences, but it does ensure that differences are not settled for us. We should not limit the pursuit of technology that would allow us to defend ourselves and to settle those differences—in our interest—with fewer losses.

ENDNOTES - CHAPTER 3

1. Human Rights Watch, “Killer Robots,” n.d., available from https://www.hrw.org/topic/arms/killer-robots.

2. Ibid.

3. Megan Spurrell, “Analysis: Worst of Both Worlds: How automated warfare is fusing American foreign policies,” Conference of Defence Associations (CDA) Institute, June 3, 2015, available from https://cdainstitute.ca/worst-of-both-worlds-how-automated-warfare-is-fusing-american-foreign-policies/.
