
Acting vs. Being Moral: The Limits of Technological Moral Actors

Aaron M. Johnson
Electrical and Systems Engineering Department
University of Pennsylvania
Philadelphia, Pennsylvania, USA
Email: [email protected]

Sidney Axinn
Philosophy Department
University of South Florida
Tampa, Florida, USA
Email: [email protected]

Abstract—An autonomous robot (a physical or digital artificial being) may be capable of producing actions that, if performed by a human, could be considered moral, by mimicking its creators' actions or following their programmed instructions, but it cannot be moral. Morality cannot be fully judged by any behavioral test, as the answer to moral questions is less important than the process followed in arriving at the answer (as evidenced by disagreement among ethicists on the correct answer to many such questions, based on the individual's moral style). The distinction between acting and being moral was recently considered for lethal autonomous military robots [1], and in this paper is further clarified in the context of broader applications. In addition, such a distinction has implications for what types of tasks autonomous robots should not be allowed to do, based on what must be moral decisions. Here we draw a distinction between what might be illegal for an agent to do (which relates more to the agreed-upon laws of the current political leadership), and what actions are so innately moral decisions that we cannot delegate them to a machine, no matter how advanced it appears.

This paper is to appear in the 2014 IEEE International Symposium on Ethics in Engineering, Science, and Technology.

I. INTRODUCTION

Humans react better to robots that are anthropomorphized, with life-like appearance or behavior, allowing for more familiar and comfortable interaction [2]. Indeed, designing a robot's behavior to be more human-like may be an unavoidable key to the widespread adoption of such technology in human environments [3]. However, when a robot acts like a human does, we may be tricking ourselves into believing that it is really thinking like humans do. When it appears to have emotions, it is only natural for the people it interacts with to attribute to the robot the emotions that they themselves would have in that situation. These behaviors may be a beneficial factor in robots' social integration, but it is important to be clear that reproducing such actions is different from experiencing the underlying emotions. Commanding a robot to appear sad does not mean that the robot is experiencing sadness in the same way that humans do. The physical expression is a consequence, not the source, of experiencing an emotion.

Similarly, the consequences of morality are often observed through actions, but those actions are not the source of the morals. Instead, the source is the internal decision to take a certain action. In this way morality requires free will – it is the choice to do the right thing instead of the wrong thing. This first requires the capability of doing the wrong thing, which itself raises concerns [4]. But more critically, a programmed robot cannot make such a choice; instead it may only do that which its algorithms dictate. Such algorithms may be quite complicated, including e.g. neural networks, machine learning, or stochasticity, but the result is still the outcome of the code's execution applied to the sensory input. Presented with a given example input, it might produce the same action as an independent observer would. The observer might think that the robot appeared to act morally, but that is quite different from the robot actually making the moral decision to choose right over wrong.

In this paper we start by exploring this difference between being moral and acting moral, in Sections II-A and II-B, respectively. This leads to the question, in Section II-C, of whether there can ever be a test of morality that a robot could hope to pass. We conclude that an autonomous robot can only mimic moral actions, and therefore in Section III-A we propose a restriction on the use of robots in situations where acting upon a moral decision is necessary. This is surely not the only ethical restriction on the use of such technology, but it is the focus of the present paper. As such, we have found only a short, though quite important, list of roles to which the proposed restriction would apply. It is thus a major hurdle to the adoption of autonomous systems in only a few situations, which Section III-B takes to be an optimistic view of the potential value of new technology more broadly in the future.

Throughout the course of this paper we will use the term “robot” to refer to any autonomous cyber-physical system. This excludes tele-operated systems such as remotely piloted drones, which certainly have legal and moral questions to be considered but are outside the scope of this paper. The focus here is on actions, and hence the use of the term robot (whose algorithms have the ability to sense and act upon the world) and not simply computer program, although many of the arguments apply equally to both.

We also limit the scope of this paper to conventional, pre-singularity [5] levels of intelligence, where the algorithms that control the operation of the system may be quite complicated, but still execute finite programs and are formally equivalent to functional evaluation. In a hypothetical future where artificial intelligence has become so advanced that it is truly distinct from this sort of program and could be considered a person in its own right, this topic, among many others, will need to be revisited. However, for now such levels of intelligence lie in the realm of science fiction.
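To make this scope concrete, the following is a minimal sketch (our illustration only; the sensor fields, action names, and scoring weights are hypothetical) of a robot controller in the sense used here: however elaborate the scoring, the chosen action is simply the evaluation of a fixed function of the program, the sensory input, and any random seed.

```python
import random

def policy(sensor_reading: dict, seed: int = 0) -> str:
    """Map a sensory input to an action name.

    The scoring below is a stand-in for something far more complicated
    (e.g. a neural network); either way the output is fully determined by
    the program, the input, and the seed -- i.e., functional evaluation.
    """
    rng = random.Random(seed)  # stochasticity, but reproducible
    scores = {
        "stop": 1.0 if sensor_reading.get("obstacle", False) else 0.0,
        "turn_left": sensor_reading.get("free_space_left", 0.0),
        "turn_right": sensor_reading.get("free_space_right", 0.0),
        "go_forward": sensor_reading.get("free_space_ahead", 0.0),
    }
    # A small random perturbation does not change the functional character
    # of the mapping: the same (input, seed) pair gives the same action.
    noisy = {a: s + 0.01 * rng.random() for a, s in scores.items()}
    return max(noisy, key=noisy.get)

reading = {"obstacle": False, "free_space_ahead": 0.9, "free_space_left": 0.3}
assert policy(reading, seed=42) == policy(reading, seed=42)
print(policy(reading, seed=42))  # always "go_forward" for this input
```

Swapping the hand-written scores for a learned model changes the complexity of the function, not its character as a function.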
II. MORAL ACTORS

A. Being Moral

First we shall consider the characteristics of a human person, as a basis for comparison. This concept of a person is taken from Immanuel Kant's work on the subject: “personality is the characteristic of a being who has rights, hence a moral quality” [6, p. 220]. For the foreseeable future, any robot in question will surely not meet this requirement and therefore will not be considered a person. However, even if robots are not persons they might still have moral reasoning faculties equivalent to those of a person (though we argue in the sequel that they cannot).

One aspect of a person that is particularly relevant here is the imperative of morality, and that “the concept of freedom, which points in the direction of the concept of duty, is that of a person” [6, p. 227]. To Kant a person has duties indicated by the categorical imperative. The categorical imperative is the demand that you “act as though the maxim of your action were, by your will, to become a universal law of nature” [6, p. 39]. This presupposes that such persons have free will, and can decide whether to follow the categorical imperative or instead to follow a selfish principle. According to Kant, to freely choose to follow the categorical imperative is to impose on oneself the principle of morality.

Even if a person is not a Kantian they may still certainly be considered moral and simply follow a different moral style [7]. Rather than considering all of humanity to be the moral benefactor, they may be utilitarian and consider some of humanity to be the moral benefactor, or consider God or country to be the ultimate benefactor. Whatever the style of morality followed, it is that the person chooses to follow it and make some sacrifice for their moral benefactor that makes it a moral action. It then appears that to freely choose to follow a moral style more generally is to impose on oneself the principle of morality.

The freedom to make this choice is an oft-debated topic in philosophy, as “the possibility of freedom cannot be directly proved, but only indirectly, through the possibility of the categorical imperative of duty” [6, p. 213]. Furthermore, “the concept of freedom emerges from the categorical imperative of duty” [6, p. 227]. If one is considering whether or not to obey the demands of duty, one apparently has the free will to do either one.

Stated another way, a major feature of rational beings is that “everything in nature works according to laws. Only a rational being has the capacity of acting according to the conception of laws, i.e. according to principles. This capacity is will” [8, p. 29]. It appears therefore that in order to act according to principles, as required by the categorical imperative, we again find the requirement for free will.

A person has rights, duties, and free will, and imposes on him or herself the categorical imperative. In addition, as the Kant scholar Jane Kneller has put it, “...for the possibility of moral motivation, the imagination is indeed strangely but obviously at the root of all human experience” [9, p. 161]. For moral motivation, as Kneller has said, one must additionally have the imagination to consider the effect on a person of various different actions. This requirement of imagination goes beyond the question of free will – to satisfy the categorical imperative one must have both the faculties to consider the result should the maxim of your actions be a universal law, and the freedom to change your actions based on that result.

Kneller is far from the only Kantian to emphasize the role of the imagination. Bernard Freydberg explains that “the moral law, its forms, and the maxims that flow from it, are one and all synthetic a priori judgments and therefore include imagination” [9, p. 120]. And Fernando Costa Mattos has emphasized the “imagination, guided by morality” [9, p. 138]. Imagination therefore appears to be a prerequisite for morality. However, imagination by its nature requires a novel consideration of the world, different from the way one has considered it in the past. Robots may be quite good at simulating many possible laws of physical interaction, and can incorporate parametric or stochastic variations on those laws. However, they cannot by their nature consider anything truly novel – anything beyond the classes of variations preprogrammed into their algorithms.
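As an illustrative sketch (ours; the physical model and parameter ranges are hypothetical), a robot's “imagination” of possible outcomes typically amounts to simulated rollouts whose variation is drawn from a parametric family fixed in advance by its programmers:

```python
import random

def simulate_throw(v0: float, gravity: float, drag: float, dt: float = 0.01) -> float:
    """Toy projectile model: return the horizontal distance travelled."""
    x, y, vx, vy = 0.0, 0.0, v0, v0
    while y >= 0.0:
        vx -= drag * vx * dt
        vy -= (gravity + drag * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

rng = random.Random(0)
rollouts = []
for _ in range(1000):
    # The only "novelty" available: gravity and drag sampled from
    # preprogrammed ranges -- a fixed class of variations on known laws.
    g = rng.uniform(9.5, 10.1)
    b = rng.uniform(0.0, 0.2)
    rollouts.append(simulate_throw(v0=5.0, gravity=g, drag=b))

print(f"imagined outcomes span {min(rollouts):.2f} m to {max(rollouts):.2f} m")
```

However many rollouts are drawn, every “imagined” world remains a member of the same preprogrammed family; nothing outside that class can be posed.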
In summary, a person has not only rights, duties, and free will, but also the imagination to understand the effect of different actions, and the ability to impose on him or herself the categorical imperative. How close do robots come to the features of a human person, the features that make for moral motivation and moral action? Such robots certainly do not have rights. They do have programmed commands that seem at first to be close to the concept of duties. However, a duty is not a command that must be followed; rather it is a desirable choice that one should follow. Robots do not appear to have free will – if they did we might call them “out of control.” While they may have a range of choices that they consider in a given situation, that range is specified by their programs, not by themselves. Imagination requires one to consider the world not as it is, but as it could be, while robots can only consider the classes of world models allowed by their algorithms. Without free will and imagination, they cannot impose the categorical imperative on themselves; they cannot consciously sacrifice selfishness for morality. And, if they cannot do that, they cannot be moral.

B. Acting Moral

In a stage play good actors will convey the emotions of the character that they are portraying. By controlling their voice and movements they can tell a story about what that character is feeling and how they are experiencing the world. To do this well the director may have told them to pause before a certain line, or to follow another actor with their gaze. However, we would not say that they are necessarily experiencing that emotion; they are simply acting it out, especially if they were told what actions to take by the director.

If their character does something amoral, say stealing money from another character, we would not blame the actor. The action taken by the actor is the same as that of a street thief; however, they were simply acting out the amoral action that the writer put into the script. The writer telling an actor to act out an amoral action does not mean that the actor isn't moral. Similarly, programming a robot to follow a certain action does not mean that the robot is moral. Neither has considered whether the principle of their actions could be generalized. Neither has chosen a moral style, or considered sacrifice for the good of themselves, their nation, their religion, or for all of humanity.
A stage actor is working from a script, which generally allows for only one set of outcomes. A robot is working from its programming, and the programmed rules of morality given by its programmer are likewise fundamentally limited – they cannot possibly cover all scenarios, for that would take infinite memory. The program might be quite complicated, evaluating thousands or millions of scenarios, and so it may appear to have many options to choose from. Ultimately, however, the robot can act out only those actions that have been programmed into its finite memory, and of those it will choose the action dictated by its program as a result of the given input.
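As a minimal sketch of this point (ours, with hypothetical action names and scoring), the program below evaluates hundreds of scenario variations, yet the value it returns is necessarily a member of a finite action set fixed at programming time:

```python
from itertools import product

ACTIONS = ("hold_position", "retreat", "call_human_operator")  # fixed repertoire

def evaluate(action: str, scenario: tuple) -> float:
    """Illustrative scoring of one action under one hypothesized scenario."""
    threat, visibility, civilians_nearby = scenario
    score = -float(threat) if action == "retreat" else float(visibility)
    if action == "call_human_operator" and civilians_nearby:
        score += 10.0  # hand hard cases back to a person
    return score

def choose() -> str:
    # Enumerate many scenario variations (in a real system these would be
    # generated around a sensor estimate) and total the scores...
    totals = {a: 0.0 for a in ACTIONS}
    for scenario in product(range(10), range(10), (False, True)):
        for action in ACTIONS:
            totals[action] += evaluate(action, scenario)
    # ...but whatever the totals, the returned value is one of ACTIONS;
    # nothing outside the programmed repertoire can ever be produced.
    return max(totals, key=totals.get)

print(choose())
```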
Another simplified example of something that gives the appearance of a moral action is a video or other recording: “when playing back a video of a moral act, one would not say that the video was moral; it is simply replaying the moral act of its subject” [1, p. 135]. An autonomous robot is obviously much more than a recording, as it can interact with the physical world. But since the robot is programmed to respond to a certain situation in a way that appears moral, it is also obviously not choosing to follow a moral style or choosing to make a sacrifice.

C. Test of Morality

Some have argued that since humans are notoriously imperfect moral actors, a programmed system could eventually be more moral than a human system [10]. However, this notion that there can be a “test” of morality that a robot could hope to pass (or on which it could score higher than a human) is inherently flawed. An action can only be considered moral, according to Kant for example, if the actor chooses to follow the categorical imperative. There may be common agreement about what the result of that choice is in some circumstances; however, it is the choice, and not the result, that can be considered moral.

Furthermore, there is not always common agreement about what the correct “moral” choice is in ethical problems. A moral actor must give themselves the moral imperative by choosing some moral style [7]. Someone who is a Kantian (takes all of mankind as the moral beneficiaries) may disagree with someone who follows a religious style (who takes God, or gods, as the benefactor). Both are considered to be moral actors; they are simply following different moral styles. In this way it is not the answer to a moral question that is moral or not, but rather the process by which one comes up with the answer.

Morality is not simply following the law. For example, the laws of war are in part a set of rules, and therefore a test of what is or is not included in them is certainly possible. But while morality requires one to follow the laws of war, acting within this set of rules does not by itself make one's actions moral. Furthermore, the laws of war are not only a set of written-down rules; as stated in the U.S. Army Field Manual 27-10, “although some of the law of war has not been incorporated in any treaty or convention...this body of unwritten or customary law is firmly established” [11, p. 4]. Thus the written laws are not the entirety of what a moral actor must consider, and so even a test of the ability to follow the written laws of war would be incomplete.

III. LIMITS OF TECHNOLOGICAL USE

A. Disallowed Uses of Autonomy

In light of robots' inability to be moral actors, they should be excluded from performing certain roles.

1) Soldiers: The clearest action that robots should not be allowed to take is to kill people. As argued in e.g. [1], the decision to take a human life is inherently a moral decision. Fortunately there are few places where humans routinely take such actions, first and foremost being in war. Certainly in war humans have always used technology to assist in killing each other; however, it has always been the human making the decision and the human who initiates the action. As technology progressed from swords to crossbows, guns, missiles, and now drones, the physical distance between the attacker and the attacked has grown. But while the missile, for example, is self-propelled, it is launched by a human and its target trajectory is selected by a human.

While the battlefield is a different environment from general life, there are no fewer variations in the infinite possible situations a robot soldier might find itself in. Even if its use were restricted to a certain task [10], say building clearing, the possible scenarios it could encounter are still innumerable and therefore nonprogrammable. The humans that program and launch the robot cannot decide a priori how the robot should handle all of these scenarios, or even all of the possible actions the robot might need to take. As it cannot make a moral judgment, it must not be given the power to decide to attack a human. The use of autonomous robots in war must be limited to actions which do not require moral decisions, such as reconnaissance.

2) Politicians: We elect politicians (presidents, governors, legislators, etc.) in part to be our moral leaders. They are supposed to take action based on what they believe is right or wrong, and not based only on polls or their political party. Computers, and robots more generally, may certainly help our leaders. For example, a robot may provide tele-presence for politicians so they may interact both with leaders at the capitol and community members at home. They may help calculate or simulate the possible effects of a given law or policy, and their exacting and tireless calculations are key in informing the politician.

Robots cannot, however, be moral leaders and therefore cannot take the place of a politician. A robot cannot write a law that dictates what is right or wrong in a certain jurisdiction, or what a fair penalty should be. A robot cannot choose to declare a state of emergency, inconveniencing some to potentially save others. A robot cannot decide whether to fund levees or other municipal projects that on one hand are expensive and use public resources, but could save lives or property should a disaster strike. We ask our legislative and executive politicians to make these decisions, to weigh the good against the bad, and we place our trust in their moral leadership.
3) Judges and Juries: While robots can recount an entire code of laws in a fraction of a second, they cannot judge a case on its merits. One can conceive of an advanced computer program that, when fed the transcript of a court case, returns a ruling and sentence according to the laws in the current jurisdiction. However, to test such a routine one would need to compare its result to the consensus of several human judges, as any individual judge may deviate from case to case. Humans may not be perfect by this measure; however, the notion of perfect is ill defined here. So long as they are honorably interpreting the laws and considering the merits of a case, we accept some variability in our judges. The role of the judiciary is to interpret the laws, not to compute them.

In many countries you have a right to be judged by a jury of your peers. The main reason is that if a jury is required to make a decision that can have such a great impact on someone's life, stripping them of their freedom and sending them to jail for example, it is important for the decision to be made by a person or persons who can empathize both with the accused and with the accuser. Different jurors will have different moral styles and therefore they may come up with different verdicts. It is not the particular verdict in a particular case that makes the juror a moral actor, but the process of deciding right from wrong.

B. Uses of Autonomy

While the previous sections have shown that the use of robots for moral actions should be precluded, that does not mean that work on autonomous systems should be halted. While a robot and its algorithms cannot carry out a moral action on their own, the robot can certainly use its sensors, calculating abilities, and physical interactions to aid humans. Humans have always turned to technology to aid them in making decisions, and in carrying out actions based on those decisions. With ever more accurate and capable sensors, and sophisticated algorithms to process that sensor information, a robot can give a soldier more situational awareness. It can tell a soldier that the enemy soldier went into a house, or that it saw something move that it thinks is a civilian. The moral hazard arises if a soldier begins to substitute the robot's judgment for his or her own on what to do in a certain situation.

In fact the inability of a robot to make moral decisions does not appear to bar robots from many roles in which humans are currently employed. Building and driving cars do not inherently require moral actions but rather consistent and precise execution of the robot's programming – indeed these are some of the places where robotics is making great strides. Machines are often used to clean our laundry and dishes, though we don't often use the term robot in this case. As technology progresses it is easy to imagine robots helping in construction, logistics, retail, and other professions where it is in fact preferable that they simply and reliably perform the same actions, and not change them based on moral hazards. There will certainly be other issues to consider, and likely some roles that humanity decides not to relinquish to a robot, but the inability to be a moral actor is only a hindrance where such morality is required.

IV. CONCLUSION

Robotics, like any emerging technology, raises new ethical questions that humanity must carefully examine. It is far better to consider these questions now, while the technology is in its infancy, than to wait until after its use in the world. Many of the scientists who worked on the atomic bomb later regretted their part in the creation of such a weapon [12], even though the fundamental math and science needed in its creation certainly had academic merit in their own right.

Autonomous robots with no human in the loop cannot be moral actors. They lack both the imagination to conceive of the effects should the principle of their actions be made universal, and the free will to make the choice to follow a moral style. There is no test of morality that a robot could pass as such, as only the actions resulting from moral decisions are testable. They may appear to be acting morally, as they may take the same action we would expect a moral person to take, but that does not make them moral.

For these reasons robots should not be employed in situations requiring moral action. They cannot be trusted to decide on killing humans or on attacking buildings or vehicles; they should certainly have no autonomous lethal use. They are incapable of being moral leaders, and as such cannot replace humans in legislative, judicial, or executive governance. However, that leaves ample territory where robots can help humans by doing what they are good at: exact computations, mechanical strength, and tireless focus.

REFERENCES

[1] A. M. Johnson and S. Axinn, “The morality of autonomous robots,” Journal of Military Ethics, vol. 12, no. 2, pp. 129–141, 2013.
[2] B. R. Duffy, “Anthropomorphism and the social robot,” Robotics and Autonomous Systems, vol. 42, no. 3, pp. 177–190, 2003.
[3] R. Kirby, J. Forlizzi, and R. Simmons, “Affective social robots,” Robotics and Autonomous Systems, vol. 58, p. 322, March 2010.
[4] A. F. Beavers, “Between angels and animals: The question of robot ethics, or is Kantian moral agency desirable?” in Association for Practical and Professional Ethics, Eighteenth Annual Meeting, Cincinnati, Ohio, March 2009, pp. 5–8.
[5] V. Vinge, “The coming technological singularity,” Whole Earth Review, vol. 81, pp. 88–95, 1993.
[6] I. Kant, Opus Postumum (English Translation). Cambridge: Cambridge University Press, 1993, translated by Eckart Förster and Michael Rosen.
[7] S. Axinn, “Moral style,” The Journal of Value Inquiry, vol. 24, no. 2, pp. 123–133, 1990.
[8] I. Kant, Foundations of the Metaphysics of Morals. New York: The Liberal Arts Press, 1959, translated by Lewis White Beck.
[9] M. L. Thompson, Ed., Imagination in Kant's Critical Philosophy. Berlin: De Gruyter, 2013.
[10] R. Arkin, Governing Lethal Behavior in Autonomous Robots. Boca Raton, FL: CRC Press, 2009.
[11] Department of the Army, “FM 27-10: The Law of Land Warfare,” 1956, still authoritative. [Online]. Available: http://www.loc.gov/rr/frd/Military_Law/pdf/law_warfare-1956.pdf
[12] L. Szilard, “A petition to the President of the United States,” July 1945. [Online]. Available: http://www.dannen.com/decision/45-07-17.html
