When Is a Robot a Moral Agent
An important emotional attachment is formed between all the agents in this situ-
ation, but the attachment of the two human agents is strongest toward the dog. We
tend to speak favorably of the relationships formed with these animals using terms
identical to those used to describe healthy relationships with other humans.
The Web site for Guide Dogs for the Blind quotes the American Veterinary
Association to describe the human-animal bond:
The human-animal bond is a mutually beneficial and dynamic relationship between people
and other animals that is influenced by behaviors that are essential to the health and well-
being of both. This includes, but is not limited to, emotional, psychological, and physical
interaction of people, animals, and the environment.1
Certainly, providing guide dogs for the visually impaired is morally praisewor-
thy, but is a good guide dog morally praiseworthy in itself? I think so. There
are two sensible ways to believe this. The least controversial is to consider that
things that perform their function well have a moral value equal to the moral
value of the actions they facilitate. A more contentious claim is the argument that
animals have their own wants, desires, and states of well-being, and this auton-
omy, though not as robust as that of humans, is nonetheless advanced enough
to give the dog a claim for both moral rights and possibly some meager moral
responsibilities as well.
The question now is whether the robot is correctly seen as just another tool or
if it is something more like the technology exemplified by the guide dog. Even at
the present state of robotics technology, it is not easy to see on which side of this
disjunction reality lies.
No robot in the real world – or that of the near future – is, or will be, as cognitively
robust as a guide dog. Yet even at the modest capabilities of today’s robots, some
have more in common with the guide dog than with a simple tool like a hammer.
In robotics technology, the schematic for the moral relationship between the
agents is:
Programmer(s) → Robot → User
Here the distinction between the nature of the user and that of the tool can blur
so completely that, as the philosopher of technology Carl Mitcham argues, the
“ontology of artifacts ultimately may not be able to be divorced from the philos-
ophy of nature” (Mitcham 1994, p.174), requiring us to think about technology
in ways similar to how we think about nature.
I will now help clarify the moral relations between natural and artificial agents.
The first step in that process is to distinguish the various categories of robotic
technologies.
1 Retrieved from the Web site: Guide Dogs for the Blind; http://www.guidedogs.com/about-mission.html#Bond
Categories of Robotic Technologies
It is important to realize that there are currently two distinct varieties of robotics
technologies that have to be distinguished in order to make sense of the attribu-
tion of moral agency to robots.
There are telerobots and there are autonomous robots. Each of these
technologies has a different relationship to moral agency.
Telerobots
Telerobots are remotely controlled machines that make only minimal autono-
mous decisions. This is probably the most successful branch of robotics at this
time because these machines do not need complex artificial intelligence to run;
the operator provides the intelligence for the machine. The famous NASA Mars
Rovers are controlled in this way, as are many deep-sea exploration robots.
Telerobotic surgery has become a reality, and telerobotic nursing may soon follow.
These machines are now
routinely used in search and rescue and play a vital role on the modern battlefield,
including remotely controlled weapons platforms such as the Predator drone
and other robots deployed to support infantry in bomb removal and other combat
situations.
Obviously, these machines are being employed in morally charged situations,
with the relevant actors interacting in this way:
Operator → Robot → Patient/Victim
The ethical analysis of telerobots is somewhat similar to that of any technical
system where the moral praise or blame is to be borne by the designers, program-
mers, and users of the technology. Because humans are involved in all the major
decisions that the machine makes, they also provide the moral reasoning for the
machine.
There is an issue that does need to be explored further though, and that is the
possibility that the distance from the action provided by the remote control of
the robot makes it easier for the operator to make certain moral decisions. For
instance, a telerobotic weapons platform may distance its operator so far from
the combat situation as to make it easier for the operator to decide to use the
machine to harm others. This is an issue that I address in detail in other papers
(Sullins 2009). However, for the robot to be a moral agent, it is necessary that
the machine have a significant degree of autonomous ability to reason and act on
those reasons. So we will now look at machines that attempt to achieve just that.
Autonomous Robots
For the purposes of this paper, autonomous robots present a much more inter-
esting problem. Autonomy is a notoriously thorny philosophical subject. A full
discussion of the meaning of “autonomy” is not possible here, nor is it neces-
sary, as I will argue in a later section of this paper. I use the term “autonomous
strong stand on this position. His dispute with this claim centers on the fact that
robots will never have an autonomous will because they can never do anything
that they are not programmed to do (Bringsjord 2007). Bringsjord shows this
with an experiment using a robot named PERI, which his lab uses for experi-
ments. PERI is programmed to make a decision to either drop a globe, which rep-
resents doing something morally bad, or hold on to it, which represents an action
that is morally good. Whether or not PERI holds or drops the globe is decided
entirely by the program it runs, which in turn was written by human program-
mers. Bringsjord argues that the only way PERI can do anything surprising to the
programmers requires that a random factor be added to the program, but then its
actions are merely determined by some random factor, not freely chosen by the
machine, therefore, PERI is no moral agent (Bringsjord 2007).
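Bringsjord's dilemma can be restated in a few lines of code. The sketch below is a hypothetical stand-in, not PERI's actual control program: whichever branch runs, the outcome traces back either to a rule the programmers wrote or to a random draw, never to a choice made by the machine itself.

```python
import random

def peri_decide(use_random_factor: bool) -> str:
    """Hypothetical stand-in for PERI's globe decision, not Bringsjord's code."""
    if use_random_factor:
        # The behavior can now "surprise" the programmers, but only because
        # of the random draw, not because the machine chose anything.
        return random.choice(["hold the globe", "drop the globe"])
    # Otherwise the outcome is fixed entirely by the rule the programmers wrote.
    return "hold the globe"
```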
There is a problem with this argument. Because we are all products of
socialization, which is itself a kind of programming through memes, we are no
better off than PERI. If Bringsjord is correct, then we are not moral agents either,
because our beliefs, goals, and desires are not strictly autonomous: They are the
products of culture, environment, education, brain chemistry, and so on. It must
be the case that the philosophical requirement for robust free will demanded by
Bringsjord, whatever that turns out to be, is a red herring when it comes to moral
agency. Robots may not have it, but we may not have it either, so I am reluctant to
place it as a necessary condition for moral agency.
A closely related position is held by Bernhard Irrgang, who claims that “[i]n
order to be morally responsible, however, an act needs a participant, who is
characterized by personality or subjectivity” (Irrgang 2006). On this view, only
a person can be a moral agent. Because he believes it is not possible for a
noncyborg robot (that is, one that is not a human-machine hybrid) to attain
subjectivity, it is impossible for robots
to be called into moral account for their behavior. Later I will argue that this
requirement is too restrictive and that full subjectivity is not needed.
The third possible position is the view that we are not moral agents but robots
are. Interestingly enough, at least one person actually held this view. In a paper
written a while ago but only recently published, Joseph Emile Nadeau claims
that an action is a free action if and only if it is based on reasons fully thought
out by the agent. He further claims that only an agent that operates on a strictly
logical basis can thus be truly free (Nadeau 2006). If free will is necessary for
moral agency and we as humans have no such apparatus operating in our brain,
then using Nadeau’s logic, we are not free agents. Robots, on the other hand, are
programmed this way explicitly, so if we built them, Nadeau believes they would
be the first truly moral agents on earth (Nadeau 2006).2
2 One could counter this argument from a computationalist standpoint by acknowledging that it is
unlikely we have a theorem prover in our biological brain; but in the virtual machine formed by
our mind, anyone trained in logic most certainly does have a theorem prover of sorts, meaning that
there are at least some human moral agents.
have that quality. I mean to use the term autonomy as engineers do, simply that
the machine is not under the direct control of any other agent or user.
The robot must not be a telerobot or temporarily behave as one. If the robot
does have this level of autonomy, then the robot has a practical independent
agency. If this autonomous action is effective in achieving the goals and tasks of
the robot, then we can say the robot has effective autonomy. The more effective
autonomy the machine has, meaning the more adept it is in achieving its goals
and tasks, then the more agency we can ascribe to it. When that agency3 causes
harm or good in a moral sense, we can say the machine has moral agency.
Autonomy thus described is not sufficient in itself to ascribe moral agency.
Consequently, entities such as bacteria, animals, ecosystems, computer viruses,
simple artificial life programs, or simple autonomous robots – all of which exhibit
autonomy as I have described it – are not to be seen as responsible moral agents
simply on account of possessing this quality. They may very credibly be argued to
be agents worthy of moral consideration, but if they lack the other two require-
ments argued for next, they are not robust moral agents for whom we can plau-
sibly demand moral rights and responsibilities equivalent to those claimed by
capable human adults.
It might be the case that the machine is operating in concert with a number
of other machines or software entities. When that is the case, we simply raise the
level of abstraction to that of the group and ask the same questions of the group.
If the group is an autonomous entity, then the moral praise or blame is ascribed
at that level. We should do this in a way similar to what we do when describing
the moral agency of groups of humans acting in concert.
Intentionality
The second question addresses the ability of the machine to act “intentionally.”
Remember, we do not have to prove the robot has intentionality in the strongest
sense, as that is impossible to prove without argument for humans as well. As
long as the behavior is complex enough that one is forced to rely on standard
folk psychological notions of predisposition or intention to do good or harm,
then this is enough to answer in the affirmative to this question. If the complex
interaction of the robot’s programming and environment causes the machine to
act in a way that is morally harmful or beneficial and the actions are seemingly
deliberate and calculated, then the machine is a moral agent.
There is no requirement that the actions really are intentional in a philosophi-
cally rigorous way, nor that they are derived from a will that is free at all levels
of abstraction. All that is needed, at the level of the interaction, is that the agents
involved display a comparable level of personal intentionality and free will.
3 Meaning self-motivated, goal-driven behavior.
moral status may exceed our own. How could this happen? The philosopher Eric
Dietrich argues that as we are more and more able to mimic the human mind
computationally, we need simply forgo programming the nasty tendencies evolu-
tion has given us and instead implement “only those that tend to produce the
grandeur of humanity, [for then] we will have produced the better robots of our
nature and made the world a better place” (Dietrich 2001).
There are further extensions of this argument that are possible. Nonrobotic
systems such as software “bots” are directly implicated, as is the moral status
of corporations. It is also obvious that these arguments could be easily applied
to the questions regarding the moral status of animals and environments. As I
argued earlier, domestic and farmyard animals are the closest technology we have
to what we dream robots will be like. So these findings have real-world applica-
tions outside robotics to animal welfare and rights, but I will leave that argument
for a future paper.
Conclusions
Robots are moral agents when there is a reasonable level of abstraction under
which we must grant that the machine has autonomous intentions and respon-
sibilities. If the robot can be seen as autonomous from many points of view, then
the machine is a robust moral agent, possibly approaching or exceeding the moral
status of human beings.
Thus, it is certain that if we pursue this technology, then, in the future, highly
complex, interactive robots will be moral agents with corresponding rights and
responsibilities. Yet even the modest robots of today can be seen to be moral
agents of a sort under certain, but not all, levels of abstraction and are deserving
of moral consideration.
References
Arkin, Ronald (2007): Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, U.S. Army Research Office Technical Report GIT-GVU-07-11. Retrieved from http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.
Arkin, Ronald (2009): Governing Lethal Behavior in Autonomous Robots, Chapman & Hall/CRC.
Bringsjord, S. (2007): Ethical Robots: The Future Can Heed Us, AI and Society (online).
Dennett, Daniel (1998): When HAL Kills, Who’s to Blame? Computer Ethics, in Stork, David, HAL’s Legacy: 2001’s Computer as Dream and Reality, MIT Press.
Dietrich, Eric (2001): Homo Sapiens 2.0: Why We Should Build the Better Robots of Our Nature, Journal of Experimental and Theoretical Artificial Intelligence, Volume 13, Issue 4, 323–328.
Floridi, Luciano, and Sanders, J. W. (2004): On the Morality of Artificial Agents, Minds and Machines, 14.3, pp. 349–379.
10
Philosophical Concerns with Machine Ethics
Susan Leigh Anderson
The challenges facing those working on machine ethics can be
divided into two main categories: philosophical concerns about the feasibility
of computing ethics and challenges from the AI perspective. In the first category,
we need to ask first whether ethics is the sort of thing that can be computed. One
well-known ethical theory that supports an affirmative answer to this question is
Act Utilitarianism. According to this teleological theory (a theory that maintains
that the rightness and wrongness of actions is determined entirely by the conse-
quences of the actions), the right act is the one, of all the actions open to the agent,
which is likely to result in the greatest net good consequences, taking all those
affected by the action equally into account. Essentially, as Jeremy Bentham (1781)
long ago pointed out, the theory involves performing “moral arithmetic.”
Of course, before doing the arithmetic, one needs to know what counts as “good”
and “bad” consequences. The most popular version of Act Utilitarianism –
Hedonistic Act Utilitarianism – would have us consider the pleasure and displea-
sure that those affected by each possible action are likely to receive. As Bentham
pointed out, we would probably need some sort of scale to account for such things
as the intensity and duration of the pleasure or displeasure that each individual
affected is likely to receive. This is information that a human being would need to
have, as well, in order to follow the theory. Getting this information has been and
will continue to be a challenge for artificial intelligence research in general, but
it can be separated from the challenge of computing the ethically correct action,
given this information. With the requisite information, a machine could be devel-
oped that is just as able to follow the theory as a human being.
Hedonistic Act Utilitarianism can be implemented in a straightforward manner.
The algorithm is to compute the best action – that which derives the greatest net
pleasure – from all alternative actions. It requires as input the number of peo-
ple affected and, for each person, the intensity of the pleasure/displeasure (for
example, on a scale of 2 to –2), the duration of the pleasure/displeasure (for
example, in days), and the probability that this pleasure or displeasure will occur
for each possible action. For each person, the algorithm computes the product of
the intensity, the duration, and the probability of the pleasure or displeasure.
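The calculation just described can be sketched in a few lines of code. The function and variable names below are illustrative assumptions, not taken from the chapter; the intensity scale, duration in days, and probability inputs follow the description above.

```python
# A minimal sketch of the Hedonistic Act Utilitarian calculation described above.
# Names and data layout are illustrative, not from the chapter.

def net_pleasure(effects):
    """Sum intensity * duration * probability over every person affected.

    `effects` is a list of (intensity, duration_days, probability) tuples,
    one per affected person, with intensity on the 2 to -2 scale.
    """
    return sum(intensity * duration * probability
               for intensity, duration, probability in effects)

def best_action(alternatives):
    """Return the alternative action with the greatest projected net pleasure."""
    return max(alternatives, key=lambda name: net_pleasure(alternatives[name]))

# Toy example: two alternative actions, each affecting two people.
alternatives = {
    "keep the promise":  [(2, 3, 0.9), (-1, 1, 0.8)],
    "break the promise": [(-2, 3, 0.9), (1, 1, 0.8)],
}
print(best_action(alternatives))  # -> "keep the promise" in this toy example
```

As the passage above stresses, the arithmetic itself is trivial; the genuine challenge for artificial intelligence lies in obtaining accurate intensity, duration, and probability estimates in the first place.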