When Is a Robot a Moral Agent?

An important emotional attachment is formed between all the agents in this situ-
ation, but the attachment of the two human agents is strongest toward the dog. We
tend to speak favorably of the relationships formed with these animals using terms
identical to those used to describe healthy relationships with other humans.
The Web site for Guide Dogs for the Blind quotes the American Veterinary
Association to describe the human-animal bond:
The human-animal bond is a mutually beneficial and dynamic relationship between people
and other animals that is influenced by behaviors that are essential to the health and well-
being of both. This includes, but is not limited to, emotional, psychological, and physical
interaction of people, animals, and the environment.1
Certainly, providing guide dogs for the visually impaired is morally praisewor-
thy, but is a good guide dog morally praiseworthy in itself? I think so. There
are two sensible ways to believe this. The least controversial is to consider that
things that perform their function well have a moral value equal to the moral
value of the actions they facilitate. A more contentious claim is the argument that
animals have their own wants, desires, and states of well-being, and this auton-
omy, though not as robust as that of humans, is nonetheless advanced enough
to give the dog a claim for both moral rights and possibly some meager moral
responsibilities as well.
The question now is whether the robot is correctly seen as just another tool or as something more like the technology exemplified by the guide dog. Even at the present state of robotics technology, it is not easy to see on which side of this disjunction reality lies.
No robot in the real world – or that of the near future – is, or will be, as cognitively
robust as a guide dog. Yet even at the modest capabilities of today’s robots, some
have more in common with the guide dog than with a simple tool like a hammer.
In robotics technology, the schematic for the moral relationship between the
agents is:
Programmer(s) → Robot → User
Here the distinction between the nature of the user and that of the tool can blur so completely that, as the philosopher of technology Carl Mitcham argues, the “ontology of artifacts ultimately may not be able to be divorced from the philosophy of nature” (Mitcham 1994, p. 174), requiring us to think about technology in ways similar to how we think about nature.
I will now help clarify the moral relations between natural and artificial agents.
The first step in that process is to distinguish the various categories of robotic
technologies.
1 Retrieved from the Web site: Guide Dogs for the Blind; http://www.guidedogs.com/about-mission.html#Bond

Categories of Robotic Technologies
To make sense of the attribution of moral agency to robots, it is important to distinguish the two varieties of robotics technology that currently exist: telerobots and autonomous robots. Each of these technologies has a different relationship to moral agency.
Telerobots
Telerobots are remotely controlled machines that make only minimal autonomous decisions. This is probably the most successful branch of robotics at this time because these machines do not need complex artificial intelligence to run; the operator provides the intelligence for the machine. The famous NASA Mars Rovers are controlled in this way, as are many deep-sea exploration robots. Telerobotic surgery has become a reality, and telerobotic nursing may soon follow. These machines are now routinely used in search and rescue and play a vital role on the modern battlefield, including remotely controlled weapons platforms such as the Predator drone and other robots deployed to support infantry in bomb removal and other combat situations.
Obviously, these machines are being employed in morally charged situations,
with the relevant actors interacting in this way:
Operator → Robot → Patient/Victim
The ethical analysis of telerobots is somewhat similar to that of any technical system in which the moral praise or blame is to be borne by the designers, programmers, and users of the technology. Because humans are involved in all the major decisions that the machine makes, they also provide the moral reasoning for the machine.
There is an issue that does need to be explored further, though: the possibility that the distance from the action provided by the remote control of the robot makes certain moral decisions easier for the operator. For instance, a telerobotic weapons platform may distance its operator so far from the combat situation as to make it easier for the operator to decide to use the machine to harm others. This is an issue that I address in detail in other papers
(Sullins 2009). However, for the robot to be a moral agent, it is necessary that
the machine have a significant degree of autonomous ability to reason and act on
those reasons. So we will now look at machines that attempt to achieve just that.
Autonomous Robots
For the purposes of this paper, autonomous robots present a much more inter-
esting problem. Autonomy is a notoriously thorny philosophical subject. A full
discussion of the meaning of “autonomy” is not possible here, nor is it neces-
sary, as I will argue in a later section of this paper. I use the term “autonomous
robots” in the same way that roboticists use the term (see Arkin 2009; Lin et al. 2008), and I am not trying to make any robust claims for the autonomy of robots. Simply put, autonomous robots must be capable of making at least some of the major decisions about their actions using their own programming. These decisions may be simple and not terribly interesting philosophically, such as the decisions a robot vacuum makes while navigating the floor it is cleaning. Or they may be much more robust and require complex moral and ethical reasoning, such as when a future robotic caregiver must decide how to interact with a patient in a way that advances the interests of both the machine and the patient equitably. Or they may fall somewhere in between these exemplar cases.
The programmers of these machines are somewhat responsible for their actions, but not entirely so, much as one’s parents are a factor in, but not the exclusive cause of, one’s own moral decision making. This means that the
machine’s programmers are not to be seen as the only locus of moral agency in
robots. This leaves the robot itself as a possible location for a certain amount of
moral agency. Because moral agency is found in a web of relations, other agents
such as the programmers, builders, and marketers of the machines, other robotic
and software agents, and the users of these machines all form a community of
interaction. I am not trying to argue that robots are the only locus of moral
agency in such a community, only that in certain situations they can be seen as
fellow moral agents in that community.
The obvious objection here is that moral agents must be persons, and the
robots of today are certainly not persons. Furthermore, this technology is unlikely
to challenge our notion of personhood for some time to come. So in order to
maintain the claim that robots can be moral agents, I will now have to argue that
personhood is not required for moral agency. To achieve that end I will first look
at what others have said about this.
Philosophical Views on the Moral Agency of Robots
There are four possible views on the moral agency of robots. The first is that robots are not now moral agents but might become so in the future. Daniel Dennett supports this position and argues in his essay “When HAL Kills, Who’s to Blame?” that a machine like the fictional HAL can be considered a murderer because the machine has mens rea, or a guilty state of mind, which includes motivational states of purpose, cognitive states of belief, or a non-mental state of negligence (Dennett 1998). Yet to be morally culpable, such machines also need to have “higher order intentionality,” meaning that they can have beliefs about beliefs, desires about desires, beliefs about their fears, about their thoughts, about their hopes, and so on (1998). Dennett does not suggest that we
have machines like that today, but he sees no reason why we might not have
them in the future.
The second position one might take on this subject is that robots are incapa-
ble of becoming moral agents now or in the future. Selmer Bringsjord makes a
strong stand on this position. His dispute with this claim centers on the contention that robots will never have an autonomous will because they can never do anything that they are not programmed to do (Bringsjord 2007). Bringsjord illustrates this with an experiment using a robot named PERI, which his lab uses for experiments. PERI is programmed to make a decision either to drop a globe, which represents doing something morally bad, or to hold on to it, which represents an action that is morally good. Whether PERI holds or drops the globe is decided entirely by the program it runs, which in turn was written by human programmers. Bringsjord argues that the only way PERI can do anything that surprises its programmers is if a random factor is added to the program; but then its actions are merely determined by that random factor, not freely chosen by the machine, and therefore PERI is no moral agent (Bringsjord 2007).
There is a problem with this argument. Because we are all products of socialization, which is a kind of programming through memes, we are no better off than PERI. If Bringsjord is correct, then we are not moral agents either,
because our beliefs, goals, and desires are not strictly autonomous: They are the
products of culture, environment, education, brain chemistry, and so on. It must
be the case that the philosophical requirement for robust free will demanded by
Bringsjord, whatever that turns out to be, is a red herring when it comes to moral
agency. Robots may not have it, but we may not have it either, so I am reluctant to
place it as a necessary condition for moral agency.
A closely related position is held by Bernhard Irrgang, who claims that “[i]n order to be morally responsible, however, an act needs a participant, who is characterized by personality or subjectivity” (Irrgang 2006). Only a person can be a moral agent. Because he believes it is not possible for a robot that is not a cyborg (a human-machine hybrid) to attain subjectivity, it is impossible, on his view, for robots to be called to moral account for their behavior. Later I will argue that this requirement is too restrictive and that full subjectivity is not needed.
The third possible position is the view that we are not moral agents but robots
are. Interestingly enough, at least one person actually held this view. In a paper
written a while ago but only recently published, Joseph Emile Nadeau claims
that an action is a free action if and only if it is based on reasons fully thought
out by the agent. He further claims that only an agent that operates on a strictly
logical basis can thus be truly free (Nadeau 2006). If free will is necessary for moral agency, and we as humans have no such strictly logical apparatus operating in our brains, then by Nadeau’s logic we are not free agents. Robots, on the other hand, can be programmed this way explicitly, so if we built them, Nadeau believes they would be the first truly moral agents on earth (Nadeau 2006).2
2 One could counter this argument from a computationalist standpoint by acknowledging that it is
unlikely we have a theorem prover in our biological brain; but in the virtual machine formed by
our mind, anyone trained in logic most certainly does have a theorem prover of sorts, meaning that
there are at least some human moral agents.


The fourth stance that can be held on this issue is nicely argued by Luciano Floridi and J. W. Sanders of the Information Ethics Group at the University of Oxford (Floridi and Sanders 2004). They argue that the way around the many apparent paradoxes in moral theory is to adopt a “mind-less morality” that evades issues like free will and intentionality, because these are all unresolved issues in the philosophy of mind that are inappropriately applied to artificial agents such as robots.
They argue that we should instead see artificial entities as agents by appropriately setting levels of abstraction when analyzing them (2004). If we set the level of abstraction low enough, we cannot even ascribe agency to ourselves, because the only things an observer can see are the mechanical operations of our bodies; but at the level of abstraction common to everyday observations and judgments, this is less of an issue. If an agent’s actions are interactive and adaptive with its surroundings through state changes or programming that is still somewhat independent of the environment the agent finds itself in, then that is sufficient for the entity to have its own agency (Floridi and Sanders 2004). When these autonomous interactions pass a threshold of tolerance and cause harm, we can logically ascribe a negative moral value to them; likewise, the agents themselves can warrant a certain appropriate level of moral consideration, in much the same way that one may argue for the moral status of animals, environments, or even legal entities such as corporations (Floridi and Sanders, paraphrased in Sullins 2006).
My views build on the fourth position, and I will now argue for the moral
agency of robots, even at the humble level of autonomous robotics technology
today.
The Three Requirements of Robotic Moral Agency
In order to evaluate the moral status of any autonomous robotic technology, one
needs to ask three questions of the technology under consideration:
• Is the robot significantly autonomous?
• Is the robot’s behavior intentional?
• Is the robot in a position of responsibility?
These questions have to be viewed from a reasonable level of abstraction, but
if the answer is yes to all three, then the robot is a moral agent.
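As a rough illustration only (the predicates below are placeholder judgments that a human observer would supply at a suitable level of abstraction; nothing in this sketch is part of the argument itself), the three-part test can be expressed as a simple decision procedure:

```python
# Illustrative sketch of the three-part test; the predicate values are
# judgments an observer supplies, not something the robot computes.
from dataclasses import dataclass

@dataclass
class Assessment:
    significantly_autonomous: bool    # not under the direct control of another agent
    apparently_intentional: bool      # behavior best explained in folk-psychological terms
    position_of_responsibility: bool  # fulfills a social role carrying assumed duties

def is_moral_agent(a: Assessment) -> bool:
    """A robot counts as a moral agent only if all three requirements hold."""
    return (a.significantly_autonomous
            and a.apparently_intentional
            and a.position_of_responsibility)

# A hypothetical robotic caregiver judged at an everyday level of abstraction:
print(is_moral_agent(Assessment(True, True, True)))  # True
```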
Autonomy
The first question asks if the robot could be seen as significantly autonomous from
any programmers, operators, and users of the machine. I realize that “autonomy”
is a difficult concept to pin down philosophically. I am not suggesting that robots
of any sort will have radical autonomy; in fact, I seriously doubt human beings
have that quality. I mean to use the term autonomy as engineers do, simply that
the machine is not under the direct control of any other agent or user.
The robot must not be a telerobot or temporarily behave as one. If the robot
does have this level of autonomy, then the robot has a practical independent
agency. If this autonomous action is effective in achieving the goals and tasks of
the robot, then we can say the robot has effective autonomy. The more effective
autonomy the machine has, meaning the more adept it is in achieving its goals
and tasks, the more agency we can ascribe to it. When that agency3 causes harm or good in a moral sense, we can say the machine has moral agency.
Autonomy thus described is not sufficient in itself to ascribe moral agency.
Consequently, entities such as bacteria, animals, ecosystems, computer viruses,
simple artificial life programs, or simple autonomous robots – all of which exhibit
autonomy as I have described it – are not to be seen as responsible moral agents
simply on account of possessing this quality. They may very credibly be argued to
be agents worthy of moral consideration, but if they lack the other two require-
ments argued for next, they are not robust moral agents for whom we can plau-
sibly demand moral rights and responsibilities equivalent to those claimed by
capable human adults.
It might be the case that the machine is operating in concert with a number
of other machines or software entities. When that is the case, we simply raise the
level of abstraction to that of the group and ask the same questions of the group.
If the group is an autonomous entity, then the moral praise or blame is ascribed
at that level. We should do this in a way similar to what we do when describing
the moral agency of groups of humans acting in concert.
Intentionality
The second question addresses the ability of the machine to act “intentionally.”
Remember, we do not have to prove the robot has intentionality in the strongest
sense, as that is impossible to prove without argument for humans as well. As
long as the behavior is complex enough that one is forced to rely on standard
folk psychological notions of predisposition or intention to do good or harm,
then this is enough to answer in the affirmative to this question. If the complex
interaction of the robot’s programming and environment causes the machine to
act in a way that is morally harmful or beneficial and the actions are seemingly
deliberate and calculated, then the machine is a moral agent.
There is no requirement that the actions really are intentional in a philosophi-
cally rigorous way, nor that the actions are derived from a will that is free on all
levels of abstraction. All that is needed, at the level of the interaction, is a comparable degree of personal intentionality and free will among the agents involved.
3 Meaning self-motivated, goal-driven behavior.


Responsibility
Finally, we can ascribe moral agency to a robot when the robot behaves in such a
way that we can only make sense of that behavior by assuming it has a responsibil-
ity to some other moral agent(s).
If the robot behaves in this way, and if it fulfills some social role that carries
with it some assumed responsibilities, and if the only way we can make sense
of its behavior is to ascribe to it the “belief” that it has the duty to care for its
patients, then we can ascribe to this machine the status of a moral agent.
Again, the beliefs do not have to be real beliefs; they can be merely apparent.
The machine may have no claim to consciousness, for instance, or a soul, a mind,
or any of the other somewhat philosophically dubious entities we ascribe to
human specialness. These beliefs, or programs, just have to be motivational in
solving moral questions and conundrums faced by the machine.
For example, robotic caregivers are being designed to assist in the care of the
elderly. Certainly a human nurse is a moral agent. When and if a machine carries
out those same duties, it will be a moral agent if it is autonomous as described
earlier, if it behaves in an intentional way, and if its programming is complex
enough that it understands its responsibility for the health of the patient(s) under
its direct care.
This would be quite a machine and not something that is currently on offer.
Any machine with less capability would not be a full moral agent. Although it may still have autonomous agency and intentionality, and although these qualities would make it deserving of moral consideration (meaning that one would have to have a good reason to destroy it or inhibit its actions), we would not be required to treat it as a moral equal, and the humans who employ these less-capable machines should avoid treating them as if they were fully moral agents.
Some critics have argued that my position “unnecessarily complicates the issue
of responsibility assignment for immoral actions” (Arkin 2007, p. 10). However,
I would counter that although it will be some time before we meet mechanical entities that we recognize as moral equals, we have to pay careful attention to how these machines are evolving and grant that status the moment it is deserved. Long before that day, though, complex robot agents will be partially
capable of making autonomous moral decisions. These machines will present
vexing problems, especially when machines are used in police work and warfare,
where they will have to make decisions that could result in tragedies. Here, we
will have to treat the machines the way we might do for trained animals such as
guard dogs. The decision to own and operate them is the most significant moral
question, and the majority of the praise or blame for the actions of such machines
belongs to the owners and operators of these robots.
Conversely, it is logically possible, though not probable in the near term, that robotic moral agents may be more autonomous, have clearer intentions, and possess a more nuanced sense of responsibility than most human agents. In that case, their
moral status may exceed our own. How could this happen? The philosopher Eric
Dietrich argues that as we are more and more able to mimic the human mind
computationally, we need simply to forgo programming the nasty tendencies evolution has given us and instead implement “only those that tend to produce the
grandeur of humanity, [for then] we will have produced the better robots of our
nature and made the world a better place” (Dietrich 2001).
Further extensions of this argument are possible. Nonrobotic
systems such as software “bots” are directly implicated, as is the moral status
of corporations. It is also obvious that these arguments could be easily applied
to the questions regarding the moral status of animals and environments. As I
argued earlier, domestic and farmyard animals are the closest technology we have
to what we dream robots will be like. So these findings have real-world applica-
tions outside robotics to animal welfare and rights, but I will leave that argument
for a future paper.
Conclusions
Robots are moral agents when there is a reasonable level of abstraction under
which we must grant that the machine has autonomous intentions and respon-
sibilities. If the robot can be seen as autonomous from many points of view, then
the machine is a robust moral agent, possibly approaching or exceeding the moral
status of human beings.
Thus, it is certain that if we pursue this technology, then, in the future, highly
complex, interactive robots will be moral agents with corresponding rights and
responsibilities. Yet even the modest robots of today can be seen to be moral
agents of a sort under certain, but not all, levels of abstraction and are deserving
of moral consideration.
References
Arkin, Ronald (2007): Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, U.S. Army Research Office Technical Report GIT-GVU-07-11. Retrieved from: http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.
Arkin, Ronald (2009): Governing Lethal Behavior in Autonomous Robots, Chapman & Hall/
CRC.
Bringsjord, S. (2007): Ethical Robots: The Future Can Heed Us, AI and Society (online).
Dennett, Daniel (1998): When HAL Kills, Who’s to Blame? Computer Ethics, in Stork,
David, HAL’s Legacy: 2001’s Computer as Dream and Reality, MIT Press.
Dietrich, Eric (2001): Homo Sapiens 2.0: Why We Should Build the Better Robots of Our
Nature, Journal of Experimental and Theoretical Artificial Intelligence, Volume 13, Issue
4, 323–328.
Floridi, Luciano, and Sanders, J. W. (2004): On the Morality of Artificial Agents, Minds
and Machines, 14.3, pp. 349–379.
Irrgang, Bernhard (2006): Ethical Acts in Robotics. Ubiquity, Volume 7, Issue 34
(September 5, 2006–September 11, 2006) www.acm.org/ubiquity.
Lin, Patrick, Bekey, George, and Abney, Keith (2008): Autonomous Military Robotics: Risk, Ethics, and Design, US Department of Navy, Office of Naval Research. Retrieved online: http://ethics.calpoly.edu/ONR_report.pdf.
Mitcham, Carl (1994): Thinking through Technology: The Path between Engineering and
Philosophy, University of Chicago Press.
Nadeau, Joseph Emile (2006): Only Androids Can Be Ethical, in Ford, Kenneth, and
Glymour, Clark, eds., Thinking about Android Epistemology, MIT Press, 241–248.
Sullins, John (2005): Ethics and Artificial Life: From Modeling to Moral Agents, Ethics
and Information Technology, 7:139–148.
Sullins, John (2009): Telerobotic Weapons Systems and the Ethical Conduct of War, American Philosophical Association Newsletter on Philosophy and Computers, Volume 8, Issue 2, Spring 2009. http://www.apaonline.org/documents/publications/v08n2_Computers.pdf.

Philosophical Concerns with Machine Ethics
Susan Leigh Anderson
The challenges facing those working on machine ethics can be
divided into two main categories: philosophical concerns about the feasibility
of computing ethics and challenges from the AI perspective. In the first category,
we need to ask first whether ethics is the sort of thing that can be computed. One
well-known ethical theory that supports an affirmative answer to this question is
Act Utilitarianism. According to this teleological theory (a theory that maintains
that the rightness and wrongness of actions is determined entirely by the conse-
quences of the actions), the right act is the one, of all the actions open to the agent,
which is likely to result in the greatest net good consequences, taking all those
affected by the action equally into account. Essentially, as Jeremy Bentham (1781)
long ago pointed out, the theory involves performing “moral arithmetic.”
Of course, before doing the arithmetic, one needs to know what counts as “good”
and “bad” consequences. The most popular version of Act Utilitarianism –
Hedonistic Act Utilitarianism – would have us consider the pleasure and displea-
sure that those affected by each possible action are likely to receive. As Bentham
pointed out, we would probably need some sort of scale to account for such things
as the intensity and duration of the pleasure or displeasure that each individual
affected is likely to receive. This is information that a human being would need to
have, as well, in order to follow the theory. Getting this information has been and
will continue to be a challenge for artificial intelligence research in general, but
it can be separated from the challenge of computing the ethically correct action,
given this information. With the requisite information, a machine could be devel-
oped that is just as able to follow the theory as a human being.
Hedonistic Act Utilitarianism can be implemented in a straightforward manner.
The algorithm is to compute the best action – that which derives the greatest net
pleasure – from all alternative actions. It requires as input the number of peo-
ple affected and, for each person, the intensity of the pleasure/displeasure (for
example, on a scale of 2 to –2), the duration of the pleasure/displeasure (for
example, in days), and the probability that this pleasure or displeasure will occur
for each possible action. For each person, the algorithm computes the product of the intensity, the duration, and the probability of the pleasure or displeasure; these per-person results are then summed, and the action with the greatest total net pleasure is judged the right one.
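As a simple illustration (the action names, scores, and function names below are hypothetical, chosen only to make the arithmetic concrete), the computation just described might be sketched as follows:

```python
# A minimal sketch of the "moral arithmetic" described above: for each
# affected person we multiply intensity (on the assumed -2..2 scale),
# duration (in days), and probability, sum these products across everyone
# affected, and pick the action with the greatest total net pleasure.
from typing import Dict, List, Tuple

# (intensity, duration_days, probability) for one affected person
Effect = Tuple[float, float, float]

def total_net_pleasure(effects: List[Effect]) -> float:
    return sum(intensity * duration * probability
               for intensity, duration, probability in effects)

def best_action(actions: Dict[str, List[Effect]]) -> str:
    """Return the alternative with the greatest total net pleasure."""
    return max(actions, key=lambda name: total_net_pleasure(actions[name]))

# Hypothetical example with two alternatives, each affecting two people.
alternatives = {
    "tell the truth": [(1, 2, 0.9), (-2, 1, 0.5)],   # total = 0.8
    "stay silent":    [(-1, 1, 0.8), (1, 3, 0.4)],   # total = 0.4
}
print(best_action(alternatives))  # "tell the truth"
```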
