
4

Moral Emotions∗
JESSE J. PRINZ AND SHAUN NICHOLS
∗ We are grateful to John Doris and Dan Kelly for comments on a previous draft of this chapter.

Within Western philosophy, it’s widely recognized that emotions play a role in
moral judgment and moral motivation. Emotions help people see that certain
actions are morally wrong, for example, and they motivate behavioral responses
when such actions are identified. Philosophers who embrace such views often
present them at a high level of abstraction and without empirical support.
Emotions motivate, these philosophers say, but little is said about how they motivate or
why (e.g. Ayer, 1936; McDowell, 1985; Dreier, 1990). We think that empirical
moral psychology is essential for moving from such imprecise formulations to more
detailed explanations.
We shall address several questions. First, we shall survey empirical evidence
supporting the conjecture that emotions regularly occur when people make
moral judgments. We shall describe several models of moral judgment that are
consistent with these data, but we won’t adjudicate which is right, since all these
models agree that emotions are important for moral psychology. Then we shall
turn to the question of which emotions are involved in moral judgment, and offer
some suggestions about how to distinguish moral emotions from non-moral
emotions. And, finally, we shall discuss two moral emotions that are central
to moral thought and action: anger and guilt. We shall discuss other moral
emotions in passing, but anger and guilt are arguably the most prevalent, at least
in Western morality, and a careful analysis of how they function will help to
underscore why they are so important. Indeed, we shall conclude by suggesting
that these moral emotions may be necessary for morality in some sense.

1. The Role of Emotions in Moral Cognition


Almost all major ethical theories in Western philosophy implicate the emotions
in one way or another. On some of these theories, emotions are essential to
morality, and on others they are not. But even those authors who deny that
emotions are essential usually find a place for them in moral psychology. This
is true even for Kant, who is notorious for arguing that morality depends on
reason rather than sentiment. In Kant’s (1997: 43) system, reason tells us to
follow the moral law, but acting from the moral law begins with respect
for the law, which is constituted by respect for persons, which is a natural
consequence of recognizing the dignity of each person as a law-governed
agent. In addition to respect, Kant (1996: 160) claims that moral judgments are
accompanied by moral feelings. It is difficult to find a philosopher who does
not think emotions are important to morality.
Despite this consensus, there is considerable disagreement about the exact
role that emotions are supposed to play. In addition, even where moral
philosophers have invoked emotions, they seldom attend carefully to the
psychological characteristics of the emotions to which they appeal. Indeed,
it would be hard to exaggerate the extent to which philosophers, even self-
described sentimentalists, have neglected psychological research on the moral
emotions. This chapter is intended as a partial corrective.
We shall begin by considering two ways in which emotions have been alleged
to contribute to morality. Then we shall go on to discuss how empirical work
on specific emotions might be used to characterize these contributions with
greater detail and accuracy.

1.1. Motivation
It is widely assumed that emotions play a role in moral motivation. This
is hardly surprising, because emotions are thought to be major sources of
motivation in general. Some philosophers conjecture that reasons can be
motivating in the absence of emotions, but even they would likely admit that
emotions can motivate when present.
There are two contexts in which the link between emotions and moral
motivation is often explicitly discussed. The first is in discussions of prosocial
behaviors. In the social sciences, several lines of research explore that link. In
developmental psychology, there is research on correlations between emotions
and good deeds. Children are more likely to engage in various praiseworthy
behaviors if they have certain emotional responses (Chapman et al., 1987;
Eisenberg, 2000). For example, when children show concern in response
to the distress of others, they are likely to engage in consolation behaviors
(Zahn-Waxler et al., 1992). Psychologists have also investigated the relationship
between emotions and good deeds in adults. For example, Isen and Levin (1972)
show that induction of positive affect dramatically increases the likelihood that
  113

a person will help a stranger pick up some papers that have fallen on the street.
There is also work looking at the role of emotion in promoting cooperative
behavior in prisoner’s dilemmas and other economic games (Batson & Moran,
1999). Positive emotions and affiliative feelings may promote cooperation. Of
course, cooperative behavior might be motivated by self-interested emotions,
such as greed. One topic of current debate is whether genuine altruism must be
motivated by certain emotional states, such as sympathy, concern, empathy, or
care (see Chapter 5; and, for a less optimistic view, see also Prinz, forthcoming).
In all these examples, emotions play a role in promoting prosocial behavior.
But it is important to notice that one can engage in prosocial behavior without
judging that it is good. This is true even in many cases of altruistic motivation.
You might help someone who is drowning, for example, without first thinking
to yourself that doing so would be what morality demands. The term ‘‘moral
motivation’’ is ambiguous between motivation to act in a way that (as a
matter of fact) fits with the demands of morality and motivation to act
because one judges that morality demands such action. Research on
altruism, cooperation, and prosocial behavior in children typically falls in the
former category. But helping because you care is different, psychologically
speaking, from helping because you think it is what morality demands. Both
forms of motivation should be distinguished, and both can be empirically
investigated.
Like altruistic motivation, the motivation to do something because it is what
morality demands is also widely thought to involve emotions. Doing something
because it is demanded by morality requires making a moral judgment. A moral
judgment is a judgment that something has moral significance. In expressing
moral judgments we use terms such as right and wrong, good and bad, just and
unjust, virtuous and base. Typically, when people judge that something has
moral significance they become motivated to behave in accordance with those
judgments. Emotions are widely believed to contribute to that motivation.
This conjecture has been widely discussed within philosophy. There is a
raging debate about whether moral judgments are intrinsically motivating (see
Frankena, 1973; Brink, 1989; Smith, 1994). Motivation internalists claim that
we cannot make moral judgments without thereby being motivated to act in
accordance with them. To make this case, they sometimes suppose that moral
judgments are constituted, at least in part, by emotions. Judgments are mental
states, and to say that they are partially constituted by emotions is to say that the
mental state of judging, for example, that killing is immoral is constituted by a
mental representation of killing along with an emotional state directed toward
that represented act. Externalists argue that judgments can be made without
motivation, but they often agree with internalists that, when motivation
accompanies a judgment, it derives from an emotional state. In other words,
internalists and externalists usually agree that emotions contribute to moral
motivation for those individuals who are motivated to act in accordance with
their moral judgments.
In sum, emotions motivate or impel us to act morally, and they can do
so in the absence of a moral judgment or as a consequence of a moral
judgment. But this leaves open numerous questions. When emotions motivate
as a consequence of moral judgments, it may be because they are somehow
contained in those judgments or because they are elicited by those judgments
(a dispute that won’t concern us here). In addition, the emotions involved in
prosocial behavior may vary. Buying Girl Scout cookies to support the Scouts
may involve different emotions than saving someone who is drowning or
attending a rally to protest government injustice. We shall not discuss this issue
here. Instead we shall focus on the emotions that arise when people make moral
judgments, and, as we shall see in Section 2, these too may vary significantly.

1.2. Moral Judgment


In recent years, empirical research has been used to demonstrate that emotions
arise in the context of moral judgment. This may seem obvious, but empirical
work is needed to confirm casual observation and to clarify the specific roles
that emotions play.
Some research suggests that emotions can serve as moral ‘‘intuitions.’’ The
basic idea is that, under some circumstances, we do not come to believe that
something is, say, morally wrong by reasoning about it. Rather, we have an
emotion, and the emotion leads us to judge that it is wrong. This model has
been gaining momentum in psychology. One version of it has been developed
by Jonathan Haidt (2001). He and his collaborators have shown that one can
influence moral judgments by manipulating emotions. In one study, Schnall
et al. (2008) placed subjects at either a clean desk or a filthy desk and had them
fill out questionnaires in which they were asked to rate the wrongness of various
actions, such as eating your pet dog for dinner after killing it accidentally.
Subjects at the filthy desk gave higher wrongness ratings than subjects at the
clean desk. The same effect was obtained using disgusting films and ‘‘fart spray.’’
In another study, Wheatley and Haidt (2005) used hypnosis to cause subjects
to feel disgusted whenever they heard a neutral word, such as ‘‘often.’’ Then
they asked subjects to morally assess the actions of various characters. Subjects
gave more negative ratings when the disgust-eliciting word was included in the
description of those actions. They even gave more negative moral appraisals
when the action was neutral. For example, disgust-induced subjects gave more
  115

negative appraisals to a student body representative after reading that he ‘‘often’’
invites interesting speakers to campus. Wheatley and Haidt conclude that the
pang of disgust causes subjects to make a negative moral judgment. Of course,
in ordinary life, our emotions are not usually induced hypnotically, and they
may lead to more consistent and defensible intuitions. If you see someone kick
a dog, for example, that might disgust you, and the feeling of disgust might
lead you to say the perpetrator has done something terribly wrong.
Haidt thinks that emotional intuitions are the basis of most moral judgments.
Others argue that emotions play a role only when making specific kinds of
moral judgments, but not others. For example, Joshua Greene (2008) argues
that there are two ways people arrive at moral decisions: through reason and
through passion. He argues further that people who apply consequentialist
principles may be guided by cool reason, and people who make deontological
judgments may use their emotions. To test this, he and his collaborators present
subjects in a functional magnetic resonance imaging (fMRI) scanner with
moral dilemmas that require killing one person to save several others. In some
dilemmas people tend to make judgments that conform to ‘‘deontological’’
prohibitions (don’t kill, regardless of the consequences), while, in others,
people make consequentialist judgments (do whatever maximizes the number
of lives saved). When brain activity is examined, the deontological judgments
are associated with stronger activation in areas associated with emotion.
The results of Greene et al. invite the view that consequentialist judgments
may not involve emotion at all, but their data show only that emotion-related
areas are less active and working-memory areas more active. It is possible, therefore,
that these are cases where the emotional intuition is overridden by reasoning.
Alternatively, the two cases may involve different kinds of emotions: a negative
feeling associated with transgression may guide deontological judgments, while
a positive feeling that motivates helping may guide consequentialist judgments.
Indeed, other studies of moral judgment using fMRI have consistently shown
emotional activation. Moll and colleagues have shown that emotions are active
when we judge sentences to express something morally wrong and when we
look at pictures of moral transgressions (Moll et al., 2002, 2003). Heekeren et al.
(2003) found evidence of emotional activations as people assessed sentences
for moral, as opposed to semantic, correctness. Sanfey et al. (2003) found a
similar pattern when people made judgments of unfairness during economic
games. Just about every fMRI study of moral judgment has shown emotional
activations. Collectively such results show that emotions arise in making moral
judgments at least some, and perhaps even all, of the time.
The data we have been reviewing strongly suggest that emotions are regularly
and reliably active during episodes of moral cognition. Emotions seem to arise
no matter what the moral task, and induction of emotions prior to moral
evaluation can influence responses, suggesting that the link between emotion
and moral cognition is not merely correlational or epiphenomenal.
Still, the data we have reviewed are consistent with several different models
of how emotions and moral judgment relate. We shall briefly describe these
here, though we shall not adjudicate. We think that on any of these models
the study of moral emotions will be of considerable importance.
The first model, which we shall call affective rationalism, combines what
we shall call judgment rationalism with motivation emotionism. As we shall
define these terms, judgment rationalism combines the following two views:
• Rational genesis: the judgment that something is moral or immoral is
typically arrived at through a process of reasoning that can occur without
emotions.
• Rational essence: when emotions arise in the context of moral judgment,
they are contingent in the sense that a token of the very same judgment
could have occurred in the absence of those emotions.
The affective rationalist accepts this, but also adds:
• Emotional motivation: emotions play a central and reliable role in
motivating people to act in accordance with their moral judgments.
For the affective rationalist, we need only reason to recognize the moral law,
but we may need emotion to care about the moral law. Since most of us care,
emotions normally arise when we make moral judgments. Affective rationalists
must also admit that emotions can influence moral judgments (as the data
clearly demonstrate), but they chalk this up to a more general phenomenon of
emotional influence: emotions can have an impact on judgment quite generally
and, when this happens, it is usually a form of bias or noise that we should try
to guard against.
The second model is Haidt’s social intuitionism. According to Haidt, we do
not typically arrive at moral judgments through reasoning, so he rejects rational
genesis. Rather, we typically arrive at moral judgments through ‘‘intuitions’’
which are gut feelings that lead us to conclude that something is right or wrong:
• Emotional genesis: the judgment that something is moral or immoral is
typically arrived at as a consequence of emotional feelings.
Haidt probably also accepts emotional motivation, but his stance on rational
essence is a bit hard to determine. He sometimes implies that a moral
judgment is a cognitive state that could, in principle, arise without emotions,
even if this happens only rarely in practice.
  117

The next model we will consider is Nichols’s (2004) ‘‘sentimental rules’’
theory. Nichols argues that healthy moral judgment involves two different
components: a normative theory and systems of emotion. The normative theory
specifies rules and the emotions alter their character and give them motivational
force. Nichols thus endorses emotional motivation. He also says that moral
judgments can arise through the influence of either affect or reasoning, and
characteristically involve an element of both; so he accepts both emotional and
rational genesis. The most distinctive feature of the sentimental rules theory is
the rejection of the rational essence principle. Nichols argues that moral rules
have a characteristic functional role, which includes being regarded as very
serious and authority independent, and being justified by appeal to the suffering
of victims. These features usually result from emotional responses. The concern
we have for victims infuses our judgments with feelings, which influences both
our justificatory patterns and our sense that moral rules are serious. Emotions
also lead us to feel that moral judgments are, in some sense, independent of
authority. Even if authorities approve of cruelty, for example, it still just feels
wrong. Thus, for Nichols, the characteristic role of moral judgments depends
crucially on emotions. One can make a moral judgment without emotions,
but it would be a moral judgment of an unusual and perhaps pathological kind,
because it would not have the characteristic functional role. Thus Nichols
would reject rational essence as formulated above. In its place, he would say:
• Emotional essence: when emotions arise in the context of moral judgment,
they are necessary in the sense that a token of the very same judgment
would not have occurred in the absence of those emotions.
Nichols is not saying that moral judgments are impossible without emotions,
just that these would be abnormal. The normative theory component can
function when emotions are absent. In this respect, his view about the role
of emotions can be regarded as a causal model, meaning that moral judgments
can occur with emotions and, when this happens, those emotions influence
the role of the judgment in a way that significantly affects its functional profile,
making it a judgment of a very distinctive kind (an emotion-backed judgment).
This idea can be captured by an elaboration of emotional essence:
• Emotional essence (causal): when emotions arise in the context of moral
judgment, they are necessary in the sense that they causally influence the
functional role of those judgments such that they become judgments of a
different type than they would be were the emotions absent.
The final model we shall consider is the ‘‘emotional constitution model’’,
one version of which can be found in the ‘‘constructive sentimentalism’’ of
Prinz (2007). On this account, token instances of moral concepts, such as
wrong and right, literally contain emotions in the way that tokens of the
concept funny might contain amusement. When one applies these concepts,
one is having an emotional response. One judges that a joke is funny by
being amused, and one judges that something is immoral by being outraged.
Thus the emotional constitution model embraces emotional essence but in a
somewhat different way than the sentimental rules model. To wit:
• Emotional essence (constitution): emotions are necessary to moral
judgments because they are essential parts: moral judgments are psychological
states that include emotional states as parts, and a judgment would not
qualify as moral if these emotional states were absent.
We think that empirical research can be used to adjudicate between these
models. Some of the research already mentioned may even be helpful in this
regard, but most of these findings really leave all the models as open possibilities.
Our goal here is not to advocate one or another. Rather, we want to point
out that they all share a crucial feature. They all say that emotions generally
arise in the context of making moral judgments. Therefore defenders of all the
models should welcome research on moral emotions. They can find common
ground in investigating which emotions arise in the context of moral judgment,
what information those emotions convey, and how those emotions influence
behavior. Answering these questions in detail is crucial for a fully developed
account of how emotions contribute to moral judgment. These details can
be illuminated empirically, and we shall analyze two specific moral emotions
below. First, we need to gain a bit more clarity on what moral emotions are.

2. The Moral Emotions


Any complete account of the relationship between emotions and morality must
say something about the nature of emotions and the nature of moral emotions
in particular. This is not the place to survey theories of emotion, much less to
present evidence in favor of any one theory (see Prinz, 2004). But we offer a
few remarks.
Theories about the nature of emotions divide into two classes. Some
researchers claim that emotions are essentially cognitive. On these views, every
emotion necessarily has a cognitive component. That cognitive component is
usually presumed to be an evaluative judgment or ‘‘appraisal’’ (e.g. Lazarus,
1991; Scherer, 1997). ‘‘Sadness,’’ for example, might comprise the judgment
that there has been an irrevocable loss, together with a distinctive feeling and
  119

motivational state. In Lazarus’s terminology, each emotion has a core relational
theme: a relation between organism and environment that occasions the onset
of the emotion. When we judge that there has been a loss, we become sad;
when we judge that we are in danger, we become afraid; and when we judge
that our goals have been fulfilled, we become happy.
Other researchers argue that emotions can occur without cognitive
components. The feeling of sadness might occur, they claim, without any explicit
judgment to the effect that there has been a loss. Defenders of such non-
cognitive theories agree with defenders of cognitive theories that each emotion
has a specific range of eliciting conditions. They agree that sadness occurs in
contexts of loss. They simply deny that one must judge that there has been a
loss (Prinz, 2004). Perhaps it is enough merely to see a loved one depart on a
train, or grow distant in demeanor. On the non-cognitive view, each emotion
has a core relational theme, but the relevant relations are sometimes registered
without judgment. A sudden loss of support can directly trigger fear, and a
noxious smell can instill disgust.
We do not wish to resolve this debate here. Both sides agree that each
emotion has a characteristic class of eliciting conditions, and that these condi-
tions must ordinarily be recognized or internally represented for an emotion
to occur. For a cognitive theorist, recognition takes the form of an explicit
appraisal (e.g. a judgment that there has been a loss). For the non-cognitive
theories, recognition will often consist in a representation of an event that
happens to be an instance of a particular category without explicit recognition
of its membership in that category (e.g. a perception of a loved one leaving);
the emotional feeling itself constitutes the appraisal (e.g. I feel sad when you
depart). For both theories, emotions are elicited by the same kinds of events
(e.g. sadness is elicited by loss). To remain neutral about this, we shall use the
term ‘‘elicitor’’ when talking about the events that serve to elicit emotional
responses.
Let’s turn now to a more important question for this chapter: what makes an
emotion qualify as moral? One possibility is that there is no distinction between
moral and non-moral emotions, either because all emotions are intrinsically
moral (that seems unlikely—consider fear of heights), or because no emotions
are moral. On this latter view, it may happen that emotions can occur in the
context of moral judgments, but the emotions that do so are not special in
any way; perhaps any emotion could participate in moral cognition. Another
possibility is that moral emotions contain moral judgments. This could be
the case only if cognitive theories of emotion were correct. A further pair of
possibilities is that moral emotions are emotions that are either constitutive of
moral judgments or causally related to moral judgments in a special way (they
reliably give rise to such judgments or are caused by such judgments). Finally,
moral emotions might be defined as emotions that promote moral behaviors.
Some of these definitions are compatible. One could define one class of moral
emotions as the emotions that promote moral behaviors, and another class as
the ones that are either constitutively or causally related to moral judgments.
Notice that these ways of defining moral emotions will work only if we
can find some way of defining morality, because the definition builds in the
concept moral. In philosophy, morality is concerned with norms, where norms
are construed as rules about how we should act or what kind of people we
should be. But this is, at best, a necessary condition for qualifying as moral.
Not all norms have moral content. There are norms of etiquette, for example,
and norms of chess playing (Foot, 1972). Psychologists try to narrow down
the moral domain by drawing a distinction between moral and conventional
rules (Turiel, 1983; Smetana, 1993; Nucci, 2001). If we can define morality
by appeal to moral, as opposed to conventional, rules, then we can define
moral emotions as those that promote behavior that accords with moral rules
or those that play a causal or constitutive role in mentally representing such
rules. Of course, this will avoid circularity only if we can cash out the concept
of moral rules without using the concept moral. On the standard approach
in psychology, moral rules are operationalized as being distinctive in three
ways. We have already alluded to this operationalization in sketching Nichols’s
sentimental rules model above. In comparison to conventional rules, moral
rules are regarded as more serious, less dependent on authority, and more
likely to be justified with reference to empathy and the suffering of others
(Turiel, 1983). The attempt to use these features to define the notion of
morality has been challenged (Shweder et al., 1987; Kelly et al., 2007). For
example, there may be moral rules that are authority contingent and don’t
involve suffering (e.g. rules prohibiting consensual incest with birth control),
and conventional rules that are very serious (e.g. driving on the right-hand side
of the road). Moreover, researchers persuaded by moral relativism believe that
all rules may depend on convention (Prinz, 2007). We find these objections
compelling. Nevertheless, we think the evidence from this tradition is getting
at important differences in the way people think about different classes of
violations. Many people seem to have strong intuitions about whether certain
rules (say, prohibitions against joy killing) are moral or merely conventional.
We think such intuitions must ultimately be explained.
Here we mention two ways of trying to make sense of people’s intuitions
about a moral/conventional distinction. One option is to define moral rules
negatively as rules that are not believed to depend on any specific social
conventions. The idea is that, when people construe a norm as conventional,
  121

they know that it depends on the opinions or practices of others. With moral
rules, people presume this is not the case. This straightforward suggestion may
be workable, but there are a few concerns. One is that some moral rules may be
construed as depending on others, such as the rule that one should follow the
ways of one’s elders or the rule that one should obey dietary customs. These
rules are sometimes construed morally in some cultures. Another concern is that
this formulation implies that a rule’s status as moral or conventional depends
on how we think about its ultimate source or justification. This is intellectually
demanding. It’s not implausible that we acquire both moral and conventional
rules before having any beliefs about how they are justified or where they come
from. Young children may be sensitive to the moral/conventional distinction
without having a concept of social conventions. A third concern is that there
may be non-moral norms that do not depend on conventions, such as personal
norms or prudential norms. A fourth concern is that, if relativism is right,
moral rules may depend on conventions, but people who embrace relativism
need not deny that moral rules exist.
Another strategy for defining moral norms makes reference to the emotions.
There is evidence that moral norms are associated with different emotions than
conventional norms (Arsenio & Ford, 1985; Tangney et al., 1996; Takahashi
et al., 2004; see also Grusec & Goodnow, 1994, for indirect evidence). If one
violates a rule of etiquette, one might feel embarrassed, but one is unlikely to
feel guilty or ashamed. On this approach, moral norms are defined as norms
whose violation tends to elicit moral emotions. This definition overcomes
many of the worries we have been discussing, but it is problematic in the
present context. We were trying to define moral emotions in terms of moral
norms, and now we are defining moral norms in terms of moral emotions. One
could break out of this circle by simply listing the moral emotions. Moral norms could
be defined as norms that implicate the emotions on that list. On this view,
moral norms are defined in terms of moral emotions, and moral emotions are
simply stipulated.
Perhaps one of these approaches to the moral/conventional distinction can
be made to work. Perhaps other options are available. We shall not take a
stand. When we invoke the moral/conventional distinction in what follows,
we shall try to rely on uncontroversial examples of each (e.g. the assassination
of Martin Luther King was morally wrong, and eating with a hat on is
merely conventionally wrong). We’ll rely on these folk intuitions, which are
confirmed, at least among North American respondents, in the literature on
moral development. The fact that people have strong intuitions about whether
certain rules are moral or conventional suggests that there is some psychological
basis for the distinction.
Until there is an uncontroversial definition of what makes a norm count as
moral, it may be easiest to define moral emotions as those that are associated
with paradigm cases of moral rules (e.g. emotions that arise in the context
of killing, stealing, and giving to charity). Following other authors, we find
it useful to distinguish several different families (see, e.g., Ben-Ze’ev, 2000;
Haidt, 2003). First, there are prosocial emotions, which promote morally good
behavior by orienting us to the needs of others. These include empathy,
sympathy, concern, and compassion. It’s possible that empathy and sympathy
are not emotions in their own right, but rather capacities to experience other
people’s emotions vicariously. Second, there are self-blame emotions, such as
guilt and shame. Third, there are other-blame emotions, such as contempt,
anger, and disgust.
One might wonder why there are so many different moral emotions. Why
isn’t there just a single emotion of disapprobation, for example, rather than
multiple ways of experiencing blame? The obvious answer is that each moral
emotion has a different functional role. This, we think, is an important feature
of moral emotions, which has been underappreciated in philosophy. Consider
the emotions of other-blame: contempt, anger, and disgust. Rozin et al.
(1999) have argued that these three emotions play different roles corresponding
to three different kinds of moral norms. They build on the work of the
anthropologist Richard Shweder (e.g. Shweder et al., 1999), who argues that
three different kinds of moral rules, or ‘‘ethics,’’ crop up cross-culturally. Rozin
et al. map the emotions of other-blame onto Shweder’s taxonomy, by asking
subjects in Japan and the United States to indicate which of these emotions
they would likely feel in a range of circumstances corresponding to Shweder’s
three kinds of norms. Here is what they find. Contempt arises when people
violate community norms, such as norms pertaining to public goods or social
hierarchies. Anger arises when people violate autonomy norms, which are norms
prohibiting harms against persons. Disgust arises, in non-secular societies, when
people violate divinity norms, which require that people remain pure; in secular
societies, people also have norms about purity, but they construe violations
of these norms as crimes against nature, rather than as crimes against gods.
Coincidentally, the research reveals pairings of contempt and community, anger
and autonomy, and disgust and divinity. Therefore, Rozin et al. call this the
CAD model.
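
The pairings can be summarized schematically. The following sketch is our own illustration (not code or data from Rozin et al.); it simply encodes the mapping just described.

```python
# Schematic summary of the CAD pairings reported by Rozin et al. (1999):
# each of Shweder's three kinds of norms is matched with an other-blame emotion.
CAD_MODEL = {
    "community": "contempt",  # violations of norms about public goods and social hierarchies
    "autonomy": "anger",      # violations of norms prohibiting harms against persons
    "divinity": "disgust",    # violations of purity norms ("crimes against nature" in secular societies)
}

def predicted_other_blame_emotion(norm_type):
    """Return the other-blame emotion the CAD model pairs with a given norm type."""
    return CAD_MODEL[norm_type]

print(predicted_other_blame_emotion("autonomy"))  # -> anger
```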
In addition to prosocial emotions, self-blame emotions, and other-blame
emotions, there may be a variety of other emotions that have moral significance
even though they have not been extensively studied in that context. There
are emotions such as self-righteousness, gratitude, admiration, and elevation,
which may serve as rewards for good behavior. There are emotions such
  123

as loyalty, affection, and love, which create and promote morally relevant
commitments (for a useful survey, see Fessler & Haley, 2003 and Haidt, 2003).
These are clearly worthy of further investigation.
We cannot possibly review the research on each moral emotion here. We
leave aside prosocial emotions (but see Preston & de Waal, 2002 and Hoffman,
2000). We focus instead on emotions associated with blame, and, more
specifically, on anger and guilt. These illustrate the other-blame and self-
blame categories respectively, and have been extensively studied by researchers
interested in morality. We choose anger and guilt because they are associated
with violations of autonomy norms, which are perhaps the most familiar
kinds of moral norms. Autonomy norms are ones that regulate how we
treat individual persons, and canonical violations of autonomy norms involve
harming another person. Sometimes the person is physically hurt; sometimes
property is taken. In other cases, nothing is taken, but the person doesn’t get
what they deserve or are entitled to. In still others, a person is prevented from
doing something even though the prevention is unwarranted. Most broadly,
autonomy norms are characteristically construed in terms of harms or rights.
Someone is harmed when they are hurt or lose property. Someone’s rights are
violated when they are not given certain benefits or allowances.
Autonomy norms are privileged in Western philosophy and Western culture.
Most normative ethical theories in the West (with the possible exception of
virtue ethics) focus on autonomy in one way or another. Even utilitarianism
counts as an autonomy-focused theory, on the technical definition used in
psychology, because it focuses on benefits and harms to persons (as opposed to,
say, status hierarchies or victimless sexual acts that strike members of traditional
societies as impure). Patterns of justification in the West tend to focus on
how a victim has been affected. If asked whether a certain sexual behavior is
wrong, for example, Westerners often try to determine whether it was harmful.
Victimless behaviors such as masturbation are considered morally permissible
in contemporary Western societies because no one is harmed, although, in
earlier times, masturbation (or onanism) was viewed as a crime (Tissot, 1766).
In contrast, bestiality and incest are regarded as wrong by contemporary
Westerners, because they assume that one party does not consent. Westerners
do moralize some acts that have no unwilling victims, such as consensual
incest (Haidt, 2001). But they seem to exhibit discomfort or puzzlement in not
being able to provide a victim-based justification. Members of non-Western
cultures and people with low socioeconomic status are often more comfortable
with saying that there can be victimless crimes (Haidt et al., 1993). This
tendency may also be found to some degree in Westerners who are politically
conservative (Haidt, 2007). Traditional value systems regulate everything from
sex to dress and diet. Those who endorse traditional values may be more
willing than modern Western liberals to condemn behavior that is not harmful
to anyone. Moral theories in modern Western philosophy shift away from
traditional values and try to identify norms that may be more universal. Norms
regulating harm are purported to have this status (although their universality
may be challenged; Prinz, 2008). By focusing on the emotions that arise in
response to violations of autonomy norms, we present those that are most
directly relevant to Western ethical theories. But we don’t mean to imply
that these emotions, anger and guilt, are exclusively Western. They may be
culturally widespread emotions with deep biological roots.

3. Anger
Of all the emotions that look to play a prominent role in moral psychology,
anger is the one most commonly thought to have clear analogues in lower
mammals (e.g. Panksepp, 2000). Indeed, anger has been explored extensively
by biologically oriented emotion theorists. It is included on most lists of ‘‘basic
emotions.’’ Basic emotions, as they are understood in contemporary emotion
research, are evolved emotions that do not contain other emotions as parts.
They are usually associated with distinctive elicitors, physiological responses,
action tendencies, and expressive behaviors. This is certainly true of anger. It is
associated with a distinctive facial expression (furrowed brow, thin lips, raised
eyelids, square mouth) (Ekman, 1971), and it is physiologically differentiated
from other basic emotions (high heart rate and high skin temperature) (Ekman
et al., 1983: 1209). In infants, anger is thought to be triggered by goal
obstructions (Lemerise & Dodge, 2008). Affective neuroscientific work indicates that
anger is associated with activations in the amygdala, the hypothalamus, and
the periaqueductal gray of the midbrain (Panksepp, 1998: 187, 195–196). And
already with Darwin, we get a proposal about the adaptive function of anger:
it serves to motivate retaliation. Darwin writes: ‘‘animals of all kinds, and their
progenitors before them, when attacked or threatened by an enemy, have
exerted their utmost powers in fighting and in defending themselves. Unless
an animal does thus act, or has the intention, or at least the desire, to attack its
enemy, it cannot properly be said to be enraged’’ (1872: 74).
Because anger seems to be present in all mammals, and because weasels
are perhaps not typically regarded as having moral emotions, it is tempting
for the moral psychologist to restrict attention to ‘‘moral anger’’ or ‘‘moral
outrage’’ (cf. Fessler & Haley, 2003; Gibbard, 1990). But we want to begin our
discussion of anger without explicitly restricting it to moral anger.
  125

3.1. Psychological Profile of Anger


In the recent moral psychological literature on anger, the familiar
characterization of the profile of anger is that it’s caused by a judgment of transgression and
it generates an inclination for aggression and retributive punishment. Thus we
find Averill claiming that ‘‘the typical instigation to anger is a value judgment.
More than anything else, anger is an attribution of blame’’ (1983: 1150). Lazarus
(1991) maintains that the ‘‘core relational theme’’ for anger is ‘‘a demeaning
offence against me and mine.’’ Similarly, Ortony and Turner suggest that ‘‘the
appraisal that an agent has done something blameworthy . . . is the principal
appraisal that underlies anger’’ (Ortony & Turner, 1990: 324). Relatedly,
according to the CAD model of Rozin et al. (see Section 2), anger is triggered
by violations of ‘‘autonomy’’ norms.
Much of the evidence for this profile comes from work in social psychology
over the last two decades. The dominant method in this research has been to
ask subjects to recall a past situation in which they were angry, then to elaborate
in various ways on the details of the situation and the experience (e.g. Averill,
1983; Baumeister et al., 1990; Ellsworth & Smith, 1988; Scherer, 1997; Shaver
et al., 1987; Smith & Ellsworth, 1985). In these sorts of surveys, investigators
have consistently found that people tend to give examples of feeling anger in
which they report that (i) the cause was an injustice and (ii) the effect was a
motivation to retaliate.
Drawing on self-reports of anger episodes, Shaver et al. maintain that
the prevailing cognitive antecedent is the ‘‘judgment that the situation is
illegitimate, wrong, unfair, contrary to what ought to be’’ (1987: 1078). This
is present in 95% of the cases (p. 1077). More recently, Scherer reports the
results of a large cross-cultural study in which subjects were asked to recall
episodes in which they felt each of several emotions (sadness, joy, disgust,
fear, shame, anger, and guilt) and also to rate the situation along several
dimensions. Across cultures, subjects tended to assign high ratings of unfairness
to the anger-inducing situation. Indeed, overall, anger-inducing situations
got the highest rating of unfairness out of all the emotions (1997: 911).
Furthermore, when people are asked to recall an event in which they were
treated unjustly, the most common emotion reported is anger/rage/indignation
(Mikula, 1986).
Shaver et al. also found that people tended to report similar kinds of
reactions to anger-inducing situations: ‘‘the angry person reports becoming
stronger . . . in order to fight or rail against the cause of anger. His or her
responses seem designed to rectify injustice—to reassert power or status, to
frighten the offending person into compliance, to restore a desired state of
affairs’’ (1987: 1078). This, of course, nicely fits the standard profile on which
anger generates a motivation for retaliation.
Another line of research in social psychology explicitly tries to induce
anger by presenting subjects with scenarios or movies that seem to depict
injustice. In these cases, too, anger is the dominant emotion that people
feel (or expect to feel) in response to the unjust situation (Mikula, 1986;
Clayton, 1992; Sprecher, 1992). Furthermore, presenting subjects with
apparently unjust scenarios generates behavioral effects associated with anger. For
instance, Keltner and colleagues (1993) had subjects imagine a scenario in
which they were treated unfairly. Then they were presented with another
scenario, this time describing an awkward social situation. Subjects who
were first exposed to the unjust scenario were more likely than control
subjects to blame the situation on other people (p. 745). So it seems that
presentations of unjust scenarios trigger emotions that increase blame attri-
butions. In a related study, Lerner and colleagues showed subjects in one
condition, the anger condition, a video clip of a bully beating up a teenager;
in the other condition, the neutral emotion condition, subjects watched a
video clip of abstract figures (Lerner et al., 1998: 566). After this emotion-
induction condition, subjects were presented with an unrelated scenario in
which a person’s negligence led to an injury. Subjects were asked ‘‘To what
extent should the construction worker . . . be punished for not preventing
your injury, if at all?’’ (p. 567). Lerner and colleagues found that subjects
who had been shown the bully video were more punitive than control
subjects.
The punitive impulses that emerge from injustice-induced anger are
apparently not de dicto desires to improve society. This is beautifully illustrated in
a study by Haidt and Sabini (ms) in which they showed subjects film clips
that depicted injustices, and then subjects were asked to rate different possible
endings. Subjects were not satisfied with endings in which the victim dealt
well with the loss and forgave the transgressor. Rather, subjects were most
satisfied when the perpetrator suffered in a way that paralleled the original
injustice. That is, subjects preferred the ending in which the perpetrator got the
comeuppance he deserved. Perceived injustice thus seems to generate a desire for
retribution.
Some of the most interesting recent evidence on anger comes from
experimental economics. One familiar experimental paradigm in the field
is the ultimatum game, played in pairs (typically anonymously). One subject,
the ‘‘proposer,’’ is given a lump of money to divide with the other subject, the
‘‘responder.’’ The proposer’s offer is communicated to the responder, who also
  127

knows the total amount of money to be divided. The responder then decides
whether to accept or reject the offer. If he rejects the offer, then neither he nor
the proposer gets any of the money. The consistent finding (in most cultures,
but see Henrich et al., 2004) is that low offers (e.g. 20% of the total allocation)
often get rejected. That is, the responder often decides to take nothing rather
than a low offer.
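
To make the structure of the game concrete, here is a minimal sketch of the payoff logic; the anger_threshold parameter is our own illustrative assumption, standing in for whatever level of perceived unfairness provokes a rejection, and is not part of the experimental paradigm itself.

```python
# Illustrative sketch of one-shot ultimatum-game payoffs. The rejection rule is a
# stand-in assumption: the responder rejects any offer below a fixed fraction of
# the total, even though rejection leaves both players with nothing.

def ultimatum_payoffs(total, offer, anger_threshold=0.3):
    """Return (proposer_payoff, responder_payoff) for a given offer."""
    if offer / total < anger_threshold:
        return 0, 0  # offer rejected: neither player gets anything
    return total - offer, offer

print(ultimatum_payoffs(10, 2))  # (0, 0): a 20% offer is rejected
print(ultimatum_payoffs(10, 5))  # (5, 5): an even split is accepted
```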
Over the last several years, evidence has accumulated that suggests that anger
plays an important role in leading to rejections in these sorts of games. In
one of the earliest published findings on the matter, Pillutla and Murnighan
(1996) had subjects play an ultimatum game after which they were invited to
give open-ended responses to two questions: ‘‘How did you react when you
received your offer? How did you feel?’’ (p. 215). Independent coders rated the
responses for anger and perceived unfairness. Reports of anger and perceived
unfairness were significantly correlated. Anger (which was almost always
accompanied by perceived unfairness) was a strong predictor of when a subject
would reject an offer. Unfairness alone also predicted rejections, but more
weakly than anger. Pillutla and Murnighan’s suggestion is straightforward:
when subjects view the offer as unfair, this often leads to anger, and this
response increases the tendency to reject the offer (p. 220).
Using somewhat different economic games, other researchers have also
found that perceived unfairness in offers generates anger, which leads to
retaliatory actions (Bosman & van Winden, 2002: 159; Hopfensitz & Reuben,
forthcoming, ms: 13–14). But there is one finding, using public-goods games,
that deserves special mention. In a public-goods game, anonymous participants
are each allotted a sum of money that they can invest in a common pool.
Their investment will lead to increased overall wealth for the group, but
investing is always a net loss for the individual investor. So, for instance,
for each $1 a person invests, each of the four group members will get
$0.40, so that the investor will get back only 40% of his own investment.
Fehr and Gächter (2002) had subjects play a series of public-goods games
in which it was made clear that no two subjects would be in the same
game more than once. After each game, subjects were given an opportunity
to punish people in their group—for each 1 monetary unit the punisher
pays, 3 monetary units are taken from the punishee. Since the subjects know
that they will never again interact with any agent that they are allowed
to punish, punishing apparently has no future economic benefit for the
punisher. Nonetheless, punishment was in fact frequent in the experiment.
Most people punished at least once, and most punishment was directed at
free-riders (individuals who contributed less than average) (Fehr & Gächter
2002: 137).
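
To make the arithmetic concrete, here is a minimal sketch of the payoff structure under the numbers given in the text (a 0.4 return per unit invested to each of four group members, and a 1:3 cost-to-penalty punishment ratio); the endowment of 20 and the particular punishment decision are illustrative assumptions, not details from the study.

```python
# Sketch of a four-player public-goods round followed by costly punishment.
# The endowment is assumed; the 0.4 per-capita return and 1:3 punishment ratio come from the text.
ENDOWMENT = 20
MPCR = 0.4  # each unit invested pays 0.4 to every group member, investor included

def round_payoffs(contributions):
    """Each player keeps what she did not contribute, plus a share of the pool."""
    pool_return = MPCR * sum(contributions)
    return [ENDOWMENT - c + pool_return for c in contributions]

def punish(payoffs, punisher, target, units):
    """The punisher pays `units`; the target loses 3 * `units`."""
    payoffs = list(payoffs)
    payoffs[punisher] -= units
    payoffs[target] -= 3 * units
    return payoffs

contributions = [16, 14, 18, 2]          # the fourth player free-rides, as in the vignette
payoffs = round_payoffs(contributions)   # the free-rider earns the most before punishment
payoffs = punish(payoffs, punisher=0, target=3, units=3)
print(payoffs)                           # [21, 26, 22, 29]
```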
Why do subjects pay to punish even when they presumably know it is
not in their financial self-interest? Anger, according to Fehr and Gächter. To
support this hypothesis, they asked their subjects how they would feel in the
following situation: ‘‘You decide to invest 16 francs to the project. The second
group member invests 14 and the third 18 francs. Suppose the fourth member
invests 2 francs to the project. You now accidentally meet this member. Please
indicate your feeling towards this person’’ (Fehr & Gächter, 2002: 139). As
predicted, subjects indicated high levels of anger toward this person. Fehr and
Gächter maintain that this finding, together with the pattern of punishment
responses, fits very well with the hypothesis that negative emotions like anger
provide the motivation to punish free-riders. Furthermore, as with Haidt and
Sabini’s findings, it seems that the motivation is for retributive punishment, not
for rehabilitation or other happy endings.
Thus far, we’ve been summarizing evidence that anger is elicited by injustice.
That may not be the only elicitor, however. In the social psychological literature
mentioned initially, the empirical strategy is rather striking: subjects are asked
to give a single example of an episode of anger from their own life (Averill,
1983; Baumeister et al., 1990; Ellsworth & Smith, 1988; Scherer, 1997; Shaver
et al., 1987; Smith & Ellsworth, 1985). It is an important finding that people
tend to volunteer injustice-related episodes. But it would obviously be rash
to draw conclusions about the representativeness of these examples. Perhaps
injustice-induced anger is just especially salient, but not especially common.
One of the most illuminating studies on the matter is also one of the oldest.
Hall (1898) did a mammoth survey study, collecting detailed reports from over
2000 subjects, and he reproduces numerous examples offered by the subjects,
which enables us to see something of the range of what people take to be
typical elicitors of anger. References to injustice abound in Hall’s sample of
subjects’ reports of the causes of their anger: ‘‘Injustice is the worst and its
effects last longest’’; ‘‘injustice to others’’; being ‘‘accused of doing what I did
not do’’; ‘‘injustice’’; ‘‘self gratification at another’s expense, cruelty, being
deceived’’ (pp. 538–539). This fits well with the suggestion that perceived
injustice is a central trigger for anger.
Thus, subjects do mention injustice, both in the recent social psychological
work and in Hall’s nineteenth-century surveys. However, while recent work
tends to ask subjects to recall a single experience of anger, Hall asks subjects
more open-ended questions about their experience with anger and asks them to list
things that cause them to feel anger. As a result, his data have the potential to
reveal a richer range of causes of anger. That potential is abundantly realized.
Most of the sample quotations he gives concerning causes of anger include
situations in which judgments of injustice seem not to be involved. Here are
some examples of typical causes of anger offered by a range of people from
schoolchildren to adults:
‘‘unpleasant manners and looks’’;
‘‘narrow mindedness’’;
‘‘girls talking out loud and distracting me in study hours’’;
‘‘being kept waiting, being hurried, having my skirt trodden on, density in others’’;
‘‘If I am hurrying in the street and others saunter, so that I cannot get by . . . or
when given a seat in church behind a large pillar’’;
‘‘Frivolity in others, asking needless questions, attempting to cajole or boot-lick the
teachers’’;
‘‘An over tidy relative always slicking up my things’’;
‘‘slovenly work, want of system, method and organization’’;
‘‘late risers in my own house, stupidity’’;
‘‘when I see a girl put on airs, strut around, talk big and fine. I scut my feet and want
to hit her, if she is not too big’’ (pp. 538–539)
None of these examples obviously invokes considerations of injustice, and for
some of them, it seems rather implausible that perceived injustice is at the
root of the anger. Perhaps if you squint, you can see all of them as involving
perceived injustice. But we are somewhat skeptical about the methodological
propriety of squinting.
That anger is frequently triggered by something other than perceived
injustice should not be surprising if, as seems likely, the anger system is
evolutionarily ancient. On the contrary, if anger (or a close analogue) reaches
deep into our mammalian ancestry, it would be surprising if it turned out
that the only activator for anger is an appraisal of unfairness. In older phyla,
the homologues of anger may be more typically elicited by physical attacks
by conspecifics or the taking of resources (battery or theft). Similar responses
may also arise in hierarchical species, when an animal that is not dominant
tries to engage in dominant behavior or take a privilege reserved for dominant
individuals. This is the furry equivalent, perhaps, of seeing a girl put on airs,
strut around, talking all big and fine.
The common denominator among these elicitors of anger seems to be a
challenge to the individual’s autonomy (as discussed in Section 2). So it
is perhaps not surprising that, in humans, anger gets triggered by violations
of autonomy norms. When a person aggresses against another, that causes
harm, and violates the victim’s entitlement to be left in peace. Theft is a
harm that violates a victim’s entitlement to possessions. Putting on airs is a
way of acting superior, which harms others by placing them in a position
of subordination or inequitable distribution and insulting them by treating
them as inferior. Being annoying or disruptive, thwarting goals, violating
personal possessions or space, being insulting or offensive—all these things
have a negative impact on a victim, and thus fail to respect individual
rights or autonomy. Injustice is just one special and prevalent case. When
people fail to cooperate or take more than is just for themselves, they treat
others as inferior or less deserving. In our society, where presumptions of
equality are strongly emphasized, that is seen as a harm. This is especially
true in cases of free-riding. If one party incurs a cost for a benefit, and
another takes the benefit without incurring the cost, then the person who
incurs the cost is, in effect, incurring the cost for two. This is a form of
harm, because it takes advantage of the victim. In summary, then, many
of the various elicitors of anger can be regarded as violations of autonomy
norms, with injustice being a paradigm case. This profile is represented in
Figure 4.1.

Figure 4.1. The psychological profile of anger

Typically, anger will be experienced by the victim. In the economic games,
one player gets angry at another and retaliates. It is worth noting, however,
that the anger can also be experienced by third parties who observe the
transgression. Sometimes the victim of an autonomy norm violation is passive,
indifferent, oblivious, or even dead. Third parties who empathize with the
victim or have norms against the offending behavior may incur the cost of
retaliating. This has rarely been observed in non-human animals, but it is
relatively common in humans, as when Western liberals rally for the end of
foreign wars or when PETA activists intervene to protect animals.

3.2. Anger and Moral Motivation

Does anger play an important role in moral motivation? There’s little reason
to think that anger plays a direct internal role in motivating us to save
drowning children, give money to Oxfam, or refrain from stealing, killing, and
raping. But anger serves to underwrite external enforcement of morally valued
behaviors. And the awareness of anger serves to provide further motivation for
prosocial behaviors.
In the previous section, we noted that free-riding in public-goods games
generates anger and a corresponding motivation to punish the free-rider by
paying to deplete his funds. What we didn’t mention is that punishment is
remarkably powerful in shaping behavior in these games. Over a number
of experiments, Fehr and his colleagues have consistently found that when
punishment is an available option in public-goods games, cooperation increases
dramatically (Fehr & Gächter, 2000; Fehr & Fischbacher, 2004). Perhaps the
most impressive illustration of this occurs when subjects first play several
rounds in which punishment is not an option. In one such experiment (Fehr &
Gächter, 2000), ten rounds are played in which punishment is not an option;
by the tenth round the average contribution dropped to very low levels (below
20%). Punishment becomes available for the eleventh round, and the average
contribution skyrockets. By the fourth round in which punishment is available,
average contributions are at 90%! (see Figure 4.2). If Fehr and Gächter are
right that this punishment is driven by anger, then anger is a formidable force
for motivating cooperation.
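
To make the incentive structure of these experiments concrete, the following sketch (in Python) lays out a two-stage game of this general kind. The particular numbers (a 20-token endowment, a 1.6 multiplier on the common pot, and punishment points that cost the punisher 1 token while deducting 3 tokens from the target) are illustrative assumptions, not the exact design or fee schedule that Fehr and Gächter used. The point of the toy example is simply that the free-rider earns the most at the contribution stage but ends up with the least once angry cooperators pay to punish him.

# A minimal sketch of a public-goods game with a punishment stage, in the
# spirit of Fehr & Gächter (2000). The endowment, multiplier, and 1:3
# punishment ratio below are illustrative assumptions, not the exact
# parameters or fee schedule of the original experiments.

def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    """Stage 1: each player keeps what she does not contribute; pooled
    contributions are multiplied and shared equally among all players."""
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

def apply_punishment(payoffs, points, cost_per_point=1, fine_per_point=3):
    """Stage 2: points[i][j] is the number of punishment points player i
    assigns to player j; punishing is costly for the punisher and more
    costly for the target."""
    result = list(payoffs)
    for i, row in enumerate(points):
        for j, pts in enumerate(row):
            result[i] -= cost_per_point * pts   # the angry punisher pays
            result[j] -= fine_per_point * pts   # the target loses more
    return result

if __name__ == "__main__":
    contributions = [20, 20, 20, 0]             # three cooperators, one free-rider
    stage1 = public_goods_payoffs(contributions)
    print("without punishment:", stage1)        # [24.0, 24.0, 24.0, 44.0]

    points = [[0, 0, 0, 3],                     # each cooperator spends 3 points
              [0, 0, 0, 3],                     # on the free-rider
              [0, 0, 0, 3],
              [0, 0, 0, 0]]
    print("with punishment:   ", apply_punishment(stage1, points))  # [21.0, 21.0, 21.0, 17.0]
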
Since cooperation is a morally valued behavior, and since punishment in
public-goods games is plausibly driven by anger, it seems that anger plays
a powerful role to the good. For anger apparently secures cooperation by
motivating punishment. In addition, there’s a second indirect role for anger in
moral motivation. In Fehr and Gächter’s experiments, as soon as subjects are
told that punishment is available, there is already a huge leap in cooperation
(see Figure 4.2). Why is this? Fehr and Gächter suggest that it is because subjects
realize that, if they defect, others will become angry and punish them. While
this is a plausible explanation for the dramatic leap in average contributions
between the tenth and eleventh trials, it isn’t the only possible explanation.
Another possibility is that people are more willing to contribute when they
know that they can punish. We suspect that both factors are at work, and that
anticipation of anger-driven retaliation does contribute to the motivation to
cooperate.
So far we have argued that anger facilitates good behavior in others, because
it motivates people to avoid angry responses. This kind of good behavior can
be explained simply in terms of self-interest. But it’s important to note that
anger can motivate good behavior that can’t be assimilated to a simple self-
interest story. For instance, Fehr & Fischbacher (2004) found that third-party
observers would pay to punish people who defect in a prisoner’s dilemma. The
motivation here seems not to be simple self-interest, but, rather, to rectify an
injustice. In this case, the motivation looks to be more properly moral. Thus
anger apparently serves to motivate moral behavior in multiple ways.

4. Guilt
Guilt is perhaps the quintessential moral emotion. No other emotion is more
directly associated with morality. As we pointed out, it’s plausible that anger
can occur in non-moral contexts. By contrast, guilt almost always plays a moral
role and, arguably, guilt occurs in non-moral contexts only when people take
a moral stance toward something that does not warrant such a stance. If you
feel guilty about breaking your diet, for example, you may be unwittingly
moralizing weight ideals and self-control. Guilt is also closely associated with
the idea of conscience. It is construed as a guide that tells us when an
action is wrong. Guilt may play important roles in moral development, moral
motivation, and moral judgment.

4.1. Psychological Profile of Guilt


Guilt has long been regarded as an important emotion in psychology, especially
in the clinical tradition. Freud thought that neuroses sometimes resulted from
guilt about repressed desires, including the desires associated with the Oedipus
complex. For both Freud and some later psychoanalytic theorists, guilt is
understood as a kind of inner conflict between components of the self, and is
often viewed negatively, as an emotion that does more harm than good. In
  133

recent years, conceptions of guilt have been changing, and it is now widely
regarded as a fundamentally social emotion, which plays a positive prosocial
role. We shall focus on this emerging conception of guilt (for a useful overview,
see Baumeister et al., 1994).
As we noted in Section 2, emotions can be distinguished by their ‘‘core
relational themes’’ (Lazarus, 1991). The core relational theme for guilt seems to
be transgression of moral rules. But this simple formulation cannot distinguish
guilt from shame, or, for that matter, from anger. There are several different
emotions that arise when rules are violated. One obvious difference between
guilt and anger is that we typically feel anger at other people when they violate
rules. By contrast, if I feel guilty, it is typically when I myself have violated
a rule. There may also be cases where I feel angry at myself, but that may
be most common when thinking of the self in the second person (as in the
familiar [to us] self-recrimination, ‘‘You idiot!’’). We suspect such self-directed
anger plays a different role than guilt. People get angry at themselves when
they behave stupidly, but guilt arises when people violate norms. In fact guilt
arises only for a specific class of norms. I don’t feel guilty about jay-walking,
and many people don’t feel any guilt about cheating on their taxes, although
they may feel afraid of getting caught. Guilt is especially likely to arise when
we cause harm to another person, and the likelihood and intensity increase if
that person is close to us in some way. We feel more guilty about harming
members of the in-group than of the out-group, and guilt is most intense when
the victim of our transgression is a loved one. This finding leads Baumeister
et al. (1994) to conclude that guilt is an emotion that arises especially when
there is a threat of separation or exclusion. When you jay-walk or cheat
on your taxes, it is unlikely that you will be socially ostracized, much less
that your primary relationship partners will abandon you. In contrast, when
you harm someone close to you, you potentially undermine the attachment
relation that you have with that person and with other members of your social
group.
As a first pass, then, the core theme for guilt is something like: I have harmed
someone whose well-being is a matter of concern to me. But it is important to
note that the person who experiences guilt is not always actually responsible
for harming anyone. As Baumeister et al. (1994) point out in their review,
people feel guilty when they fare better than other people, even if they are not
responsible for the inequity. The most famous and troubling example of this
is survivor guilt. People who survived the Holocaust, the nuclear attack on
Hiroshima, the AIDS epidemic among gay men, and other great catastrophes
often report feeling guilty about surviving, especially if friends and family
members were killed. Guilt is also experienced by those who keep their jobs
when others are laid off, those who receive greater benefits than someone
who worked just as hard, and even those who are recipients of unrequited
love.
In the face of all these examples, one might be inclined to conclude that guilt
has no special connection to transgression, but is rather an emotion whose core
relational theme is: I am the recipient of a benefit that others whom I care about
did not receive. But this proposal is difficult to reconcile with the fact that we
often feel guilty about causing harm. It would seem that there are two forms
of guilt: one elicited by causing harm and the other elicited by inequitable
benefits. This is inelegant; it doesn’t explain why people use the term ‘‘guilt’’ to
describe these seemingly different cases. We are inclined to think that survivor
guilt and other cases of guilt without transgression are actually over-extensions
of cases involving transgression. People who experience survivor guilt feel
responsible for the misery of others. They sometimes say that they should have
done more to help. It seems that survivors erroneously believe that they have
violated a norm; they think they were obligated to protect their loved ones and
in a position to do so. In cases of guilt stemming from inequity, people often
think, I didn’t deserve this. The idea of desert, again, implies responsibility.
People who feel guilty about, say, earning more than someone else who works
equally hard may believe that they have violated a norm that says we should
share our benefits with anyone who is equally deserving. Even if I am not the
cause of an inequity, I am in a position to do something about it; I can protest
it. Our conjecture is that all these cases hinge crucially on the fact that the
people who experience guilt think that they are responsible for harming others.
The difference between these cases and core cases is that the responsibility
comes from omission rather than commission. If you feel guilty about cheating
on your lover, then, in effect, you feel guilty about causing harm, and if you
feel guilty about earning more than the person in the next cubicle, then you
feel guilty about failing to prevent harm (e.g. failing to protest the unequal
distribution).
In light of the foregoing, we conclude that the core relational theme for
guilt is something like: someone I am concerned about has been harmed and
I have responsibility for that in virtue of what I have done or failed to do.
Guilt also, of course, has characteristic effects. Guilt is an unpleasant emotion,
and, when it is experienced, people try to get rid of it in various ways. The
most common coping strategies are confession, reparation, self-criticism, and
punishment. The last of these was once believed to be especially central to
guilt. Psychoanalysts thought that guilt would lead people to seek punishment
from others, but evidence suggests that this tendency is not especially common
(Baumeister et al., 1994). It is more common for guilty people to try to clear
their conscience by either apologizing or doing something to compensate for
the harm they have caused. We return to this issue below. The story so far is
summarized in Figure 4.3.
Figure 4.3. Psychological profile of guilt
We are now in a position to see some of the ways that guilt differs from
shame. For one thing, they have different core relational themes. Guilt occurs
when we are responsible for causing harm to others. Shame differs from
this in two respects. First, it doesn’t always involve harm: one might feel
ashamed about masturbating, for example. Jones et al. (1995) demonstrated
that people feel guilty when they commit relational transgressions (harming
another person), but not when they commit non-relational transgressions (e.g.
masturbating or smoking marijuana). In non-relational cases, shame is more
likely. Second, shame doesn’t require a sense of control. We feel guilty in
cases where we think we could have prevented some bad outcome, but shame
sometimes arises in cases where we don’t think we could have done otherwise.
One might feel ashamed of oneself for compulsively having nasty thoughts
about other people or for lacking the talent required to live up to others’
expectations. Guilt and shame also differ in their effects. Guilty parties typically
try to make up for what they have done by confessing or making amends.
Shame is more likely to lead to withdrawal. People hang their heads in shame,
and they avoid social contact. In studies of toddlers, psychologists exploit these
differences to operationalize the distinction between guilt and shame. In one
study, Barrett et al. (1993) had 2-year-olds play with a doll that had been rigged
to break. Some of the toddlers showed the broken doll to adults apologetically,
and others simply avoided looking at adults after the mishap. Following the
differences noted above, reparative behavior is interpreted as indicating guilt
and gaze avoidance is interpreted as indicating shame. Shame and guilt also
differ in another way. People usually feel guilty about what they do or fail to do;
but people feel ashamed about what they are or what they fail to be. In other
words, guilt is action-directed, and shame is self-directed. Actions may lead
one to feel ashamed, but in feeling ashamed, one feels like a bad or corrupt
person. One can feel guilty without feeling like a bad person.
The contrast between guilt and shame is reminiscent of the contrast between
anger and disgust. Anger tends to be associated with actions. We think it is
likely that one can be angry at a person for doing something wrong without
feeling as if that person is intrinsically bad. Disgust, in contrast, tends to transfer
from action to person. If someone does something that you find repellent
(a sexually perverse act, for example), you might find that person repellent
thereafter. A person who does something disgusting becomes a disgusting
person, but a person who does something irksome does not necessarily qualify
as irksome thereby. These parallels suggest that guilt and shame might be
first-person analogues of anger and disgust respectively (Prinz, 2007). We saw
in Rozin et al.’s (1999) CAD model that anger occurs when someone violates
an autonomy norm and disgust occurs when someone commits an unnatural
act. These emotions are other-directed. If you yourself violate someone’s
autonomy, that qualifies as a form of harm, and if you harm someone, you are
likely to feel guilty. If you perform an unnatural act, you are more likely to
feel ashamed.
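
The parallel can be stated compactly as a lookup keyed by the kind of norm violated and by whether the violator is another person or oneself. The sketch below (in Python) merely restates the mapping described in the text; it makes no empirical claim of its own.

# A lookup table restating the proposed mapping between kinds of norm
# violation, the identity of the violator, and the typical moral emotion
# (after Rozin et al., 1999, and Prinz, 2007). A summary of the text,
# not an empirical model.

MORAL_EMOTION = {
    ("autonomy", "other"): "anger",    # someone else harms or encroaches
    ("autonomy", "self"):  "guilt",    # I am the one who caused the harm
    ("nature",   "other"): "disgust",  # someone else commits an "unnatural" act
    ("nature",   "self"):  "shame",    # I am the one who committed it
}

def typical_emotion(norm_kind, violator):
    """Return the emotion the text associates with this kind of violation."""
    return MORAL_EMOTION[(norm_kind, violator)]

if __name__ == "__main__":
    print(typical_emotion("autonomy", "other"))  # anger
    print(typical_emotion("nature", "self"))     # shame
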
Both guilt and shame are sometimes regarded as basic emotions (Ekman,
1994). It is not implausible that they both evolved to play roles in regulating
social behavior. But another possibility is that these emotions are learned by-
products of other emotions that are more fundamental. We will not explore
this idea with respect to shame, but consider guilt (both are discussed by Prinz,
2004). On the face of it, guilt has striking similarities to other emotions. In
particular, it has much in common with sadness and, to a lesser extent, fear.
Like sadness, guilt is often associated with feeling downtrodden. Excessive guilt
is a core symptom of clinical depression (American Psychiatric Association,
1994); young children who score high in sadness ratings also score high in guilt
(Eisenberg, 2000); when adults recall events that caused them to feel intense
guilt, they also report feeling sadness (Shin et al., 2000); and when asked to
select facial expressions to associate with guilt vignettes, people tend to select
frowning faces (Prinz, unpublished study). The relation between guilt and fear
may be a little weaker, but fear is correlated with guilt in children (Kochanska
et al., 2002). One explanation for these findings is that guilt is actually a form of
sadness, or sadness mixed with a little anxiety. It might be acquired as follows.
When young children misbehave, parents withdraw love. Love withdrawal
threatens the attachment relationship that children have with their parents, and
losing attachments is a paradigm cause of sadness. It can also cause anxiety,
insofar as attachment relations are a source of security. The threat of losing
love leads children to associate sadness with transgression. Initially, they are sad
  137

about losing their parents’ affection, but, through associative learning, sadness
becomes associated with the transgressions themselves. The anxiety-tinged
sadness about wrongdoing is then labeled ‘‘guilt.’’ This would help explain
why guilt is so linked with transgressions that involve harm. Harming another
person is especially likely to threaten that relationship. If a child is caught
breaking a rule that does not involve harm (e.g. taking off clothing in public),
parents may react differently (e.g. scolding or displaying disgust). This story
would also explain why guilt leads people to make amends. If guilt arises in
contexts of potential loss, those all-important relationships can be repaired by
apology or confession. And it also explains the fact that we feel guilt about
harming people even when the harm was not intended, because inadvertent
harm can also threaten attachment relations.

4.2. Guilt and Moral Motivation


We have already touched on ways in which guilt contributes to moral
motivation. When people feel guilty, they confess, make reparations, criticize
themselves, or, less frequently, seek punishment. Notice that these behaviors
are all attempts to make up for misdeeds. Guilt probably plays an important role
in motivating us to compensate for harming others (Baumeister et al., 1994). Guilt
may play a second and related motivational role as well. Guilt is unpleasant,
and people may resist doing things when they anticipate feeling guilty about
them. In our discussion of anger, we noted that people may obey rules to
avoid the ire of others. People may also obey rules to avoid feeling guilty later.
Indeed, guilt avoidance may play a more powerful role in rule conformity
than anger avoidance. Anger is associated with aggression, and anticipating
aggression causes fear. But fear is thought to be a comparatively weak moral
motivator (Caprara et al., 2001). For example, power assertion is known to
be a less effective tool in moral education than love withdrawal, which causes
guilt. Guilt is more correlated with good behavior than is fear.
Evolutionary game theorists have argued that anticipatory guilt promotes
cooperative behavior by adding an emotional cost to defection (Trivers,
1971; Frank, 1988). In terms of actual material gain, in many economic games,
defection is a dominant strategy: defectors do better than cooperators regardless
of whether their trading partners defect or cooperate. But this fact makes it
rational for both parties to defect, even though mutual defection is worse than
mutual cooperation in many of these games. Trivers (1971) speculates that
evolution has promoted the emergence of guilt because it makes defection less
attractive. People may gain materially from defecting, but guilt makes them
suffer emotionally, and that leads them to cooperate. Frank (1988) notes that
this tendency is so deeply engrained that people even avoid defection in cases
where the other party is not a likely partner in future exchanges. For example,
people leave good tips in roadside restaurants even though they are unlikely to
ever encounter the waitstaff again. Both Trivers and Frank assume that guilt is
the result of biological evolution, and this would support the hypothesis that
guilt is a basic emotion. But it is equally possible that guilt emerged under
cultural pressure as a tool for ensuring that people cooperate. Cross-cultural
variation in economic games suggests that culture may contribute to how
people construe fairness. The Machiguenga of Peru, for example, seem to
accept very inequitable offers as fair, even when North Americans would judge
otherwise (Henrich & Smith, 2004). In their slash-and-burn economy, the
Machiguenga do not depend much on strangers, so cultural pressure has not
led them to morally condemn those who hoard resources rather than dividing
them equitably.
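
The game-theoretic logic of this argument (that guilt adds an emotional cost to defection) can be made concrete with a toy calculation. In the sketch below, the prisoner's dilemma payoffs (temptation 5, reward 3, punishment 1, sucker's payoff 0) and the size of the guilt penalty are illustrative assumptions rather than values taken from Trivers or Frank; the sketch simply shows that once defection carries an emotional cost larger than the temptation premium, cooperation becomes the better reply whatever the partner does.

# A toy illustration of the Trivers/Frank point that an emotional cost
# attached to defection can change what it is rational to do. The payoff
# numbers and the size of the guilt cost are illustrative assumptions.

PAYOFFS = {                          # material payoff to "me" given both moves
    ("defect",    "cooperate"): 5,   # temptation
    ("cooperate", "cooperate"): 3,   # reward for mutual cooperation
    ("defect",    "defect"):    1,   # punishment for mutual defection
    ("cooperate", "defect"):    0,   # sucker's payoff
}

def utility(my_move, partner_move, guilt_cost=0.0):
    """Material payoff minus an emotional penalty for defecting."""
    payoff = PAYOFFS[(my_move, partner_move)]
    return payoff - guilt_cost if my_move == "defect" else payoff

def best_reply(partner_move, guilt_cost=0.0):
    return max(("cooperate", "defect"),
               key=lambda m: utility(m, partner_move, guilt_cost))

if __name__ == "__main__":
    for g in (0.0, 3.0):
        print("guilt cost", g, ":",
              {p: best_reply(p, g) for p in ("cooperate", "defect")})
    # With no guilt cost, defection is the best reply to either move; with a
    # guilt cost larger than the temptation premium (5 - 3), cooperation is.
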
The forms of moral motivation that we have mentioned so far relate to
transgression; guilt motivates us to make up for misdeeds, and it deters us from
behaving badly. What about good behavior? When we introduced the topic
of moral motivation, we suggested that emotions might lead people to behave
prosocially. It’s not obvious that guilt should play a role in prosocial behavior.
Folk psychology would have us believe that we do nice things for people out of
empathy or concern. There is evidence, however, that guilt can promote good
deeds as well. Carlsmith and Gross (1969) demonstrated that guilt can promote
altruistic behavior. They set up a situation similar to the one used by Stanley
Milgram in his famous obedience studies. Subjects were asked to administer
electric shocks to another person, ‘‘the learner,’’ whenever he failed to answer
questions correctly. The learner was introduced as a volunteer for the study,
but he was actually a confederate in the experiment. Afterwards, the subject
was asked to make phone calls on behalf of an environmental organization. In
one condition, subjects are asked to make the calls by the learner; in another,
subjects are asked by a witness who observes the whole affair; and in a third
(control) condition, the subject is never asked to administer electric
shocks. Subjects who are asked by the learner to make the calls are not more
likely to do so than subjects in the control condition. This suggests that people
are not inclined to be more altruistic as a way of making amends to victims
of their transgressions. But subjects who are asked to make the calls by a
witness make dramatically more calls than in the other two conditions. This
suggests that, when people feel guilty, they are more inclined to engage in
altruism when the request comes from a third-party witness. This implies that
while good deeds might not serve as a form of restitution, they can mitigate the
aversive effects of guilt.
  139

The Carlsmith and Gross study shows that guilt can promote prosocial
behavior. Subsequent work has shown that it is most likely to do so when
individuals have no other way of coping with their guilt. In an ingenious
study, Harris et al. (1975) set up a donation stand for the March of Dimes in
a Catholic church. They solicited donations from people who were either just
coming from confession or on their way to confession. Almost 40% of those
who were on their way to confess made a donation, but under 20% of those
who had already confessed made a donation. The likely interpretation is that,
once confessions are made, guilt is reduced, and that leads to the reduction in
altruism. Guilt may promote altruistic behavior by default, but that disposition
drops precipitously if people can cope with their guilt in a less costly way.
Shame may play a role in this study too, but guilt is more likely to be doing
the work: guilt is behaviorally associated with both confession and making
amends, whereas shame tends to promote inward focus, and it is not easily
assuaged by confession.
Another limitation of guilt is that it may not promote prosocial behavior
toward members of an out-group. As noted above, guilt mostly arises when
harm comes to someone we care about. With out-groups, care is mitigated.
We often find strong popular support for government programs that bring
immense suffering to citizens of other nations. Guilt is especially unlikely if we
construe the victim as an enemy or, in some other respect, undeserving of our
concern. Studies have shown that when we harm a member of an out-group,
there is a tendency to justify the transgression by denigrating the victim. In
a chilling Milgram-style study, Katz et al. (1973) had white males fill out a
questionnaire assessing a ‘‘learner’s’’ personality before and after administering
electric shocks to him. The responses to the personality questionnaires did
not change when the learner was white, but, after administering strong shocks
to a black learner, subjects tended to dramatically lower the assessment of his
personality. Subjects would denigrate the black man after treating him cruelly.
One explanation is that the subjects felt more guilt about harming a black man,
given the history of injustice toward that group. To diminish the guilt, they
confabulate a justification for their actions, by implying that the black man
somehow deserved to receive shocks. It is unclear from these findings whether
subjects felt guilty and assuaged those feelings through denigration, or whether
the denigration prevented the onset of guilt. In any case, the intensity and
duration of guilt seem to depend on attitudes toward the victim.
In summary, guilt can play a significant role in promoting prosocial behavior,
although that role may be limited. Those who want to expand the impact of
guilt must find ways to broaden the range of people for whom we feel a sense
of responsibility and concern.

5. Concluding Question: Morality without Anger and Guilt?
In this chapter, we have been looking at ways in which emotion relates to
morality. We began with the observation that emotions typically arise when
people make moral judgments, and went on to observe that the emotions in
question are ones that involve self- or other-directed blame. The actual moral
emotion that arises on any occasion will depend on what kind of transgression
has taken place. We looked at two of these emotions in some detail: anger and
guilt. We suggested that these emotions are particularly important in Western
morality, because they arise in response to autonomy norm violations, and
autonomy norms are central in Western morality. But we do not mean to
imply that autonomy is an exclusively Western concern. Indeed, we want to
conclude by asking whether morality is even possible in the absence of anger
and guilt.
Let us compare anger and guilt to another pair of moral emotions, disgust
and shame. As we have seen, disgust tends to arise when actions are construed
as crimes against nature, and shame tends to arise with unwelcome attention
from others. They are related because crimes against nature bring unwelcome
attention. In this light, it is plausible to think of the disgust/shame dyad as
functioning to shape the social self. By ‘‘social self’’ we mean those aspects of
personal behavior that define oneself as a member of a social group. If there are
behaviors that bring negative attention from the group, we avoid them. We
act in ways that are approved of by our communities. Among these behaviors
are those that are construed as natural. Natural behaviors are characteristically
bodily in nature: dress, diet, and sexual conduct are all paradigm
cases. There may also be norms about how we stand, move, breathe, and
digest (whether we can spit, where we defecate, and what to do when we
belch, for example). Each social group seems to have a set of behaviors that
are considered natural. These are expressed in dietary rules (don’t eat bugs
or horses), sexual mores (don’t have sex with animals), and countless norms
regulating self-presentation (wear shirts in public).
Norms that construct the social self are extremely important for identity and
group membership. They also regulate many of our day-to-day activities and
crucial life choices, such as whom we can marry. So it would be a terrible
oversight to neglect such norms, as has too often been the case in Western
moral philosophy. But norms that are enforced by anger and guilt may be even
more important for morality, in some sense. Notice that norms pertaining to
  141

the social self could be replaced by something like customs and conventions
that are not regarded moralistically. They would be regarded as optional forms
of self-expression and group affiliation rather than norms that must be followed.
There is evidence that this trend has occurred in the West, where, for example,
victimless sexual perversions are not seen as morally wrong, at least by high
SES groups (Haidt et al., 1993). But we can’t so easily replace the norms
associated with anger and guilt. These are autonomy norms, and their violation
leads to harms. If these were treated as optional, harms might proliferate and
social stability would be threatened. Norms pertaining to the social self may
be important for identity, but norms pertaining to harm are important for
preservation of life. They are, therefore, less dispensable.
Now one might argue that we can preserve our autonomy norms while
dropping the concomitant emotions. There are several reasons for thinking this
won’t work. First, it may be that these norms are constituted by the emotions,
as suggested by constitution models (Prinz, 2007), or depend on such emotions
for their characteristic moral profile, as suggested by the sentimental rules
theory (Nichols, 2004). Second, even if this is not the case, the emotions
may play a crucial role in maintaining the norms and acting on them. Recall
that anger promotes retaliation (as when we punish free-riders), and retaliation
leads to norm conformity. Anticipation of guilt leads to norm conformity even
when retaliation won’t arise (as when we can get away with being free-riders).
When we anticipate the wrath of others or our own guilt, this can defeat the
temptation to engage in harmful behavior. If these costs were removed, then
norm conformity might drop off dramatically.
We think this lesson follows from the empirical research. Those who regard
emotions as inessential to morality—or even as disruptive to morality—should
study the roles that these emotions play before recommending their elimination.
In our view, anger and guilt may play roles that are especially important.
Whether or not moral judgments essentially involve emotions, as the authors
of this chapter have argued elsewhere, emotions may be essential for the
preservation and practice of morality. If anger and guilt are not core ingredients
of a moral outlook, they may still be the sine qua non.

References
American Psychiatric Association (1994). Diagnostic and Statistical Manual of Mental
Disorders, 4th ed. Washington, DC: American Psychiatric Association.
Arsenio, W. & Ford, M. (1985). The role of affective information in social-
cognitive development: Children’s differentiation of moral and conventional events.
Merrill–Palmer Quarterly, 31: 1–17.
Averill, J. (1983). Studies on anger and aggression: Implications for theories of emotion,
American Psychologist, 38: 1145–1160.
Ayer, A. (1936). Language, Truth, and Logic. London: Gollancz.
Barrett, K. C., Zahn-Waxler, C., & Cole, P. M. (1993). Avoiders versus amenders:
implications for the investigation of guilt and shame during toddlerhood? Cognition
and Emotion, 7: 481–505.
Batson, D. & Moran, T. (1999). Empathy-induced altruism in a prisoner’s dilemma.
European Journal of Social Psychology, 29: 909–924.
Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. (1994). Guilt: An interpersonal
approach. Psychological Bulletin, 115: 243–267.
Baumeister, R. F., Stillwell, A., & Wotman, S. R. (1990). Victim and perpetrator
accounts of interpersonal conflict: Autobiographical narratives about anger. Journal
of Personality and Social Psychology, 59: 994–1005.
Ben-Ze’ev, A. (2000). The Subtlety of Emotions. Cambridge, MA: MIT Press.
Bosman, R. & van Winden, F. (2002). Emotional hazard in a power to take experiment.
The Economic Journal, 112: 147–169.
Brink, D. (1989). Moral Realism and the Foundation of Ethics. Cambridge, UK: Cambridge
University Press.
Caprara, G., Barbaranelli, C., Pastorelli, C., Cermak, I., & Rosza, S. (2001). Facing
guilt: Role of negative affectivity, need for reparation, and fear of punishment in
leading to prosocial behaviour and aggression. European Journal of Personality, 15:
219–237.
Carlsmith, J. M. & Gross, A. E. (1969). Some effects of guilt on compliance, Journal of
Personality and Social Psychology, 11: 232–239.
Chapman, M., Zahn-Waxler, C., Cooperman, G., & Iannotti, R. (1987). Empathy
and responsibility in the motivation of children’s helping. Developmental Psychology,
23: 140–145.
Clayton, S. D. (1992). The experience of injustice: Some characteristics and correlates.
Social Justice Research, 5: 71–91.
Darwin, C. (1872). The Expression of the Emotions in Man and Animals. London: John
Murray.
Dreier, J. (1990). Internalism and speaker relativism. Ethics, 101: 6–26.
Eisenberg, N. (2000). Emotion, regulation, and moral development. Annual Review of
Psychology, 51: 665–697.
Ekman, P. (1971). Universals and cultural differences in facial expressions of emotion.
Nebraska Symposium on Motivation, J. Cole (ed.). Lincoln, NE: University of Nebraska
Press, 207–283.
Ekman, P. (1994). All emotions are basic. In P. Ekman & R. Davidson (eds.), The Nature of
Emotion. New York: Oxford University Press, 15–19.
Ekman, P., Levenson, R. W., & Friesen, W. V. (1983). Autonomic nervous system
activity distinguishes among emotions. Science, 221 (4616): 1208–1210.
Ellsworth, P. C. & Smith, C. A. (1988). From appraisal to emotion: Differences among
unpleasant feelings. Motivation and Emotion, 12: 271–302.
  143

Fehr, E. & Fischbacher, U. (2004). Third party punishment and social norms. Evolution
and Human Behavior, 25: 63–87.
Fehr, E. & Gächter, S. (2000). Cooperation and punishment in public goods experi-
ments. American Economic Review, 90: 980–994.
Fehr, E. & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415: 137–140.
Fessler, D. & Haley, K. (2003). The strategy of affect. In P. Hammerstein (ed.), Genetic
and Cultural Evolution of Cooperation. Cambridge, MA: MIT Press, 7–36.
Foot, P. (1972). Morality as a system of hypothetical imperatives, The Philosophical
Review, 81: 305–316.
Frank, R. (1988). Passions Within Reason. New York: W. W. Norton.
Frankena, W. (1973). Ethics, 2nd ed. Englewood Cliffs, NJ: Prentice Hall.
Gibbard, A. (1990). Wise Choices, Apt Feelings. Cambridge, MA: Harvard University
Press.
Greene, J. (2008). The secret joke of Kant’s soul. In W. Sinnott-Armstrong (ed.), Moral
Psychology, Vol. 3. Cambridge, MA: MIT Press, 35–80.
Grusec, J. E. & Goodnow, J. J. (1994). Impact of parental discipline methods on the
child’s internalization of values: A reconceptualization of current points of view.
Developmental Psychology, 30: 4–19.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach
to moral judgment. Psychological Review, 108: 814–834.
Haidt, J. (2003). The moral emotions. In R. Davidson et al. (eds.), The Handbook of Affective
Sciences. Oxford: Oxford University Press, 852–870.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316: 998–1002.
Haidt, J. & Sabini, J. (ms). What exactly makes revenge sweet?
Haidt, J., Koller, S., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to
eat your dog? Journal of Personality and Social Psychology, 65: 613–628.
Hall, G. S. (1898). A study of anger. The American Journal of Psychology, 10: 516–591.
Harris, M., Benson, S., & Hall, C. (1975). The effects of confession on altruism. Journal
of Social Psychology, 96: 187–192.
Heekeren, H., Wartenburger, I., Schmidt, H., Schwintowski, H., & Villringer,
A. (2003). An fMRI study of simple ethical decision-making. Neuroreport, 14:
1215–1219.
Henrich, J. & Smith, N. (2004). Comparative experimental evidence from Machiguen-
ga, Mapuche, Huinca, and American populations. In J. Henrich et al., Foundations
of Human Sociality. New York: Oxford University Press, 125–167.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., & Gintis, H. (2004).
Foundations of Human Sociality. New York: Oxford University Press.
Hoffman, M. L. (2000). Empathy and Moral Development. New York: Cambridge
University Press.
Hopfensitz, A., & Reuben, E. (forthcoming). The importance of emotions for
the effectiveness of social punishment. Downloaded 20 September 2005 from
http://www.fee.uva.nl/creed/pdffiles/HR05.pdf.
Isen, A. M. & Levin, P. F. (1972). The effect of feeling good on helping: Cookies and
kindness. Journal of Personality and Social Psychology, 21: 384–388.
Jones, W. H., Kugler, K., & Adams, P. (1995). You always hurt the one you love:
Guilt and transgressions against relational partners. In K. Fisher & J. Tangney (eds.),
Self-conscious Emotions. New York: Guilford, 301–321.
Kant, I. (1996). The Metaphysics of Morals. M. J. Gregor (trans.). Cambridge: Cambridge
University Press.
Kant, I. (1997). Groundwork of the Metaphysics of Morals, ed. Mary Gregor. Cambridge:
Cambridge University Press.
Katz, I., Glass, D., & Cohen, S. (1973). Ambivalence, guilt, and the scapegoating of
minority group victims. Journal of Experimental Social Psychology, 9: 423–436.
Kelly, D., Stich, S., Haley, K., Eng, S., & Fessler, D. (2007). Harm, affect and the
moral/conventional distinction. Mind & Language, 22: 117–131.
Keltner, D., Ellsworth, P. C., & Edwards, K. (1993). Beyond simple pessimism: Effects
of sadness and anger on social perception. Journal of Personality and Social Psychology,
64: 740–752.
Kochanska, G., Gross, J., Lin, M., & Nichols, K. (2002). Guilt in young children:
Development, determinants, and relations with a broader system of standards. Child
Development, 73: 461–482.
Lazarus, R. (1991). Emotion and Adaptation. New York: Oxford University Press.
Lemerise, E. & Dodge, K. (2008). The development of anger and hostile reactions.
In M. Lewis et al. (eds.), Handbook of Emotions, 3rd ed. New York: Guilford
Press, 730–741.
Lerner, J., Goldberg, J., & Tetlock, P. (1998). Sober second thought: The effects
of accountability, anger, and authoritarianism on attributions of responsibility,
Personality and Social Psychology Bulletin, 24: 563–574.
McDowell, J. (1985). Values and secondary qualities. In T. Honderich (ed.), Morality
and Objectivity. London: Routledge & Kegan Paul, 110–129.
Mikula, G. (1986). The experience of injustice: toward a better understanding of its
phenomenology. In Justice in Social Relations, H. W. Bierhoff, R. L. Cohen, &
J. Greenberg (eds.). New York: Plenum, 103–124.
Moll, J., de Oliveira-Souza, R., & Eslinger, P. J. (2003). Morals and the human brain:
A working model. Neuroreport, 14: 299–305.
Moll, J., de Oliveira-Souza, R., Bramati, I., & Grafman, J. (2002). Functional networks
in emotional moral and nonmoral social judgments. NeuroImage, 16: 696–703.
Nichols, S. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. New
York: Oxford University Press.
Nucci, L. (2001). Education in the Moral Domain. Cambridge: Cambridge University
Press.
Ortony, A. & Turner, J. (1990). What’s basic about basic emotions? Psychological Review,
97: 315–331.
Panksepp, J. (1998). Affective Neuroscience. New York: Oxford University Press.
Panksepp, J. (2000). Emotions as natural kinds within the mammalian brain. In M. Lewis
& J. Haviland (eds.) Handbook of Emotions, 2nd ed. New York: Guilford Press,
87–107.
  145

Pillutla, M. & Murnighan, J. (1996). Unfairness, anger and spite. Organizational Behavior
and Human Decision Processes, 68: 208–224.
Preston, S. & de Waal, F. (2002). Empathy: Its ultimate and proximate bases. Behavioral
and Brain Sciences, 25: 1–20.
Prinz, J. (2002). Furnishing the Mind: Concepts and Their Perceptual Basis. Cambridge,
MA: MIT Press.
Prinz, J. (2004). Gut Reactions: A Perceptual Theory of Emotion. New York: Oxford University Press.
Prinz, J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
Prinz, J. (2008). Is morality innate? In W. Sinnott-Armstrong (ed.), Moral Psychology,
Volume 1: The Evolution of Morality: Adaptations and Innateness. Cambridge, MA:
MIT Press, 367–406.
Prinz, J. (forthcoming). Is empathy necessary for morality? In P. Goldie & A. Coplan (eds.),
Empathy: Philosophical and Psychological Perspectives. New York: Oxford University
Press.
Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis. Journal
of Personality and Social Psychology, 76: 574–586.
Sanfey, A., Rilling, J., Aronson, J., Nystrom, L., & Cohen, J. (2003). The neural basis
of economic decision-making in the ultimatum game. Science, 300: 1755–1758.
Scherer, K. R. (1997). The role of culture in emotion-antecedent appraisal. Journal of
Personality and Social Psychology, 73: 902–922.
Schnall, S., Haidt, J., & Clore, G. (2008). Disgust as embodied moral judgment.
Personality and Social Psychology Bulletin, 34: 1096–1109.
Shaver, P., Schwartz, J., Kirson, D., & O’Connor, C. (1987). Emotion knowledge:
Further exploration of a prototype approach. Journal of Personality and Social Psychology,
52: 1061–1086.
Shin, L. M., Dougherty, D., Macklin, M. L., Orr, S. P., Pitman, R. K., & Rauch, S. L.
(2000). Activation of anterior paralimbic structures during guilt-related script-driven
imagery. Biological Psychiatry, 48: 43–50.
Shweder, R., Mahapatra, M., & Miller, J. (1987). Culture and moral development. In
J. Kagan & S. Lamb (eds.), The Emergence of Morality in Young Children. Chicago, IL:
University of Chicago Press, 1–83.
Shweder, R., Much, N., Mahapatra, M., & Park, L. (1999). The ‘‘Big Three’’ of
morality (autonomy, community, divinity) and the ‘‘Big Three’’ explanations of
suffering. In A. Brandt & P. Rozin (eds.), Morality and Health. New York: Routledge,
119–169.
Smetana, J. (1993). Understanding of social rules. In M. Bennett (ed.), The Development
of Social Cognition: The Child as Psychologist. New York: Guilford Press, 111–141.
Smith, C. A. & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion.
Journal of Personality and Social Psychology, 48: 813–838.
Smith, M. (1994). The Moral Problem. Oxford: Blackwell.
Sprecher, S. (1992). How men and women expect to feel and behave in response to
inequity in close relationships. Social Psychology Quarterly, 55: 57–69.
Sripada, C. & Stich, S. (2006). A framework for the psychology of norms. In
P. Carruthers, S. Laurence, & S. Stich (eds.), The Innate Mind: Culture and Cognition.
New York: Oxford University Press.
Stark, R. (1997). The Rise of Christianity. New York: HarperCollins.
Stevenson, C. (1944). Ethics and Language. New Haven, CT: Yale University Press.
Takahashi, H., Yahata, N., Koeda, M., Matsuda, T., Asai, K., & Okubo, Y. (2004).
Brain activation associated with evaluative processes of guilt and embarrassment: An
fMRI study. NeuroImage, 23: 967–974.
Tangney, J. P., Miller, R. S., Flicker, L., & Barlow, D. H. (1996). Are shame, guilt,
and embarrassment distinct emotions? Journal of Personality and Social Psychology, 70:
1256–1269.
Tissot, S. A. D. (1766). A Treatise on the Crime of Onan, Illustrated with a Variety of Cases
Together with the Method of Cure. London: B. Thomas.
Trivers, R. (1971). The evolution of reciprocal altruism, Quarterly Review of Biology,
46: 35–57.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention, Cam-
bridge: Cambridge University Press.
Wheatley, T. & Haidt, J. (2005). Hypnotically induced disgust makes moral judgments
more severe. Psychological Science, 16: 780–784.
Zahn-Waxler, C., Radke-Yarrow, M., Wagner, E., & Chapman, M. (1992). Devel-
opment of concern for others, Developmental Psychology, 28: 126–136.
Zelazo, P., Helwig, C., & Lau, A. (1996). Intention, act, and outcome in behavioral
prediction and moral judgment, Child Development, 67: 2478–2492.
