
UV7092

Dec. 18, 2015

Ethics Beneath the Surface

Ethics is an inextricable, centrally important facet of business. Many approaches to values and ethics in
business focus on deliberative processes—that is, they involve how we reason and think through the ethical
implications of a particular business decision. As a practical matter, there’s good reason for this deliberative
focus: navigating the world of business requires that leaders actively consider values and ethics in their decision-
making processes. By improving our deliberative, cognitive approaches to reasoning about ethics in business,
we improve the likelihood of better business decisions that are consistent with individual, organizational, and
societal values.1

But this is not enough. Decision making of any kind is characterized not only by the cognitive, deliberative
reasoning employed, but also by a number of other influences, such as our emotions,2 our intuition, and other
unconscious processes.3 As a result, decisions are, in part, driven by such influences, rather than resulting from
a comprehensive, deliberative decision-making process. Sometimes we are aware of these influences; most of
the time, however, we are not. These nondeliberative forces can create or contribute to a number of biases or
decision traps that can adversely affect our consideration of ethics and values, resulting in “blind spots” and
impaired decision making.4 Therefore, in striving to improve the ethics of our decisions, it is not sufficient to
just work to improve the way we employ deliberative reason to guide our actions; we should also seek to better
understand common ethical blind spots in order to better avoid them.

The purpose of this note is to raise awareness of such blind spots. A consideration of behavioral ethics—
biases and noncognitive triggers that influence our moral judgment—serves as a complement to a focus on
enhancing our deliberate approaches to ethical decision making. Joshua Greene employs the analogy of a
camera to describe the way our brains function in forming moral judgments; the human brain is compared to
a dual-mode camera, which contains both automatic settings and a manual mode.5 Under certain circumstances—
when problems or dilemmas are familiar, for instance—we tend to default to “automatic” mode.

1 See, for instance, Andrew C. Wicks, Jared D. Harris, and Bidhan L. Parmar, “Moral Theory and Frameworks,” UVA-E-0339 (Charlottesville, VA: Darden Business Publishing, 2003); Andrew C. Wicks, Bidhan L. Parmar, R. Edward Freeman, Jared D. Harris, and Jenny Mead, “An Introduction to Ethics: Framing and Key Themes in Business Ethics,” UVA-E-0340 (Charlottesville, VA: Darden Business Publishing, 2009); and Andrew C. Wicks, R. Edward Freeman, Jared D. Harris, Bidhan L. Parmar, and Jenny Mead, “Introduction to Ethics: The Language of Ethics for Managers,” UVA-E-0405 (Charlottesville, VA: Darden Business Publishing, 2015).
2 Antonio Damasio, Descartes’ Error: Emotion, Reason, and the Human Brain (New York, NY: Putnam, 1994).
3 Leonard Mlodinow, Subliminal: How Your Unconscious Mind Rules Your Behavior (New York, NY: Vintage, 2012).
4 See, for example, Max Bazerman and Ann Tenbrunsel, Blind Spots: Why We Fail to Do What’s Right and What to Do About It (Princeton, NJ: Princeton University Press, 2011); Joshua Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (New York, NY: Penguin, 2013); and John Brockman, Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction (New York, NY: Harper Perennial, 2013), especially chapter 15.
5 Greene, 133. See also Greene chapter 12 (“Beyond Point-and-Shoot Morality: Six Rules for Modern Herders”). This dual-process theory of moral judgment builds upon a broader understanding of our dual-process brains and associated System 1 and System 2 thinking, explained in more detail in Daniel Kahneman, Thinking, Fast and Slow (New York, NY: Farrar, Straus and Giroux, 2011).

This technical note was prepared by Jared D. Harris, Samuel L. Slover Associate Professor of Business Administration; Morela Hernandez, Associate
Professor of Business Administration; and Cristiano Guarana, Senior Research Specialist. Copyright © 2015 by the University of Virginia Darden School
Foundation, Charlottesville, VA. All rights reserved. To order copies, send an e-mail to [email protected]. No part of this publication may be
reproduced, stored in a retrieval system, used in a spreadsheet, or transmitted in any form or by any means—electronic, mechanical, photocopying, recording, or otherwise—without
the permission of the Darden School Foundation.


When faced with less familiar dilemmas or problems, we can switch to “manual” mode, invoking more cognitive flexibility
and deliberative reasoning. Both ways of approaching ethical decision making are important, and we ignore one
approach or the other at our peril.

For leaders, this can be critically important to understand, as contextual factors in the business environment
can amplify the consequences of blind spots. Consider, for example, the deaths and injuries caused by the Ford
Pinto. In the early 1970s, Ford was facing intense competition from Volkswagen. The rivalry between the firms
triggered a mindset at Ford that favored quantifying every cost involved with the manufacture and sale of the
Pinto. When a safety risk involving the placement of the fuel tank in that particular model was identified, the
prevailing mindset caused managers to consider the potential harm in a certain way: as an additional cost. As
such, the managers at Ford quantified the fuel tank risks in terms of financial exposure connected to the lawsuits
that would certainly result from accidents caused by a defective fuel tank. Ford rushed the Pinto to market and ignored the larger safety considerations, to disastrous effect.6

Blind spots can cause managers to bend or break ethical norms without being fully aware of the ethical
issues and consequences. Even more surprising is the fact that blind spots may cause managers to unknowingly
and inadvertently encourage unethical behavior in their teams and organizations. In the following sections, we
explore seven common decision-making biases and describe how they may lead responsible managers to engage
in unethical behavior. In addition, we discuss how these seven cognitive biases may potentially be harnessed to
work in the opposite direction: to improve ethical decision making.

Framing Effect

Consider the following scenario involving an ethically questionable business activity:

You are an entrepreneur interested in acquiring a business that is currently owned by a competitor. The competitor,
however, has not shown any interest in either selling his business or merging with your company. To gain inside knowledge
of his firm, you consider hiring a consultant you know to call contacts in your competitor’s business and ask if the
company is having any serious problems that might threaten its viability. If there are such problems, you might be able
to use the information to either hire away the company’s employees or get the competitor to sell.

As of now, your analysis suggests that you have a 75% chance of losing the acquisition.

How likely are you to hire this consultant?

If you were inclined to hire the consultant, you responded to the hypothetical scenario in the same way as
the majority of the participants in a recent study.7 Despite the morally questionable nature of the described
activity, this approach—akin to so-called opposition research in political campaigns—was viewed as an
acceptable strategic action by most of the study’s participants. In business, such a practice is commonly referred
to as industrial espionage.

Now, consider the same scenario, but instead of reading the last two sentences, you read the following:

As of now, your analysis suggests that you have a 25% chance of gaining the acquisition. How likely are you to hire this
consultant?

6 Dennis A. Gioia, “Pinto Fires and Personal Ethics: A Script Analysis of Missed Opportunities,” Journal of Business Ethics 11, no. 5 (1992): 379–89.
7 Mary Kern and Dolly Chugh, “Bounded Ethicality: The Perils of Loss Framing,” Psychological Science 20, no. 3 (2009): 379.


Participants in this condition were much less likely to describe ethically dubious industrial espionage as an
acceptable strategy than participants in the prior condition. Note, however, that the two situations are
objectively (and mathematically) identical; therefore, there is no rational justification for any difference in
behavior. What’s going on here? The difference arises from the contrasting ways the decision is framed, or
described: managers appear more willing to engage in ethically questionable activities as a way to avoid losses
than to achieve gains. In other words, loss framing “triggered a greater willingness to stretch
ethical boundaries, even in a fictionalized scenario in which participants had nothing to actually gain or lose.”8

But would this same pattern of ethical assessments also show up in ethical actions? It appears so. In another experiment, this one involving actual negotiations, participants in a loss-framing condition (a 75% chance of losing their commission) behaved less honestly than individuals in the gain-framing condition (a 25% chance of gaining their commission): they lied more about the property being negotiated, used more misrepresentation tactics, and made more false promises.

Why are people more likely to behave unethically when presented with what they could lose? Our brains
are wired to go to greater lengths to avoid a loss than to obtain a gain of similar size. This implies that generally honest individuals will behave more unethically to avoid a loss than to secure an equivalent gain.
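This asymmetry is the core of loss aversion in prospect theory. A minimal sketch in Python, using the median parameter estimates reported by Tversky and Kahneman in 1992 (alpha = beta = 0.88, lambda = 2.25; the $100 stake is arbitrary), illustrates how a loss looms roughly twice as large as an equivalent gain:

```python
# Prospect-theory value function: v(x) = x^alpha for gains,
# v(x) = -lam * (-x)^alpha for losses (Tversky & Kahneman, 1992).
def value(x, alpha=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

gain, loss = value(100), value(-100)
print(f"subjective value of a $100 gain: {gain:+.1f}")   # about +57.5
print(f"subjective value of a $100 loss: {loss:+.1f}")   # about -129.5
print(f"loss/gain ratio: {abs(loss) / gain:.2f}")        # 2.25
```

With these parameters, avoiding a $100 loss is subjectively worth more than twice as much as securing a $100 gain, which is consistent with the greater willingness to stretch ethical boundaries under loss framing.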

Taken together, these findings caution managers to consciously consider the unethical consequences that can result from simply framing situations as losses. Framing an unmet goal as a loss, for
instance, may be more likely to trigger unethical behavior in the team. Sears experienced the pervasive effect of
loss framing in goal setting in the 1990s. Managers gave automotive mechanics a specific sales goal of $147 per
hour and constantly monitored their unmet goals. Instead of working faster, employees met their goals by
overcharging for services and repairing things that were not broken, a systematic trend that eventually became
a national scandal.9 Whereas an overemphasis on loss framing in goal setting can encourage unethical behavior,10 managers who frame situations as gains can potentially minimize such behavior.
Highlighting how goals have been met can easily translate into a gain-framing situation and potentially decrease
undesirable behavior in organizational teams.

Overvaluing Outcomes

Individuals tend to punish bad outcomes more harshly than bad intentions. Consider the story of two brothers:
Jon and Matt. A man insults their family. Jon wants to kill the offender. He draws and fires a gun but misses.
In contrast, Matt wants only to scare the offender, but accidentally shoots and kills him. In many countries,
Matt can expect a far more serious penalty than Jon. This is because in many cases, the measurable, observable
outcome is the primary criterion used to evaluate whether something is right or wrong. This tendency is borne out in the results of an experiment in which participants in an economic-allocation game punished others for unintentional negative outcomes.11

Now consider the following scenario:

8 Kern and Chugh.
9 D. Disheau, “Sears Admits Mistakes at Auto Shops; Overhauling its Sales System,” Associated Press Newswire, June 23, 1992; Lynn S. Paine and Michael Santoro, “Sears Auto Centers,” HBS no. 394-009 (Boston, MA: Harvard Business School Publishing, 1993).
10 Adam Barsky, “Understanding the Ethical Cost of Organizational Goal-Setting: A Review and Theory Development,” Journal of Business Ethics 81, no. 1 (2008): 63−81. See also Lisa D. Ordóñez, Maurice E. Schweitzer, Adam D. Galinsky, and Max H. Bazerman, “Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting,” Academy of Management Perspectives 23, no. 1 (2009): 6−16.
11 Fiery Cushman, Anna Dreber, Ying Wang, and Jay Costa, “Accidental Outcomes Guide Punishment in a ‘Trembling Hand’ Game,” PLOS One 4, no. 8 (2009).


A pharmaceutical researcher defines a clear protocol for determining whether or not to include patients as data points in
a study. He is running short on time to collect sufficient data points for his study within an important budgetary cycle in
his firm.

Scenario A continues:

As the deadline approaches, he notices that four subjects were withdrawn from the analysis due to technicalities. He
believes that the data in fact are appropriate to use, and when he adds those data points, the results move from not quite
statistically significant to significant. He adds these data points, and soon the drug goes to market. This drug is later
withdrawn from the market after it kills six patients and injures hundreds of others.

Scenario B continues:

He believes that the product is safe and effective. As the deadline approaches, he notices that if he had four more data
points for how subjects are likely to behave, the analysis would be significant. He makes up these data points, and soon
the drug goes to market. The drug proves profitable and effective, and years later shows no significant side effects.

Which researcher was rated more unethical? If you chose Scenario A, you agreed with most of the
participants in a recent study.12 These participants also thought the researcher in Scenario A should be punished
more harshly. In both scenarios, the researcher was judged on results rather than on the quality of his decisions.
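To see how a handful of added data points can tip a borderline analysis, consider a minimal sketch in Python (the numbers below are constructed purely for illustration and do not come from the study):

```python
import numpy as np
from scipy import stats

# 20 constructed treatment-minus-control differences with a small positive
# mean effect; the one-sample t-test sits just above the p = .05 threshold.
diffs = 0.459 + np.tile([-1.0, 1.0], 10)
t1, p1 = stats.ttest_1samp(diffs, 0.0)
print(f"original sample:  t = {t1:.2f}, p = {p1:.3f}")   # t = 2.00, p ~ .060

# "Re-admit" four borderline subjects whose values happen to match the mean
# effect: the estimate barely moves, but the p-value crosses the threshold.
augmented = np.concatenate([diffs, np.full(4, 0.459)])
t2, p2 = stats.ttest_1samp(augmented, 0.0)
print(f"augmented sample: t = {t2:.2f}, p = {p2:.3f}")   # t = 2.41, p ~ .024
```

The questionable step of adding a few observations is identical in both scenarios; only the downstream outcome differs, which is precisely why judging the researcher by the outcome alone is a trap.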

Given our propensity to overvalue outcomes, managers should avoid rewarding unethical decisions
regardless of their outcomes. Such behavior by managers can incentivize team members to take undesirable
ethical risks. Eventually, these practices can create or encourage a culture in which ethical issues are ignored.
For example, an emphasis on “management by objectives” can create a focus on ends rather than means,
making it more difficult for team members to recognize ethical issues and easier for them to rationalize unethical
behavior.

Instead, managers could require an explicit discussion and presentation of the decision analysis prior to the
decision being made. Similarly, leaders can reward ethical decisions by identifying and measuring behaviors that
preceded the outcome. For example, managers can incentivize in-role behaviors (e.g., sharing information and
fair treatment) and extra-role behaviors (e.g., helping others and altruistic behavior), which can help establish norms for ethical behavior. Simply emphasizing and rewarding or punishing the outcomes of decisions, by contrast, can inadvertently encourage shortcuts and unethical corner-cutting.

Status Quo Tendency

Individuals tend to avoid the discomfort of complex choices and usually opt for the default option. In the
early 1990s, New Jersey and Pennsylvania inadvertently ran a real-life experiment providing evidence of status
quo bias. As part of tort law reform programs, citizens were offered two options for their auto insurance: a
cheap option with restricted rights to sue, and an expensive option with the full right to sue. In New Jersey, the
cheaper option was the default and most citizens selected it. Only a minority chose the cheaper option in
Pennsylvania, however, where the more expensive option was the default. People often gravitate toward the
status quo, regardless of its merits.

12 Francesca Gino, Don A. Moore, and Max H. Bazerman, “No Harm, No Foul: The Outcome Bias in Ethical Judgments,” Harvard Business School Working Paper 08-080, 2008.


Now, imagine you are an aspiring investor. You are a serious reader of the financial pages but until recently,
you have not had sufficient funds to invest—that is, until you inherited a large sum of money from your great-
uncle. Given your inheritance, you are now considering different portfolios. Your choices are to invest in a
moderate-risk company, a high-risk company, Treasury bills, or municipal bonds.

Now imagine a similar scenario, but a significant portion of this portfolio is already invested in a moderate-
risk company. What would you do? The majority of the participants in a series of studies chose the option
designated as the status quo.13

Similar effects have been shown for contributions to retirement plans, choice of Internet privacy policies,
and the decision to become an organ donor. In Europe, for instance, some countries observe high levels of organ donation, whereas others face low levels. The difference is remarkably simple: the default setting of the organ-donation check box on driver’s license applications. In the high-donating countries, citizens read the following default option: Yes, I will donate. In contrast, in the low-donating countries, citizens read: No, I will NOT donate. It is no accident that the default (status quo) option is highly correlated with the observed consent rates (Figure 1).

Figure 1. Status quo bias and organ donation.

[Bar chart of effective consent percentage by country: countries with a donate-by-default option cluster between 85.90% and 99.98%, while countries with a no-donation default range from 4.25% to 27.50%.]
Data source: William Samuelson and Richard Zeckhauser, “Status Quo Bias in Decision Making,” Journal of Risk and
Uncertainty 1 (1988).

When a proposal to alter a certain default parameter (e.g., product, policy) might have undesirable or
unethical consequences, managers should consider making the more widely beneficial choice the status quo.
Thus, rather than relying on individuals to actively choose the more ethical “box,” laying the groundwork to make that choice the default option could improve the ultimate decision.

13 William Samuelson and Richard Zeckhauser, “Status Quo Bias in Decision Making,” Journal of Risk and Uncertainty 1 (1988).


This is a structure-oriented approach to influencing organizational ethics,14 and it has been demonstrated to be highly influential in encouraging or discouraging (“nudging”) certain desired behaviors.15
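The arithmetic behind the power of defaults can be sketched with a toy model: if only a small share of people actively override the default, the default mechanically determines most outcomes. The participation figures below are hypothetical, chosen only to mirror the pattern in Figure 1:

```python
def effective_consent(default_is_yes: bool,
                      override_share: float = 0.15,
                      active_yes_rate: float = 0.40) -> float:
    """Toy model of a default effect: `override_share` of people actively
    choose (saying yes at `active_yes_rate`); the rest keep the default."""
    passive_yes = (1.0 - override_share) if default_is_yes else 0.0
    return passive_yes + override_share * active_yes_rate

print(f"consent-by-default:    {effective_consent(True):.0%}")   # 91%
print(f"nonconsent-by-default: {effective_consent(False):.0%}")  # 6%
```

The underlying preferences are identical in both cases; the default alone moves effective consent from single digits to over 90%, which is the pattern the organ-donation data display.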

Self-Enhancement Bias

Individuals tend to describe themselves as better people than they really are. This positive illusion is also
called the better-than-average effect, and has a pervasive influence on individuals’ judgment and performance.
Research has found, for example, that 70% of high school students rate themselves above average in their
leadership skills; 93% of Americans assess their driving skills as better than the median;16 and 94% of college
professors report doing above-average work.17

Do we also evaluate ourselves as above-average moral individuals? It turns out that yes, we tend to think
we are more moral than the average individual, exemplifying a tendency toward ethical overconfidence.
Students generally report that they are less likely to cheat than the average student.18 This misleading, better-
than-average positive ethical self-evaluation also influences our predictions regarding moral behavior. In one
study, 83% of participants anticipated that they would buy at least one daffodil during an American Cancer
Society fundraiser; nevertheless, only 43% actually did. In another study, participants predicted that they would
give away $2.44 of a $5.00 payment to a charity (e.g., Salvation Army); however, they actually donated only
$1.53.19

This is closely related to what is known as the fundamental attribution error: the tendency to judge others
as less ethical than we are (e.g., my co-worker John padded his expense account because he is dishonest), while
excusing our own ethical lapses as being dictated by the circumstances (e.g., I padded my expense account
because I’m working long hours and my boss underpays me). This is analogous to attributing organizational
successes to our superior leadership abilities while dismissing organizational failures as arising from
circumstances beyond our control.20

This can pose a challenge for ethical leadership, as this decision trap can influence the way we conduct business. For instance, because individuals believe they behave more ethically than average, they may be less inclined to monitor their own ethical standards and behavior, which can increase the likelihood of unethical organizational actions. Managers, therefore, need to be aware of such moral distortions and implement accountability systems to minimize the effects of self-enhancement on ethical misconduct21—though they should take care that such systems do not trigger the loss-framing effects discussed previously.

Managers should also emphasize the importance of ethics in their organization. Since individuals tend to
evaluate themselves as better than average in ethical decision making, managers who send clear signals

14 Ting Zhang, Francesca Gino, and Max Bazerman, “Morality Rebooted: Exploring Simple Fixes to Our Moral Bugs,” Research in Organizational Behavior 34 (2014): 63−79.
15 Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).
16 Ola Svenson, “Are We All Less Risky and More Skillful than Our Fellow Drivers?,” Acta Psychologica 47, no. 2 (1981): 143−48.
17 K. Patricia Cross, “Not Can, But Will College Teaching Be Improved?,” New Directions for Higher Education 17, no. 17 (1977): 1−15.
18 Gregory G. Manley, Craig J. Russell, and M. Ronald Buckley, “Self-Enhancing in Perceptions of Behaving Unethically,” Journal of Education for Business 77, no. 1 (2001): 21−28; Milorad Novicevic, M. Ronald Buckley, Michael G. Harvey, and Helen Fung, “Self-Evaluation Bias of Social Comparisons in Ethical Decision Making: The Impact of Accountability,” Journal of Applied Social Psychology 38, no. 4 (2008): 1061−91.
19 Nicholas Epley and David Dunning, “Feeling ‘Holier Than Thou’: Are Self-Serving Assessments Produced by Error in Self- or Social Prediction?,” Journal of Personality and Social Psychology 79, no. 6 (2000): 861−75.
20 For elaboration on this point, see “Fundamental Attribution Error,” Ethics Unwrapped website, http://ethicsunwrapped.utexas.edu/video/fundamental-attribution-error (accessed Dec. 1, 2015).
21 Manley, Russell, and Buckley.


(e.g., communication and behavior modeling) that the “average” moral behavior in the organization is quite high can raise the comparison point for subordinates.

Egocentric Bias

A decision trap closely related to the self-enhancement effect is that of self-interested egocentrism.
Individuals often conclude that self-interested outcomes are not only desirable but morally justifiable. Like
other biases, this egocentric bias can happen effortlessly, unconsciously, or automatically.22

What is the fair wage if you work for 10 hours? What if another person did the same work? The fair wage
should be the same, right? In a clever study, researchers showed that people become tightfisted when allocating
money to others. Participants reported that they deserved $35.24 when they had worked 10 hours, but thought
their partner deserved only $30.29 for the same work.23 Similarly, participants randomly assigned to the role of
plaintiff or defendant in a court case disagreed in their perceptions of a fair settlement by almost $18,000 in the
self-serving direction.24 Although some might argue that individuals deliberatively decided that they deserved
more than their counterpart, these evaluative processes happened rapidly and unintentionally, making a deliberative explanation less plausible.

Furthermore, individuals are more vulnerable to egocentric bias when they are distracted or otherwise
unmotivated to correct the egocentric behavior. For example, participants asked to make comparative
evaluations of their own versus others’ skills behaved more selfishly when they were also asked to memorize a
string of eight consonant letters. Apparently, these individuals could not allocate sufficient attentional resources
to correct the automatic egocentric default.25 On the other hand, participants who received a financial incentive
for accuracy in their comparative assessments behaved less selfishly. The financial reward may have increased
the participants’ motivation to control their impulses.

As another example of egocentric bias, take Barry Bonds’s record-setting 73rd home run baseball. Alex
Popov caught the ball cleanly after Bonds hit it deep into the right field stands; however, he lost it to Patrick
Hayashi. Popov held the ball first, Hayashi held it last, and each believed, for obviously self-serving reasons, that he was clearly the rightful owner of the valuable object. A judge ultimately disagreed with both, ruling that the ball be auctioned and the proceeds split evenly between the two men.26

Egocentric behavior is so pervasive that the most effective debiasing strategy is to intervene before people
have even developed a perspective to bias their judgments. Managers, therefore, should provide balanced data
and evidence before assigning subordinates to a specific role. Social roles change people’s perspectives, and
therefore their perceptions. Once a subordinate is given a particular perspective on a problem, it is almost
inevitable that this perspective will influence his or her judgments, behavior, and moral reasoning.

22 Nicholas Epley and Eugene M. Caruso, “Egocentric Ethics,” Social Justice Research 17 (2004): 171−87.
23 David M. Messick and K. Sentis, “Fairness, Preference, and Fairness Biases,” in Equity Theory: Psychological and Sociological Perspectives, eds. D. M. Messick and K. S. Cook (New York, NY: Praeger, 1983): 61−94.
24 George Loewenstein, Samuel Issacharoff, Colin Camerer, and Linda Babcock, “Self-Serving Assessments of Fairness and Pretrial Bargaining,” Journal of Legal Studies 22, no. 1 (1993): 135−59.
25 Justin Kruger, “Lake Wobegon Be Gone! The ‘Below-Average Effect’ and the Egocentric Nature of Comparative Ability Judgments,” Journal of Personality and Social Psychology 77, no. 2 (1999): 221−32.
26 Steve Wilstein, “Bonds’ No. 73 Ball Sparks Story of Greed,” June 26, 2003, http://www.myplainview.com/article_d4e8fb6b-591c-5a74-aa09-b61608168cde.html (accessed Dec. 10, 2015).


Escalation of Commitment

In addition to the egocentric bias of generally favoring self-beneficial actions and outcomes, individuals
often also develop specific attachments to programs, activities, and investments they are engaged in simply by
virtue of association. This association is amplified when costs—be they financial or psychological—are incurred
in connection with the underlying activity. When we feel financially, emotionally, or psychologically “invested”
in a particular project or course of action, we often retain an unreasonable commitment to it, remaining engaged long after we should. This is true even when the costs are irrecoverable (commonly described as sunk costs).

As a practical matter, we see this play out in our lives in a variety of different ways. Take for instance a
scenario that involves the decision of whether or not to attend a sporting event:

Two avid sports fans plan to travel 40 miles to see a basketball game. One of them paid for his ticket; the other was on
his way to purchase a ticket when he got one free from a friend. A blizzard is announced for the night of the game. Which
of the two ticket holders is more likely to brave the blizzard to see the game?27

Scholars suggest that the answer is obvious: the fan who paid for his ticket is more likely to drive. But why?
“Mental accounting” provides the explanation. Both fans in this example will presumably be disappointed to
miss the basketball game, but missing the game is perceived as being “distinctly more negative for the one who
bought the ticket and is now out of pocket as well as deprived of the game.”28 This reasoning reflects the sunk-cost fallacy: sunk costs should typically be ignored, yet they can drive an escalation of commitment to an unproductive action or activity. This has clear application to managerial decision making and business behavior:

Imagine a company that has already spent $50 million on a project. The project is now behind schedule and the forecasts
of its ultimate returns are less favorable than at the initial planning stage. An additional investment of $60 million is
required to give the project a chance. An alternative proposal is to invest the same amount in a new project that currently
looks likely to bring higher returns. What will the company do? All too often a company afflicted by sunk costs drives
into the blizzard, throwing good money after bad rather than accept the humiliation of closing the account of a costly
failure.29
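The rational analysis of the $50 million example is straightforward once sunk costs are excluded: the $50 million is gone under either option, so only the incremental $60 million and each project’s expected payoff belong in the comparison. A minimal sketch, with hypothetical payoff figures (the passage does not supply them):

```python
sunk = 50          # $M already spent -- irrecoverable under either option
additional = 60    # $M required next, identical for both options

payoff_troubled = 70   # $M, hypothetical expected return of finishing the old project
payoff_new = 95        # $M, hypothetical expected return of the new project

# Forward-looking comparison: sunk costs are deliberately excluded.
npv_troubled = payoff_troubled - additional   # +10
npv_new = payoff_new - additional             # +35
print(f"finish troubled project: {npv_troubled:+} $M")
print(f"switch to new project:   {npv_new:+} $M")

# Subtracting `sunk` from both calculations lowers both figures by the same
# amount and can never change which option is better.
```

Under these assumptions the new project dominates; the escalation trap lies in letting the already-spent $50 million make “finishing what we started” feel obligatory.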

The decision trap represented by escalation of commitment (and exacerbated by the sunk-cost fallacy) can result in a tremendous waste of organizational resources, and it helps explain the proliferation of unproductive strategies and bad investments. Detrimental escalation of commitment can arise simply from organizational inertia and team commitment, even when well intentioned.

But this decision trap has dire implications for organizational ethics as well. An escalation of commitment
to an ethically questionable policy or action can encourage the perpetuation of that misconduct. Just as a small
lie often necessitates additional lies in order to maintain the initial deception, organizational misconduct often
becomes “normalized” through its enactment, and unethical conduct can snowball and expand under its own
social momentum.30 Once certain ethical compromises have been made, additional—and more egregious—
ethical lapses become more likely. This is due, in part, to the psychological commitment to the ongoing
unethical activity arising simply from one’s past participation in it.

27 Kahneman, 343; adapted from Richard Thaler, “Toward a Positive Theory of Consumer Choice,” Journal of Economic Behavior and Organization 1 (1980): 47−49.
28 Kahneman.
29 Kahneman, 345.
30 Vikas Anand, Blake E. Ashforth, and Mahendra Joshi, “Business as Usual: The Acceptance and Perpetuation of Corruption in Organizations,” Academy of Management Executive 19, no. 4 (2005).


Managers can work to turn this decision trap on its head, however. Organizations can and should strive to
help employees feel “invested” in ethical organizational action. To the extent leaders can successfully do this,
individuals will feel a heightened commitment to the firm’s values and place a premium on ethical actions,
which may work as a defense mechanism against corporate misconduct. In addition, efforts to diminish the
perceived costs associated with abandoning morally questionable behavior can also encourage individuals to
turn away from improper conduct and reengage with the ethics and values of the organization.

Omission Bias

Consider the following scenario:

John, the best tennis player at a club, wound up playing the final match of the club’s tournament against Ivan Lendl
(then ranked first in the world). John knew that Ivan was allergic to cayenne pepper and that the salad dressing in the
club restaurant contained it. When John went to dinner with Ivan the night before the final, he planned to recommend the
house dressing to Ivan, hoping that Ivan would get a bit sick and lose the match.

Now, imagine the same scenario, but Ivan ordered the dressing himself just before John recommended it,
and John, of course, said nothing. Which behavior is worse? About one-third of the participants in this study
said that John’s behavior was worse when he actively recommended the dressing. But does this mean that
keeping quiet when Ivan ordered the dressing himself is ethically defensible?

Omission bias is the tendency to judge harmful acts as worse than omissions that are equally or even more harmful. Most of us have the goal of not hurting people, but in some cases doing nothing is as harmful as doing something. We tend not to think of it that way, however.

Now consider this scenario:

At a manufacturing plant, 5,000 workers will lose their jobs. Providing a tax break to this plant would save 5,000
jobs, but would cause the loss of 1,500 other jobs in a competitor’s plant.

Participants were asked: Should the government provide the tax break? Most participants chose “No,” even though choosing “Yes” would lead to fewer workers losing their jobs overall (1,500 rather than 5,000).31

The problem with these findings is that they suggest that managers systematically overemphasize the effects
of their actions and dismiss the implications of their inactions. In reality, organizational inaction can certainly
be ethically problematic, and our inclination to ignore the effects of our inaction is an ethical blind spot,
especially in light of the status quo tendency previously discussed.

Managers can inadvertently create the conditions for omission bias. For example, whistle-blowing policies that require individuals to proactively report what they see can make inaction more likely when workers observe ethically questionable behavior; it is simply easier to do nothing. Similarly, decision-making processes that do not include mechanisms for easy dissent can decrease the likelihood that subordinates will speak up at all. Therefore, managers should think carefully about the administrative processes associated with both desirable and undesirable behavior, and be careful not to make ethical conduct contingent on proactive, self-motivated actions.

31 Jonathan Baron and Ilana Ritov, “Protected Values and Omission Bias as Deontological Judgments,” in Moral Judgment and Decision Making, Psychology of Learning and Motivation 50, eds. D. M. Bartels, C. W. Bauman, L. J. Skitka, and D. L. Medin (San Diego, CA: Elsevier, 2009), 133–67.


Conclusion

Ethics is an ever-present facet of business activity. The practice of business is ultimately a reflection of how
we think about business—but it is also a function of a number of behavioral influences that drive organizational
decision making in implicit ways. Although we are often comfortable assuming that we are aware of the factors
that influence our decision making, the vast breadth of empirical research findings suggests otherwise. Thus, an
important way to improve managerial decision making is to uncover and minimize these ethical blind spots. By
understanding seven of the most common cognitive traps—framing effect, overvaluing outcomes, status quo
tendency, self-enhancement bias, egocentric bias, escalation of commitment, and omission bias—we can be
forearmed to fight potentially detrimental unconscious processes, and potentially turn these behavioral
influences to our advantage. In so doing, we increase our ability to employ both our automatic settings and
manual modes of decision making.

