ARTICLE INFO

Keywords: Root cause analysis; Heuristic biases; Behavioral finance; Organizational power structure

ABSTRACT

We conduct a large-scope field investigation of 19 major incidents in 19 large European insurance and banking institutions, based on 116 post-event interviews with managers and top executives over a two-year period. We demonstrate the power of the Root Cause Analysis (RCA) method for detecting human biases documented by the behavioral finance, organizational behavior and occupational psychology literatures. These biases constitute key operational risk factors for these organizations. We find that organizational biases (such as a breach of psychological contract) take center stage as root causes of incidents in these organizations. We also find that banks are more exposed to emotional biases (fear and greed) and insurance companies more subject to cognitive conservatism as root cause biases. This research has direct implications regarding how banks and insurance companies may cope with regulations that put a greater emphasis on measuring and controlling operational risk and specifically misconduct risk.
1. Introduction
It is only in the past 10 years that the concept of behavioral risk has started to gain some traction in the financial industry. Many
financial institutions still do not take this type of risk seriously. The dominant view of risk management in banking institutions is
limited to techniques such as hedging, portfolio risk reduction, and credit and market risk management. In insurance companies, the
focus is largely on actuarial risk.
The need to add behavioral risk as a facet of operational risk management became more acute in the wake of scandals rocking the
financial industry in the 1990s and following the excesses of the 2008 financial crisis. As a result, behavioral risk management has
emerged as a field that takes into account decision makers’ psychological biases and organizational risks within financial organizations
(Shefrin, 2016).
Behavioral risk is pervasive in many organizations, not just financial ones. In his seminal work, Shefrin (2016: 1) states: “Virtually
every major risk management catastrophe in the last 15 years has psychological pitfalls at its root. The list of catastrophes includes the
2008 bankruptcy of Lehman Brothers and subsequent global financial crisis, the 2010 explosion at BP’s Macondo well in the Gulf of
Mexico, which many regard the worst environmental event in US history, and the 2011 nuclear meltdown at the Fukushima Daiichi
power plant.” The financial industry has been a fertile ground for the study of heuristic and psychological biases, which is the main
topic of behavioral finance.
Together, the behavioral finance (Shefrin, 2000), organizational behavior (Champoux, 2011) and occupational psychology
(Millward, 2005) literatures have identified many sources of decision-making biases, misconduct, errors and dysfunctions in individual and organizational behaviors.
Initially, the behavioral finance literature focused solely on individual investors’ biases. Recently, some researchers (Aren et al.,
2016) have documented that institutional investors are subject to several behavioral biases as well. This new research stream examines
biases found in financial organizations such as mutual funds, banks and insurance companies.
Banks and insurance companies seldom use diagnostic tools to identify causes of organizational breakdowns. By comparison, the
heavy manufacturing and energy sectors have a long tradition of using such diagnostic tools. One example is the Root Cause Analysis
(RCA) method. Latino et al. (2006: 17) define Root Cause Analysis as “any evidence-driven process that, at a minimum, uncovers
underlying truths about past adverse events, thereby exposing opportunities for making lasting improvements”. RCA is a diagnostic
process that aims at revealing the primary causes of an incident or a major dysfunction within an organization.
Our research question can be summarized as follows: what can be learned about the root causes of incidents in financial organizations? We investigate this question by transposing to financial organizations the use of diagnostic tools that prior to this study were
the exclusive province of the heavy manufacturing and energy/chemical industries. Our research objective is to unearth possible
behavioral risks that are at the source of these incidents.
The novelty of this study is that we are the first to propose the systematic application of the RCA method as a behavioral risk
diagnostic tool in the banking and insurance sectors. We conduct a large scope field study covering 19 major separate incidents that
occurred in 19 multinational financial institutions. We apply the RCA method to pinpoint the material root causes of these incidents.
We extend the RCA methodology to detect behavioral biases. We go a step further than the standard RCA method and identify the
psychological biases that are behind these root causes.1 Essentially, these represent the psychological causes of the incident. We isolate
three main categories of underlying biases: (1) workplace/organizational biases, (2) heuristic and cognitive biases and (3) emotional
biases. We then ascertain which biases contained in these categories drive these root causes.
Research on investors (individual or institutional) behavioral biases has tended to use second-hand information to infer possible
biases.2 Only a few researchers have ventured into direct field investigation. When they do so, these researchers use short questionnaire interactions. They focus on a single stratum of the firm’s management, such as asset managers or CFOs (Suto and Toshino, 2005;
Ben-David et al., 2007; Lütje and Menkhoff, 2007). By contrast, in this article, we use a multi-layered interview process. We collected
our data over a two-year period by conducting in-depth, semi-structured interviews with 116 managers and top executives who
were involved in these incidents, either as actors or as simple witnesses.
While our field investigation of RCA in finance and insurance organizations is meant to serve as a pilot study, we discover some
interesting preliminary patterns. We find that the majority of root causes are human causes. These signal the presence of cognitive and/
or emotional biases behind the occurrence of incidents. The next dominant root causes are associated with organizational processes
and managerial styles. Root causes linked to malicious intent represent a very small percentage of the total number of root causes.
Emotional types of biases (fear, greed, overconfidence) show up more often as roots of incidents for banks than for insurance organizations. By contrast, cognitive conservatism appears to be more prevalent as a root cause in insurance companies vs. banks.
The article is organized as follows. Section 2 inventories the various types of biases identified in the psychology and behavioral finance literatures. Section 3 introduces the RCA methodology. We then apply RCA to our field study of the banking and insurance industries in Section 4. Section 5 lays out our main hypotheses for this pioneering study and highlights the key findings. Section 6 discusses the managerial implications of our approach. Finally, we address the limits of this study and possible extensions in the concluding section.
Organizational incidents occur for various reasons. These may be failures of technical equipment, broken processes and/or individual misconduct. The four main categories of causes for breakdowns listed in Basel II and Solvency II (banking and insurance
regulations) are processes, people, systems and external events.
We postulate here that the source of organizational incidents is psychological and/or organizational biases. Along with Murata and
Nakamura (2014) we argue that these biases lead to human judgment errors, bad decisions, and misconducts.
We propose to classify biases into three broad categories: 1) workplace/organizational biases, based on the industrial and organizational/occupational psychology literature (Champoux, 2011; Millward, 2005; Wagner and Hollenbeck, 2010); 2) heuristic
and cognitive biases, based on the cognitive psychology and behavioral finance literatures (Heath et al., 1994; Shefrin, 2000). Finally,
3) emotional biases, based on the neuroscience and behavioral finance literatures (Shefrin, 2000; Lerner et al., 2015).
Examples of cognitive biases are found in Shefrin (2016). The most frequently encountered biases include excessive optimism,
overconfidence, the confirmation bias, aversion to sure losses, and regret aversion. Excessive optimism involves overweighting the
probabilities of favorable outcomes relative to unfavorable outcomes. Overconfidence typically leads to the underweighting of
1 Murata and Yoshimura (2015) is the closest study to ours. They study industrial accidents (nuclear power plants, aviation, fire, transportation, etc.) that occurred in Japan, and investigate the presence of cognitive biases or irrational behaviors in the processes leading to these accidents. They do not, however, tackle the finance industry as we do here.
2 This data includes portfolio holdings, trading activities and stock market data obtainable from commercial databases or mandatory filings (like form 13F in the US).
Table 1
Glossary and Definitions of Biases.
Type of Bias Definition Literature
1. Organizational Biases
1.a) Psychological contract breach Failure to meet expectations about reciprocity, loyalty and protection Rousseau (1989)
between an employer and employee. Often manifested as negligence and
diluted responsibility.
1.b) Work stress and/or result While some stress may energize people, beyond a certain level it generates Ganster and Rosen (2013)
pressure adverse health reactions to excessive pressures and demands placed at
work.
1.c) Opportunistic behavior Pursuit of self-interest with guile. Including blatant forms of lying, stealing, Williamson (1985)
and cheating, as well as subtle forms of deceit, like incomplete or distorted
disclosure of information.
1.d) Incentives structures and/or Distributive, procedural and interactional justice. Pay for performance/ Cropanzano and Greenberg (1997);
organizational justice perception competency, equity/fairness, rewards and promotions. Durham and Bartol (2000)
1.e) Groupthink Abiding by a decision without critical evaluation of alternative opinions Janis (1982)
and actively suppressing dissenting viewpoints to show loyalty to the
group.
1.f) Power, politics and conflict Power and conflict: us vs. them mentality, turf wars. Politics can be defined Wagner and Hollenbeck (2010); Ferris
as activities in which individuals or groups engage in order to acquire et al. (1989)
power to advance their own interests.
1.g) Managing emotional labor Employees experiencing discordance between felt and required emotions, Bono and Vey (2005)
pretend to feel the required emotion (surface acting), or change their
emotions to match their organization’s displayed rules.
2. Heuristic Biases
2.a) Framing and/or anchoring Framing: drawing different conclusions from the same information, Tversky and Kahneman (1981, 1992)
depending on how that information is presented. Anchoring: the tendency
to rely too heavily on one trait or piece of (not-necessarily-relevant)
information when making decisions.
2.b) Availability bias and /or Halo Availability: The tendency to overestimate the likelihood of events easier to Tversky and Kahneman (1974);
effect retrieve from memory, which can be influenced by how recent the Rosenzweig (2014)
memories are or how unusual or emotionally charged they may be. Halo
effect: causes someone who likes one outstanding characteristic of an object
or person to extend this positive evaluation also on other features of that
object or person.
2.c) Illusion of truth The tendency to believe information to be correct after repeated exposure, Dechêne et al. (2010)
as a person is more likely to believe a familiar statement than an unfamiliar
one.
2.d) Magical thinking and/or Illusion Magical thinking: belief in different superstitions, reliance on lucky Levesque (2011); Tambiah (1990);
of control numbers, and other similar attitudes that are sometimes important aspects Langer (1975)
of decision-making and constructing beliefs. Illusion of control: the
tendency to overestimate one’s degree of influence over other external
events.
2.e) Representativeness bias Judging likelihood on the basis of similarity and resemblance to another Tversky and Kahneman (1974)
situation.
2.f) Gambler’s fallacy and short-series Gambler’s fallacy: an unjustified belief that even in small samples the Shefrin (2000)
problem number of outcomes should be in line with the probability distribution.
Short-series problem: where people underestimate the possibility of
relatively long series of results generated completely at random.
2.g) Regression to the mean problem People tend to expect reversals more often than trend continuations in Shefrin (2010)
forecasted sequences of data.
2.h) Extrapolation bias People try to spot trends in random processes (e.g. in stock prices) and De Bondt (1993, 1998)
expect past price changes to continue.
2.i) Cognitive conservatism It consists in overweighing prior beliefs and failing to correctly update Edwards (1968)
probability estimates when faced with new information.
2.j) Confirmation bias Tendency to search for or interpret information in a way that confirms Wason (1966); Lord et al. (1979)
one’s own preconceived view of the world.
2.k) Ex-post facto rationalization/ Ex-post facto rationalization: people construct ex post a credible Nisbett and Wilson (1977); Fischoff
Hindsight bias justification for the choices they made and/or the outcome they got. (1982a); Hawkins and Hastie (1990)
Hindsight bias: consists in people being erroneously convinced that a
specific unexpected event could have been predicted in retrospect.
2.l) Ambiguity aversion The avoidance of options for which missing information makes the Ellsberg (1961); Fox and Tversky (1995);
probability distribution of outcomes seem "unknown". Rode et al. (1999)
2.m) Endowment/Disposition effects Endowment effect: It consists in people attaching more value to things they Thaler (1980); Knetsch and Sinden
currently have than to identical objects that are not in their possession. (1984); Knetsch (1989); Shefrin and
Disposition effect: The fact that people often demand much more to give up Statman (1985)
an object or an asset than they would be willing to pay to acquire it.
2.n) Herding behavior Tendency of a person to imitate others’ behaviors and actions, either Burke et al. (2010); Choe et al. (1999);
because of lack of autonomous decision-making capability, or in order to Scharfstein and Stein (1990)
chase a trend or momentum. In other instances, reputational concerns and
‘sharing-the-blame’ effect are some of the factors that can drive money
managers to herd.
3. Emotional Biases
3.a) Fear Negative feeling associated with a situation where there is a perceived Lerner and Keltner (2000, 2001)
threat, danger, and a high risk of harm or loss. Fear of failure, rejection, of
being wrong, of taking action, disappointment, etc…
3.b) Greed A selfish and insatiable desire to get more material possessions without Balot (2001)
concern of leaving others in dire straits as a result of actions taken
accordingly.
3.c) Fight or flight behavior The brain’s amygdala registers the existence of a threat and transmits Cannon (1915)
signals to the heart and lungs to increase heart rate, blood pressure, and
breathing. Adrenaline is then produced to prepare either for a
confrontational stance or for running away from a conflict.
3.d) Affect heuristics Basing a decision on an emotional reaction rather than a calculation of risks Slovic et al. (2002)
and benefits.
3.e) Personality disorders Egotism, hubris, narcissism, selective attribution bias (taking credit when Tyrer (2014); Taylor and Doria (1981);
outcome is positive), escalation of commitment (Persisting in a decision Fox et al. (2009)
even though the costs outweigh the gains).
3.f) Overconfidence bias Defined in two distinct ways: (1) overestimation of one’s actual Hoffrage (2004); Moore and Schatz
performance or (2) expressing unwarranted certitude in the veracity of (2017)
one’s beliefs.
3.g) Short-termism Preference for actions aiming at high performance in the near term, which Mullins (1991); Marginson and McAulay
are likely to have detrimental consequences for the long term. (2008)
3.h) Loss aversion and risk-taking behavior Loss brings material pain and mental suffering, and people are willing to take more risk to avoid potential losses than to capture potential gains. Kahneman and Tversky (1979)
3.i) Regret and disappointment Regret is a negative emotion resulting from making a choice whose outcomes proved disadvantageous. People exhibiting regret aversion avoid taking decisive actions because they fear that, in hindsight, they will regret it. Disappointment is experienced when the results of a choice fall short of a decision maker’s expectations. Disappointment may, but does not have to, accompany regret. People will also make decisions based on avoiding disappointment. Bell (1982, 1985); Gul (1991)
3.j) Ego depletion A person repeatedly exercising self-control during straining circumstances involving a suppression of that person’s true feelings, opinions, freedom of choice or core identity leads to impaired performance on subsequent tasks that require sustaining that self-control. Baumeister et al. (1998); Hagger et al. (2010)
3.k) Familiarity and/or home bias People invest in stocks of corporations they are familiar with or that are in close geographical proximity to where they live. These biases are generalizable to CEOs’ decision-making situations, for instance when preserving jobs in their local areas. Huberman (2001); Yonker (2013)
probabilities of rare events and to risk-seeking behavior. The confirmation bias leads to errors based on filtering out information that contradicts a person’s preexisting worldview. Aversion to a sure loss involves risk-seeking behavior in an attempt to avert that loss.
Regret aversion leads people to be unduly timid, out of a concern that if the decision turns out badly, they will experience strong
negative emotions, or adverse consequences, due to their second-guessing the decision.
The list of documented biases is large and is regularly updated. For instance, as of the writing of this article, Wikipedia lists 204
biases that are behavioral, cognitive and social. In 2017, Wikipedia (2017) listed 176 biases, and Dimara et al. (2018) report that 13 of these biases were essentially duplicates. Dimara et al. (2018) end up with a list of 154 biases.
In our study, we cover the most documented cognitive biases. Pohl (2016) defines a cognitive bias as a cognitive phenomenon
which: 1) reliably deviates from reality, 2) occurs systematically, 3) occurs involuntarily, 4) is difficult or impossible to avoid, and 5)
appears rather distinct from the normal course of information processing. We distinguish between cognitive heuristic biases, which are about how the brain processes reality and applies decision rules (e.g. the availability bias), and emotional biases, which are based on what a person feels (e.g. fear or overconfidence).
The Wikipedia list does not include what we call here organizational biases. As pointed out by Goodman et al. (2011), organizational errors or incidents merit research in their own right as an important organizational-level phenomenon. The behaviors these
authors refer to are akin to organizational biases. We define organizational biases here as those patterns of behavior, individual or
group-based, which are the result of organizational dynamics, and which impede the good functioning of the organization. These
biases correspond to psychological reactions often listed in the industrial/occupational and work psychology literature.
We originally selected a list of 43 representative biases covering all of these categories. The selection was made according to the salience and popularity of biases featured in the literature (Costa et al., 2017). For instance, in the work psychology literature, the
issues of psychological contract breach and ego depletion are two of the most studied patterns (Conway and Briner, 2009; Dang, 2017).
In behavioral finance, emotional biases such as overconfidence (Meikle et al., 2016) and heuristic biases such as the representativeness
or confirmation biases are some of the most cited (Malmendier, 2018). We group together biases that often appear as conjoined in the
literature. For instance, the framing and anchoring biases are often paired up. In terms of organizational biases, expectations of loyalty
and reciprocity often appear together (Chang, 2018). We bundle these latter two under the heading Psychological Contract Breach. We
also put stress and “result pressure” in one category, as they often overlap.
Table 1 gives a complete glossary and references for the 32 biases that constitute our final selection.
The heavy manufacturing and chemical industries are often exposed to high risks of severe accidents such as chemical explosions or
toxic leaks. There, it is commonplace to find engineers specialized in industrial risk analysis. These people are trained to use many
different tools such as Process Hazard Analyses (PHA), Hazard and Operability Studies (HAZOP), Workplace Risk Assessment and
Control (WRAC), Root Cause Analysis (RCA), and Failure Modes and Effects Analysis (FMEA). Root Cause Analysis (RCA) is the process
used to systematically detect and analyze the possible causes of a problem in order to determine corrective actions.
Root Cause Analysis is an iterative process. The investigator must assess the impact of each new piece of gathered evidence and
establish its place in the causal chain that explains the incident. There are three types of causes in the data collection or discovery
process.
Presumptive causes are causes that arise early in the discovery process and reflect deeper causes yet-to-be-discovered. Contributing
causes are important enough to be highlighted as needing corrective action in order to fix the processes or product malfunctions under
investigation. However, these may not be the deepest-rooted causes. Finally, root causes are the most basic reasons for the problem,
which if corrected, will prevent recurrence of that problem (Ammerman, 1998).
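To make this taxonomy concrete, the sketch below (ours, not the authors') shows one way the three kinds of causes could be recorded during the discovery process; all class names, field names and the example cause are hypothetical.

```python
# Minimal sketch (assumed, illustrative only) of the cause taxonomy described above.
from dataclasses import dataclass
from enum import Enum

class CauseKind(Enum):
    PRESUMPTIVE = "presumptive"    # surfaces early, points to deeper causes yet to be found
    CONTRIBUTING = "contributing"  # warrants corrective action, but not the deepest cause
    ROOT = "root"                  # most basic reason; fixing it prevents recurrence

@dataclass
class Cause:
    description: str
    kind: CauseKind
    evidence: list[str]            # corroborating documents or testimony

# Hypothetical example entry for an investigation log
loss_cause = Cause(
    description="Unusually large share of risky variable-rate debt held in the fund",
    kind=CauseKind.CONTRIBUTING,
    evidence=["portfolio statements", "interview notes"],
)
print(loss_cause.kind.value)
```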
There could be root causes that involve several business processes. For instance, the problem may arise from a piece of machinery breaking down combined with employee stress or fatigue, without which the severity of the incident might have been lessened.
There exist several analysis tools that help perform RCA (see Exhibit 1, borrowed from Pojasek (2000)).
Table 2
Cases Description and Measure of Severity of Impact of Event (Evaluated cost in relation to the company’s net result. When the assessment is n.e.,
not directly estimable in monetary terms, severity can still be roughly estimated).
CASES Sector Est. Cost in M Euros Est. Company Net Result (M Euros) Est. Severity (Strong / Weak)
One version of RCA is “The Five-Whys” method, which we adopt here in this study. This method has been popularized by business
gurus like Simon Sinek. This tool works by asking people involved in the incident “why?” at least five times. Why did A happen? The person answers B. Then the investigator asks why B happened. The investigator is following a thread in the storyline of the incident.
Once it becomes difficult for the witnesses to provide a further answer, a probable root cause of the problem has likely been identified.
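As an illustration only, the following Python sketch mimics the Five-Whys questioning loop described above; the `ask` callable and the scripted answers are hypothetical stand-ins for the interviewer/interviewee exchange, not the authors' protocol.

```python
# Minimal sketch (ours) of the Five-Whys questioning loop.
from typing import Callable, Optional

def five_whys(initial_event: str, ask: Callable[[str], Optional[str]], max_depth: int = 5) -> list[str]:
    """Follow one thread of the incident: keep asking 'why?' until no new answer emerges."""
    chain = [initial_event]
    current = initial_event
    for _ in range(max_depth):
        answer = ask(f"Why did '{current}' happen?")
        if not answer:          # the witness can no longer go deeper: probable root cause reached
            break
        chain.append(answer)
        current = answer
    return chain                # the last element is the candidate root cause

# Canned answers standing in for interview responses (illustrative only)
scripted = iter(["Clients withdrew funds", "The fund lost value", "High-risk exposure", None])
print(five_whys("Large financial loss on the money market fund", lambda q: next(scripted)))
```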
We implement the RCA methodology in a field investigation of 19 major incidents that occurred in 19 large European financial
institutions (9 banks and 10 insurance companies) over a two-year period. Table 2 below shows the type of incident and the estimated
severity of the incident in each case.
The field investigation consisted of a series of 116 onsite interviews gathered between June 2011 and October 2013 in 19 large
financial institutions. The sample of interviewees was non-random and a mix of judgment and snowball sampling, as the interviewees
were selected based on their interaction with the incident under study, and the leads that managers and interviewees would provide as
to which person to interview next. Out of the total, 33 interviews were semi-structured interviews (multiple face-to-face encounters plus phone conversations) with key persons (risk officers or key operational managers). The remaining 83 interviews involved secondary sources. While these were less formal, they were still in-depth interviews. Table 3 below breaks down the number of in-depth interviews for each case.
Even though the top management at these firms was fully on board with the study, operational managers were sometimes reluctant to volunteer for the interviews. It took up to a year in some cases to convince the staff prior to launching the interview process. When that was the case, a presentation was made to a group of key managers about three months prior to undertaking the actual field study, to explain its purpose and allay, as much as possible, concerns about confidentiality and fears of potential reprisals by top management.
Trust was earned in our field study because the interviewer was someone who had worked as an insider in the industry and had interacted with these companies in the past. Even though the interviewer did not personally know the interviewees, his track record brought legitimacy to the interviewing process. Given his understanding of how these organizations work and their political power structure, the interviewer was able to redirect the conversation, when necessary, and uncover more clues.
The interviewer was also able to read non-verbal cues such as when the interviewee changed voice tone, or avoided eye contact
during critical moments of the interview. The interviewer then shifted the conversation and teased out the reasons for the discomfort
associated with the incident. Most of the in-depth interview conversations were recorded unless the interviewee asked to speak off the
record.
The study happened in three stages. The first stage was the identification of the incident that would become the topic of the
investigation. This selection was done by the firm’s top management.3 The second stage was to interview key operational people
involved with the incident. The third stage was to interview risk managers and other secondary sources who were assigned to the
divisions impacted by the incidents.
Table 4 below breaks down the timing of interviews and displays a list of the people who were interviewed. They are identified by
their corporate title and key summary data regarding the financial or insurance organization they work for.
Root Cause Analysis generally occurs after the incident under investigation. It works by pulling together the facts surrounding the
incident. This process of collecting information might encounter several pitfalls (Miller et al., 1997). First, the information about the
event might be distorted, as it is recollected, and retold in a way that the person appears to have little or no responsibility for the incident, or even recasts him or herself as the hero of the situation (Dodier, 1995). Second, because the researcher knows the final outcome, he/she might be tempted to draw tenuous inferences and oversimplify the incident’s narrative. Lastly, out-of-context information might be ignored because the investigator does not venture to explore extraneous explanations of behaviors beyond the
reconstructed narrative of the incident.
These distortions are difficult to avoid in ex-post field research. Journé (2005) suggests in-situ observation as the events are unfolding. Unfortunately, this type of continuous observation, ‘waiting’ for the bad event to occur, was not feasible in the context of this study, where, for instance, the causes of a financial loss may take months or years to build to a crescendo before the actual loss is triggered.
The interview questions followed the RCA Five-Whys pattern described previously. The interviewee was asked to recount his/her
3 Thus, the sample was not selected following a “double-blind” procedure.
Table 3
Number of interviews for each of the 19 Cases.
Case # 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Total
version of what happened, and then asked “why” they thought what happened did happen. This pattern leads to uncovering deeper
causes. In asking what happened, the interviewer can also infer who should be interviewed next, to corroborate the previously testified
version of the event.
The final results were shared with top management, while respecting anonymity and confidentiality of the respondents and
responses.
4.2. The root cause tree diagram technique and behavioral biases
The technique of RCA in its tree diagrammatic form originates with the work of Cuny and Krawsky (1970).4
The 19 cases covered here were analyzed using the root cause tree diagram technique. A tree diagram is read from right to left and
splits into more branches when moving to the left. When a branch is located to the left of another, the new branch corresponds to a newly identified cause. When a branch ends at a terminal node in the leftward direction, it does so at what is called a root cause. Each branch is then numbered and these numbers serve as reference points in the layers of explanations that build the narrative of the incident.
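The sketch below (ours) shows a minimal data structure matching this description: causes are nodes, deeper causes are children, and terminal nodes are root causes. The branch labels and the miniature tree, loosely inspired by Case #2, are illustrative only and are not the paper's actual coding.

```python
# Minimal sketch (hypothetical names) of a cause tree and root-cause extraction.
from dataclasses import dataclass, field

@dataclass
class CauseNode:
    branch_id: str                                   # number used as a reference point in the narrative
    description: str
    children: list["CauseNode"] = field(default_factory=list)

def root_causes(node: CauseNode) -> list[CauseNode]:
    """Collect all terminal nodes (root causes) reachable from `node`."""
    if not node.children:
        return [node]
    found: list[CauseNode] = []
    for child in node.children:
        found.extend(root_causes(child))
    return found

# Tiny illustrative fragment; branch ids and wording are ours
incident = CauseNode("0", "Financial loss on the fund", [
    CauseNode("3", "High-risk exposure", [CauseNode("17", "Pressure to perform")]),
    CauseNode("4", "Client panic"),
])
print([n.description for n in root_causes(incident)])  # ['Pressure to perform', 'Client panic']
```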
All cases presented here have complete tree structures. A tree is considered complete when it satisfies two criteria: 1) all causes have
been identified. That is, the Five-Whys’ method is exhausted as there is no more information or new lead that is extractable from asking
“why” one more time. 2) Information has a high level of trustworthiness. This is checked using corroborating evidence such as
corporate documents, when made available, as well as other cross-referenced testimonials.
In a second step, both authors independently coded each root cause diagram for each of the 19 cases, in order to identify whether any of the 32 biases selected (Table 1) were present in the branches of these trees. This coding was done ex-post.
Each branch was gauged on its own merit, abstracting as much as possible from the whole situation, and thus preventing the temptation to cast the final incident under the light of a particular behavioral bias. Via this method, we were able to establish a correspondence mapping between causes of various orders and behavioral biases.
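A minimal sketch of what this double-coding step could look like in code is given below; the branch identifiers, bias codes and agreement function are hypothetical and only illustrate the principle of mapping branches to the Table 1 glossary and comparing the two coders' tags.

```python
# Minimal sketch (ours) of the ex-post coding step: each coder tags tree branches with
# biases from the Table 1 glossary; the tags of the two coders can then be compared.
BIAS_GLOSSARY = {"1.a": "Psychological contract breach",
                 "1.b": "Work stress and/or result pressure",
                 "1.d": "Incentives structure / organizational justice"}

coder_A = {"10": ["1.a"], "17": ["1.b", "1.d"], "18": ["1.d"]}   # illustrative tags
coder_B = {"10": ["1.a"], "17": ["1.b"], "18": ["1.d"]}

def agreement(a: dict, b: dict) -> dict:
    """Branch-level intersection of the two coders' bias tags."""
    return {branch: sorted(set(a.get(branch, [])) & set(b.get(branch, [])))
            for branch in set(a) | set(b)}

for branch, biases in sorted(agreement(coder_A, coder_B).items()):
    print(branch, [BIAS_GLOSSARY[code] for code in biases])
```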
Table 5 below summarizes all categories of causes found in all the 19 cases. Overall, we isolate 379 total causes. The behavioral
causes are shown there as internal causes, mainly of types 1 (managerial style), 2 (professional mistakes), 3 (human causes) and 4 (malice). Behavioral internal causes represent more than 63 % of the total number of causes and more than 68 % of all root causes.
Next, we illustrate how we used the root cause tree diagrammatic technique by focusing on a specific example.
In Case # 2, the incident is a substantial financial loss associated with a money market fund. A money market fund is an open-ended
mutual fund that invests in short-term debt securities. Such a fund usually serves as a short-term investment vehicle for corporations looking to earn a small but positive return on cash while they wait to put this cash to a corporate use in the not-so-distant future.
Here, the tree has 18 branches. As in the 18 other cases, colors are used for coding the various levels of branches. Blue branches are first order causes. Black branches are all root cause branches with terminal nodes. Green branches are intermediary order causes. Finally, the red branches can be either intermediary or root causes depending on the presence of a terminal node.
In March 2003, a new division head was recruited to revive and grow GBAS Asset Management, the financial services branch of a
large retail bank. His mission was to expand the customer base and increase assets under management. In order to satisfy this mandate, the new head decided to create a money market fund targeted at institutional investors.
The fund aimed at generating above-normal returns by taking more risk than usual for this type of fund. The fund was invested in variable-rate debt instruments. Very soon after its creation, this new fund was ranked as one of the top ten funds in France. By 2008, however, in the midst of the financial crisis, the fund started losing value because of its high-risk exposure (branches 3 and 6). Instead of liquidating the risky positions early and booking immediate losses (branch 8), the managers and their superiors as well as the sales division decided to wait, expecting the market to turn around.
The bank decided to partially compensate client losses, in order to maintain its good reputation (branch 15 bis). But many clients
started panicking (branch 4), withdrew funds and lost money in the process (branch 2). The credit market in which the fund was
invested collapsed due to the financial crisis (branch 7).
4 This pioneering study was sponsored by the French National Institute of Research and Safety for Work Accidents (INRS in French). The authors focused principally on analyzing causes and consequences to increase workers’ safety. Similar investigative techniques include the Ishikawa or cause-and-effect diagram (Ishikawa, 1976) and the 5M approach (Ayres, 2009). These works closely resemble the mind-map or heuristic-map methodology originated by Tony Buzan (1977).
Table 4
Log of Interviews, Interviewee and Firm Characteristics and Qualitative Interview Type.
Nb Date Age Gender Education Job Position Responsibility level Firm Size Global Reach Language Interview Type
35 7/6/2012 42 M 1 CEO Advisor 3 1 3 2 1
36 7/6/2012 51 M 1 CEO non-life Country 3 1 3 1 1
37 7/9/2012 39 F 1 HR Manager 4 1 3 2 3
38 7/9/2012 46 M 1 General secretary Group 3 1 3 1 3
39 7/9/2012 45 M 1 CEO subsidiary 3 1 3 1 3
40 7/9/2012 49 M 1 Marketing Director Group 3 1 3 2 3
41 7/9/2012 47 M 1 CFO Group 3 1 3 1 3
42 7/9/2012 43 M 1 CFO Country 3 1 3 1 3
43 7/10/2012 38 F 1 HR Manager 3 1 3 1 2
44 7/10/2012 53 M 1 HR Director Country 2 1 3 1 3
45 7/10/2012 46 M 1 COO Group 3 1 3 1 3
46 7/10/2012 35 M 1 CRO P&C Group 3 1 3 1 3
47 7/10/2012 38 F 1 L&D Manager 6 1 3 2 3
48 7/10/2012 39 M 1 CRO Country 4 1 3 2 3
49 7/10/2012 44 M 1 CRO L&S Group 4 1 3 1 3
50 7/16/2012 53 M 1 Marketing Director Country 2 1 3 1 1
51 8/1/2012 58 M 1 Corporate Head Lawyer Country 3 1 3 1 1
52 9/4/2012 47 M 1 Executive Committee Member 2 2 3 1 2
53 9/7/2012 49 M 1 CRO Country 4 1 3 1 2
54 9/13/2012 39 F 1 CRO’s right hand man 4 1 3 1 2
55 9/26/2012 63 M 1 CEO 1 4 3 1 3
56 10/1/2012 45 M 1 CRO non-Life France 4 1 3 1 3
57 10/3/2012 37 M 1 Risk Manager Group 5 1 3 2 3
58 10/3/2012 49 M 1 Head of Operations Group 3 1 3 1 3
59 10/3/2012 42 M 1 CFO Country 3 1 3 1 3
60 10/3/2012 46 M 1 General Secretary Group 3 1 3 1 3
61 10/4/2012 49 M 1 CEO Group 1 1 3 1 2
62 10/5/2012 66 M 1 Audit Expert 5 2 1 1 2
63 10/8/2012 43 M 1 Commercial Director Private Bank 4 2 2 1 2
64 10/9/2012 52 F 1 HR Director Group 3 1 3 1 3
65 10/12/2012 51 F 1 Head of Risk France 5 1 3 1 2
66 10/12/2012 42 M 1 Foundation Director Group 3 1 3 1 1
67 10/19/2012 46 M 1 CRO 3 1 3 1 2
Why was there such a high portion of the fund invested in these risky bonds? The head of internal audit and control had only been in the job for 18 months (branch 16 bis) and did not properly assess the content of the portfolio (branch 12). There were no standard controls or regulatory ratios associated with the use of these instruments (branch 16). These instruments were rated at the highest quality, AAA or AA, prior to the crisis. The fund manager was not very experienced (branch 18 bis) and bought too many of these assets compared to the national average (branch 13).
The division head had put a lot of pressure on the manager to generate good performance (branch 17). The manager’s bonus depended in large part on the fund’s performance (branch 18). Another reason for the loss was that the brokers did not really try to keep their clients in the fund (branch 5). As the head of the sales force stated: “our sales people consider that it is their client who are helping them to put food on the table, and hence they owe their loyalty to the clients first” (branch 10). The brokers also had an incentive to move their clients to other products, due to the commission-based business (branch 11). In the end, the total losses generated by the incident were in the tens of millions of euros.
The RCA method revealed unexpected causes after several rounds of the Five-Whys method were implemented, for instance managerial responsibility (branch 17) or breaches in organizational processes (branches 10 and 16). The analysis also revealed the presence of what we have referred to as external causes in Table 5: for instance, in branches 7 and 9 an external cause is the markets panicking. Internal causes were also revealed, such as the fact that internal audit did not react fast enough (branch 12).
In a second pass of our analysis, we examined all the material causes detected in the above RCA analysis and applied our 32-bias grid to them in order to find possible matches. We found that Psychological Contract Breach was present in the situation corresponding to branch 10, for instance. Work Stress/Result Pressure was present in branch 17. Biases linked to the incentives structure were found in branches 11, 17 and 18. Fig. 1 below shows that 14 biases were documented in 11 branches.
The goal of this section is to illustrate the power of the RCA method when applied to the banking and insurance industries. Our field
study of 19 incidents has produced a total sample of 379 causes (Table 5) that we analyze here.
Table 5
Summary of Causes Categories for the 19 Cases.
Ranking of Causes Internal Causes Subtotal External Causes Subtotal Other Causes TOTAL
Internal causes are those which depend on factors tied to the inside operations of the company or actions undertaken by people owning or employed
in the company (shareholders, managers, employees, hired consultants…).
External causes are exogenous to the firm; i.e. outside of their control.
Other causes are mixed causes or those not classifiable as Internal or External.
Internal causes are of 4 types.
Type 1: causes tied to organizational and managerial systemwide actions (organizational processes or managerial style).
Type 2: causes tied to technicity and arts of doing things (including tactical and strategic choices, professional mistakes, etc…).
Type 3: human causes (Link between the firm and its employee as an individual, including topics such as accountability, fear, greed, interpersonal ties,
conflicts, tiredness, lack of motivation, perception issues, boredom…).
Type 4: causes associated with malice, including theft, fraud and willful negligence, as long as these originate from members of the organization.
External causes are of 3 types.
Type 1: causes tied to actions or decisions made by outside partners (clients, suppliers, media, etc…).
Type 2: exogenous causes over which the firm has no control (markets, demographic trends, natural catastrophes, regulatory changes, etc…).
Type 3: causes associated with malice done by external partners (theft, fraud, piracy, etc…).
First Order Causes are those which are closest to the "crisis" event itself in terms of triggering the event. These are presumptive causes that are
invoked first by interviewees.
Intermediate Order Causes are causes that are neither root causes nor are they first order causes. In other words they are contributing causes deeper
than first order causes.
Root Order Causes are causes for which the corresponding tree branch does not have an antecedent. In that case, the Five-Whys method has revealed no deeper cause.
Firstly, it is important to note that 2 out of the 32 biases did not appear in our collected sample of responses. These are, first, the Gambler’s Fallacy and Short-Series Problem and, second, the Regression to the Mean Problem. These two biases have typically been related to individual investors’ dynamic trading behavior, which does not apply to the incidents in our sample. This left us with 30 biases to analyze.
In order to test the RCA tool, we introduce five general hypotheses related to the patterns we might expect to observe, depending on
the characteristics of the banking and insurance organizations and the business mindset found in these industries.
5.1. Hypotheses
It has been estimated that up to 90 % of all workplace accidents have human error as a cause (Feyer and Williamson, 1998). As seen previously, Shefrin (2016) imputes the causes of most major corporate scandals in the last thirty years or so to human psychology and behavior. In the same vein, Andrew Lo (2016) asserts that: “Human behavior is a factor in virtually every type of corporate malfeasance; hence, it is only prudent to take steps to manage those behaviors most likely to harm the business.” From these observations, our first basic hypothesis states that insurance and banking organizations behave like organizations in other industries in terms of human causes or misconduct.
Hypothesis 1. Human factors (and not technical factors) are at the root of most significant incidents in banking and insurance
organizations.
Additionally, and even though their business models and cultures are different, banks and insurance companies currently possess a pyramidal hierarchical structure, like many other organizations. Hence, while there might be “business-specific” or idiosyncratic operational risks, there are also types of risks that are purely organizational, independent of the business purpose.
Hypothesis 2. Banks and insurance companies are first-and-foremost organizations and thus are primarily impacted by
organizational-type biases as root causes of significant harmful incidents.
Idiosyncratic behavioral risks should be influenced by the business model of insurance companies vs. that of banks (Thimann,
2014). By the nature of their activity, managers in the insurance industry need to follow a conservative approach with capital investments, as a large portion of an insurance firm’s liabilities consists of technical provisions to cover potential claims. By comparison, banks tend to use higher financial leverage than insurance companies. Bankers should thus exhibit a much higher risk appetite than
insurers, especially in investment banking or asset management activities.5 Associated with these risk-taking behaviors, we should
observe the presence of two key emotions typically associated with financial activities: fear and greed. These two emotions should be
important factors behind the causes of incidents there. This leads us to formulate the next two hypotheses:
Hypothesis 3. Emotional types of biases (Fear, Greed, Overconfidence, etc…) will more often be at the root of incidents and be more critical for banks than for insurance organizations.
Hypothesis 4. Insurance organizations are more prone to exhibit conservative (heuristic) biases (such as Availability, Familiarity, Confirmation, Cognitive Conservatism) at the root of incidents, and these will be more critical by comparison with banks.
Our final hypothesis is simply that the severity of the impact of an incident depends on the root cause bias and its ability to spread in
an organization and impact people in key strategic positions.6
Hypothesis 5. Root cause biases will have differentiated impacts according to their nature. Some are more critical for high severity
incidents and others for low severity incidents.
We construct several indicator variables in order to test the above hypotheses. We first construct a measure of the average proximity
of a bias to root causes over the whole sample of tree diagrams.7 We also construct a criticality index.8 The criticality index is
normalized to a maximum value of 10 over the sample of 30 biases (relative scale). A high criticality level is then achieved when the
bias is close to the root causes and/or shows up with a high frequency in the tree branches.
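Read together, footnotes 7 and 8 and the caption of Table 7 suggest the following formulation of the two indicators. The notation is ours and the treatment of trees in which a bias does not appear is left implicit in the paper, so this is only a sketch of our reading:

```latex
% Proximity of bias b: average over the 19 trees of its normalized mean level, where
% \bar{\ell}_{b,t} is the mean level at which bias b appears in tree t and L_t is the
% number of levels in tree t (values closer to 1 mean closer to the root causes).
\[ \mathrm{Prox}(b) = \frac{1}{19}\sum_{t=1}^{19} \frac{\bar{\ell}_{b,t}}{L_t} \]

% Criticality: the per-tree product of the normalized position and f_{b,t}, the number
% of branches of tree t in which bias b appears, averaged over the trees and rescaled
% so that the most critical of the 30 biases scores 10.
\[ \mathrm{Crit}(b) = 10 \times
   \frac{\frac{1}{19}\sum_{t=1}^{19} \frac{\bar{\ell}_{b,t}}{L_t}\, f_{b,t}}
        {\max_{b'} \frac{1}{19}\sum_{t=1}^{19} \frac{\bar{\ell}_{b',t}}{L_t}\, f_{b',t}} \]
```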
Our empirical focus here is on analyzing the frequency of occurrence of major causes and biases.9 We use a simple frequentist
approach to test the majority of our hypotheses. The one exception is Hypothesis 2, for which we use the non-parametric Wilcoxon, Mann and Whitney test (Wilcoxon, 1945; Mann and Whitney, 1947) to derive a Ranksum probability that measures the likelihood that a bias is close to a root cause.
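For readers who want to reproduce this kind of comparison, the snippet below shows the underlying Wilcoxon-Mann-Whitney call in SciPy on made-up numbers; the paper's Ranksum probability is an average derived from such comparisons (see the caption of Table 7), and the exact aggregation is not shown here.

```python
# Minimal sketch (ours) of a rank-sum comparison of bias positions; all numbers are illustrative.
from scipy.stats import mannwhitneyu

focal = [0.9, 0.8, 1.0, 0.7, 0.85]          # normalized positions of one bias across trees
others = [0.4, 0.55, 0.6, 0.3, 0.5, 0.45]   # pooled positions of the remaining biases

# One-sided test: is the focal bias located closer to the root causes (larger values)?
stat, p_value = mannwhitneyu(focal, others, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```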
Examining Table 5, we find that 55 % of root causes are linked to human factors (not involving managerial processes or technicity).
This confirms our Hypothesis 1. As previously mentioned, about 68 % of all root causes in our sample are linked to behavioral and
organizational biases, which ultimately are human factors.
A basic (first pass) test of Hypothesis 2 is reported in Table 6. Table 6 below shows preliminary results based on the frequency of
presence of a bias in the 19 RCA trees. The top five biases are shown for the whole sample and for the banking and insurance subsamples. The number one bias in the complete sample is Psychological Contract Breach. However, when we split the sample by sector, we find that Stress/Result Pressure comes in first for banks and the Availability/Halo bias is the top one for insurance companies. These results, however, do not pertain to root causes.
A direct test of Hypothesis 2 is reported in Table 7 below. Table 7 shows the proximity measure as well as the Ranksum probability
of closeness to root causes. The Ranksum probability is derived from the non-parametric Wilcoxon, Mann and Whitney test (Wilcoxon,
5 The French Institute of Actuaries (Institut des Actuaires, 2014) states that “risk appetite corresponds to how much risk an organization is willing to take on within the framework of its strategy.”
6 However, it is not yet clear at this stage what the theoretical relationship is between general categories of biases and the severity of impact of the incident.
7 For each tree, we determine whether a branch is a first order cause, an intermediary cause or a root cause. We use the same color codes as seen in the Case 2 example above to distinguish between these various types of causes. Among the 19 tree diagrams, we report a maximum of three intermediate (deeper) causes per tree. This implies that there is a maximum of five distinct levels of causes in any given tree in our sample. For each bias, we calculate the mean position along branches of a tree by averaging all the levels in which the bias is found. We then compute a ‘normalized’ mean position as the ratio of the mean level divided by the total number of levels in a tree. The proximity measure is then calculated as the average of this normalized mean position over the 19 cases. The closer the final number is to 1, the closer a given bias is to root causes.
8 It is defined, for each bias and each tree, as the product: proximity of the given bias to root causes x frequency of branch occurrences of that bias in the given tree. In criticality analysis, the criticality of a failure mode is generally calculated by multiplying the probability by a score of the gravity of the factor considered (Haimes, 2009).
9 Future research will analyze the relationship between causes, biases and other firm- and market-specific data.
Table 6
Top 5 Biases by Frequency of Presence in the 19 RCA Trees and per Industry (Frequency is shown next to each bias and is measured as the presence of at least one instance of the bias in the RCA tree for each of the 19 cases).
Ranking Whole sample Banks Insurance
Psychological contract
Trust, power, politics and conflict
Opportunistic behavior
Work stress and/or result pressure
Availability bias/halo effect
Fear
Cognitive conservatism
Framing and anchoring
Greed
Incentives structure and/or distributive and procedural justice
1945; Mann and Whitney, 1947). It represents the probability that the normalized mean position of a given bias is closer to the roots
than all the other biases. We run this test for all categories of biases put together and then within the bias’ specific category (i.e.
Organizational biases, Heuristic and Cognitive biases and lastly Emotional biases).
Whereas the mean proximity to roots is a good measure of how crucial a given bias is, the Ranksum probability is arguably a better measure as it takes into account the shape of the distribution of biases over the 19 cases and not just the first moment of the distribution. In Table 7, we find that the two measures do not give drastically different results. For instance, for the whole sample, Psychological Contract Breach comes in first and Power, Politics and Conflict comes in second position.
Table 7 also reports on the criticality index for each bias. Examining the criticality scale, we observe that the ranking changes
slightly as compared to the above Ranksum analysis. Power, Politics and Conflict now comes in first position, Opportunistic Behavior
in second and Work Stress/Result Pressure in third, followed closely by Psychological Contract Breach in fourth position.
Overall, the top four biases closest to root causes belong to the category of organizational biases, which confirms our Hypothesis
2.10
Regarding Hypothesis 3, a preliminary analysis in Table 6 shows that an emotional bias usually associated with financial activities
like Fear comes in third position for banks. Greed comes in fourth position for banks and only in fifth position for insurance companies.
In Fig. 2, we show the top 10 biases by proximity to root causes in the banking vs. insurance subsamples. On the one hand, we find a
big difference for several biases, when we split the sample as compared to when we use the whole sample. Here, Stress/Result Pressure
and the Fear and Greed biases are much closer to root causes for the banking industry than for the insurance subsample. On the other
hand, the Availability/Halo and Personality Disorders as well as the Confirmation biases are closer to root causes in the insurance
industry vs. banks.
Fig. 3 shows the top 10 biases by criticality, again according to industries. The results are similar to those of Fig. 2. Here, Cognitive
Conservatism is also more critical for the insurance subsample than for banks. These findings tend to confirm Hypotheses 3 and 4.11
Finally, we examine the relationship between the proximity of biases to roots (and their criticality as well) and the severity of the impact (Hypothesis 5). The severity of the impact for each of the 19 incidents was previously summarized in Table 2. In Fig. 4 below, we report the top biases in terms of proximity to root causes and document whether these biases are close to roots when the impact is severe vs. when it is not. We find that Psychological Contract Breach ranks closest to root causes when the severity of impact is high. Still close to roots for high severity cases, we find Trust, Power, Politics and Conflict, as well as Stress/Result Pressure, the Availability/Halo bias and Opportunistic Behavior. A striking observation is that Greed and Incentive Structure/Distributive Justice are really close to root causes for high severity incidents but have no bearing on low severity incidents. We observe that the reverse is true for Fear.
When examining Fig. 5 below, in terms of criticality, the results are quite similar to those above, except that Stress/Result Pressure now has a larger criticality for low severity incidents. A few other biases, such as the Overconfidence bias, show up as being more critical in
10 Additionally, and beyond Hypothesis 2, when examining the results for each category of biases, we find that the Ranksum probability is highest for Psychological Contract Breach in the Organizational biases category, the Availability/Halo bias is first in the category of Heuristic and Cognitive biases, and finally Fear comes in first position in the category of Emotional biases.
11 In addition to and beyond Hypothesis 3, we observe that the type of organizational bias also varies by industry. Power and Politics has maximal criticality for the insurance subsample, and Psychological Contract Breach is also more critical for insurance companies than for banks. By comparison, Stress/Result Pressure as well as Opportunistic Behavior are much more critical for banks.
Table 7
Proximity of Biases to Root Causes, Criticality and Wilcoxon Test for 19 Cases (Ranksum probability is the probability tested using the Wilcoxon, Mann and Whitney test. Here we report the average sample probability that the distribution of a specific bias is closer to the root causes than for all other biases, or biases corresponding to a specific category. The criticality index is equal to the sample average of (proximity of bias to root x frequency of occurrences of bias in root causes tree), which is then scaled up from 0 to 10, where 10 represents the maximum criticality obtained over the 30 biases. Cat 1 = Organizational Biases; Cat 2 = Heuristic Biases; Cat 3 = Emotional Biases.)
Bias Proximity: 0 to 1 Scale Criticality: 0 to 10 Scale Ranksum Probability vs. Other Biases Ranksum Probability Cat 1 Ranksum Probability Cat 2 Ranksum Probability Cat 3
high severity incidents. Framing and Anchoring and the Confirmation biases have a large criticality for low severity incidents.12
While our findings confirm Hypothesis 5, we recognize that there is currently no theory that we can rely on to guide us regarding the relationship we should expect to see here. Moreover, no clear empirical pattern is emerging that could shed light on the possible relationship between biases and the high severity vs. low severity impact of incidents.
In the preface of his book, Shefrin (2016) defines behavioral risk management as the “application of ideas from industrial and
organizational psychology to analyze workplace risk”. Shefrin (2016) offers a synthesis of these ideas as applied to the financial sector.
His approach is grounded in the behavioral finance literature, and he surveys a list of well-known biases that were prevalent during corporate organizational breakdowns and widely publicized scandals in the financial sector.
Shefrin (2016) openly recognizes that his book does not offer a systematic approach to diagnosing biases. Most of the recommendations found there are inferred from anecdotal stories and are not easily generalizable. By contrast, Lo (2016) draws from traditional asset risk management protocols and argues that similar processes can be implemented for managing behavioral risk. Lo (1999) considers the process summarized by the acronym SIMON (Select, Identify, Measure, Optimize, Notice). In that process, the “Select” step is about choosing what optimization objective to pursue and the “Identify” step is about listing the various risks.
From a pure risk management standpoint, Lo’s (2016) insight appears to be a special case of a risk management plan broadly
utilized in other industries. Leck (2014) illustrates this general approach with a 6-step plan: (1) Identifying the hazards/potential risks,
(2) Assessing the risks, (3) Determining risk control measures, (4) Making risk control decisions, (5) Implementing risk controls, and
(6) Supervising and revising processes if needed. By comparison, the Basel Committee regulators (Basel Committee on Banking
Supervision, 2006) define (operational) risk management as a 4-step process to (1) identify, (2) evaluate, (3) monitor and (4) control or
mitigate all material risks and assess the bank's overall capital adequacy in relation to its risk profile. While the number of steps varies,
the logic remains the same in both cases.
Fig. 2. Top 10 biases by Proximity to Root Causes – Banking vs. Insurance (Scale 0 to 1).
In the context of this research, we extend the “Identify” step of these risk-management processes to include behavioral risks as
Shefrin (2016) and Lo (2016) recommend. It is crucial to detect the underlying causes and human motivations that are the sources of
these event-risks, and which organizations can correct. In our pilot study, we have demonstrated the power of the RCA method to
achieve the goal of detecting biases.
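As a purely illustrative sketch (the class, function and example names below are our own assumptions, not the authors' tooling or any regulatory artifact), behavioral root causes surfaced by an RCA exercise could be attached to the "identify" step of such a cycle, so that each identified risk item carries the biases detected for it and can then be evaluated and monitored like any other operational risk entry.

```python
# Hedged sketch: feeding RCA output into the "identify" step of a generic
# identify -> evaluate -> monitor -> mitigate operational risk cycle.
# Names and the sample tree are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IdentifiedRisk:
    description: str
    basel_category: str                      # e.g. an Exhibit 2 category
    behavioral_root_causes: List[str] = field(default_factory=list)

def identify(description: str, category: str,
             rca_tree: Dict[str, List[str]]) -> IdentifiedRisk:
    """Keep the leaves of the RCA cause tree (causes that nothing further
    explains) as the behavioral root causes of the identified risk."""
    leaves = [cause for cause, deeper in rca_tree.items() if not deeper]
    return IdentifiedRisk(description, category, leaves)

# Toy example: an improper-trading incident whose deepest causes are biases
# from the paper's taxonomy (Stress/Result Pressure, Greed).
tree = {
    "improper trading activity": ["result pressure", "greed"],
    "result pressure": [],
    "greed": [],
}
risk = identify("improper trading on the bank's account",
                "Clients, Products and Business Practice", tree)
print(risk.behavioral_root_causes)           # ['result pressure', 'greed']
```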
Fig. 4. Top 10 Biases according to Proximity to Root Causes (Scale 0 to 1) and by Severity of Impact.
Fig. 5. Top 10 Biases according to Criticality (Scale 0 to 10) and by Severity of Impact.
Essentially, our analysis is highly relevant to operational risk management in banking and insurance institutions, as this type of
risk is nowadays closely monitored by regulators (Franzetti, 2011). The international regulations Basel II and III for banks, and
Solvency II for insurance companies, define operational risk as “The risk of direct or indirect loss resulting from inadequate or
failed internal processes, people and systems or from external events” (Basel Committee on Banking Supervision, 2006). This definition
excludes business, strategic and reputational risks, but includes legal risk. The main risk-events are classified into seven categories in
Basel II (Franzetti, 2011).
Exhibit 2 (continued)
Basel operational risk categories and examples of operational risk events:
Clients, Products and Business Practice: fiduciary breaches, misuse of confidential customer information, breach of privacy, improper
trading activities on the bank's account, money laundering and sales of unauthorized products.
Damage to Physical Assets: natural disaster loss, vandalism, terrorism.
Business Disruption and System Failure: hardware and software failures, telecommunications and utility outages.
Execution, Delivery and Process Management: data entry errors, unapproved access given to client accounts, collateral management
failures, incomplete legal documentation, non-client counterparty mis-performance and vendor disputes.
The inclusion of operational risk in these industries’ risk management processes has represented a major challenge. Before the onset
of these new regulations, and unlike credit risk and market risks, senior management rarely viewed operational risk management as
strategically significant. In many institutions, the management of operational risks was and still is spread out across different parts of
the organization.
Recently, financial institutions have started paying more attention to types of risks such as misconduct risk. Carney (2016),
Governor of the Bank of England, asserts that “the incidence of financial sector misconduct has risen to a level that has the potential to
create systemic risks by undermining trust in both financial institutions and markets.” Chaly et al. (2017) state that “root cause
analyses of many recent cases of misconduct in the financial sector, however, suggest that misconduct is not just the product of a few
individuals or bad processes, but is the result of wider organizational breakdowns. Often, large numbers of employees and managers
were either complicit in improper conduct, encouraged it, or turned a blind eye to troubling behavior.” It is interesting that RCA was
used in these particular instances, but not as a systematic diagnostic tool, as we advocate here.
In the UK, in the wake of the 2008 financial crisis, the Financial Conduct Authority, a financial regulatory body, was established in
April 2013, taking over responsibility for conduct and relevant prudential regulation from the Financial Services Authority and
supervising about 58,000 financial businesses.
In an EY report (Jackson, 2015), 89% of the 51 large banks surveyed report increased board and senior management attention to
misconduct risk. Almost all banks have increased the focus on non-financial risk, and many are now looking at it in a more granular
way — by sub-risk types such as misconduct, compliance, reputation, money laundering and information systems.
Banks and insurance companies must nowadays hold sufficient capital as a buffer in the case of unexpected losses due to these
newly measured risks. It is clear that even if these institutions are tempted to take the path of least resistance to model and calculate
these types of risk internally, regulators expect a higher level of understanding of operational risk than what the financial industry is
used to providing.
The size of fines and remediation costs associated with these event-risks means that losses from non-financial risks have been high
for many firms, particularly for global systemically important financial institutions (G-SIFIs). Since the global financial crisis,
regulators have assessed conduct-related punitive fines in excess of $320 billion and banks have been forced to spend hundreds of billions
more on vastly expanded governance, risk and compliance functions (Kupfer et al., 2018).
As a result, most banks have now begun enhancing operational controls and processes to identify control weaknesses. In many
firms, this is an intensification of existing processes. Some banks are also developing new tools and techniques to understand and track
these intrinsic risks more effectively.
In 2014, the Basel Committee on Banking Supervision conducted a survey of about 60 large banks regarding the implementation of
operational risk management programs (Basel Committee on Banking Supervision, 2014). Overall, while banks had implemented some
of the operational risk identification and assessment tools, other tools were not fully implemented or were not being effectively used
for risk management purposes.13
We are advocating here the systematic use of the RCA method as a powerful diagnostic tool to help with the process of identifying
key causes of incidents as well as key behavioral risks. One of the main benefits for banks and insurance companies will be to draw a
more complete risk map. Thus, they will be in a better position to control these risks and send a positive signal to regulators. This, in
turn, may help them reduce the level of their regulatory buffer capital.
The list of risks given by the regulators (Exhibit 2) is based on particular acts of misconduct that represent possible causes of
incidents. Using the RCA method, we are able to separate out causes that are of first and intermediate order from those that are at the
root of the problem. The advantage of isolating root causes is that, once they are identified and mitigated, they will, by definition,
prevent the problem from happening again.
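As an illustration of this separation (a sketch under our own assumptions about how the cause tree is stored, not the authors' implementation), one can walk the tree produced by repeated "why?" questions and label each cause by its position:

```python
# Hedged sketch: classify causes in an RCA tree as first-order, intermediate,
# or root. The tree maps each node (incident or cause) to the causes that
# explain it; causes that nothing further explains are the root causes.
from typing import Dict, List

def classify_causes(tree: Dict[str, List[str]], incident: str) -> Dict[str, str]:
    labels: Dict[str, str] = {}

    def walk(node: str, depth: int) -> None:
        for cause in tree.get(node, []):
            if not tree.get(cause):
                labels[cause] = "root"            # nothing deeper explains it
            elif depth == 0:
                labels[cause] = "first-order"     # directly explains the incident
            else:
                labels.setdefault(cause, "intermediate")
            walk(cause, depth + 1)

    walk(incident, 0)
    return labels

# Toy example in the spirit of repeated "why?" questioning.
tree = {
    "mis-sold product": ["sales target pressure"],
    "sales target pressure": ["incentive structure", "fear of job loss"],
    "incentive structure": [],
    "fear of job loss": [],
}
print(classify_causes(tree, "mis-sold product"))
# {'sales target pressure': 'first-order', 'incentive structure': 'root', 'fear of job loss': 'root'}
```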
Financial firms are increasingly interested in forward-looking risk assessments and prevention. In a recent McKinsey report (Baer
et al., 2017), the authors discuss the future of operational risk management, noting that a new trend is that of ‘debiasing’, i.e. raising
awareness of these biases in order to reduce their impact within the organization.
Our analysis identified (Hypothesis 1) that the vast majority of root causes emerge from the category of internal causes, and in
particular human causes linked to lack of accountability, fear, greed, interpersonal ties, conflicts, tiredness, lack of motivation, and
perception issues. The second type of root causes consisted of causes linked to organizational processes or managerial styles. Root causes
linked to malicious intent represented a small percentage of the total number of root causes (about 5%).

13 While many banks implemented distinct, multi-tiered operational risk management tools (e.g., Risk & Control Self-Assessments (RCSAs), scenario
analysis or business process mapping), other banks noted that they implemented only one multi-use tool (e.g., a scenario-based RCSA, a process-based
RCSA, etc.). The study's conclusion was that considerable management effort is required to ensure the bank-wide implementation of key tools.
These include key risk and performance indicators; external data collection and analysis; and comparative analysis, as well as action plans generated
in coordination with operational risk management tools.
By drilling down to a deeper level of understanding, we examined the presence of several types of biases (organizational,
heuristic and emotional) in the RCA cases. These behavioral biases can activate root causes of incidents. Thus, detecting these biases
may be crucial for identifying and controlling misconduct risk.
We find here that even though organizational biases such as Psychological Contract Breach appear to dominate at the level of the
entire sample (Hypothesis 2), the patterns that emerge are different for the banking vs. the insurance industry. Stress/Result Pressure
comes in first as a root cause bias for banks and the Availability/Halo bias is the top one for insurance companies. In the latter case, it is
mostly the Availability bias that is present, which indicates that decision processes may exhibit a form of conservatism, as is the case
when people rely on information that is readily familiar, without necessarily engaging in out-of-bounds test scenarios.
When examining the probability of a bias being close to root causes (Table 7), Psychological Contract Breach comes out on top and Trust,
Power, Politics and Conflict comes in second position for the whole sample. These findings confirm our Hypothesis 2.
A big difference is observed in Fig. 2 between the banking and insurance industry. Stress/Result Pressure, Fear and Greed biases are
much closer to root causes for the banking industry than for the insurance subsample. By comparison, the Availability/Halo and
Personality Disorders as well as the Confirmation biases are closer to roots for the insurance industry. These findings confirm our
Hypotheses 3 and 4.
When we cross correlate our criticality index (Fig. 5) with the severity of the incident, we find that the Psychological Contract
Breach bias’ high criticality is associated with higher severity of impact whereas Stress/Result Pressure’s high criticality is associated
with low impact incidents. The Incentive Structure and/or Distributive and Procedural Justice bias, while not highly ranked overall,
shows up as being more critical for high severity impacts. The Availability and Greed biases are also more critical for high severity
incidents, whereas it is the opposite for Fear (Hypothesis 5).
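For completeness, the severity split behind this comparison could be obtained by recomputing the criticality index separately for high- and low-severity cases; the snippet below reuses the hypothetical criticality_index helper and the cases DataFrame sketched after Table 7, together with an assumed severity column.

```python
# Hedged sketch: one 0-10 criticality score per bias, per severity level.
by_severity = {
    severity: criticality_index(subsample)
    for severity, subsample in cases.groupby("severity")
}
```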
Among the surprises was the absence of a significant impact of certain biases that dominate the social psychology literature, such as
Ego Depletion (which is ranked only in the bottom half by all our indicators). Other heuristic biases, strongly documented by Shefrin
(2016), did not seem to have much of an impact in terms of root causes. These were Conformism/Group Think and Herding Behavior,
which were ranked in the bottom third of importance by most of our indicators.
7. Conclusion
Behavioral risk management is a new field that centers on organizational and psychological biases. While such biases have been
documented in the finance industry for several decades, there is still a lack of diagnostic tools in these organizations’ audit and control
processes. We recommend the systematic adoption of such a tool (Root Cause Analysis) in the finance and insurance industries. This
tool is regularly used in other industries (chemical and heavy manufacturing) for diagnosing incidents. We extend the RCA
methodology to detect behavioral biases. We conduct a large scope field investigation of 19 major isolated incidents in 19 large European
banks and insurance companies over a two-year period. We illustrate the power of this method to unearth fundamental (or root) causes
of incidents and the behavioral biases that motivate these causes. The suppression of these biases would eliminate the possible
recurrence of the problem.
While this article is a pilot application of RCA to the finance industry, we put forth here testable hypotheses and document some
preliminary findings. We find that the vast majority of root causes are human causes linked to lack of accountability, fear, greed,
interpersonal ties, conflicts, tiredness, lack of motivation, and perception issues. The second type of root causes consists of causes linked to
organizational processes or managerial styles. Root causes linked to malicious intent represented a small percentage of the total
number of root causes (about 5%).
Emotional types of biases are more often at the root of incidents for banks than for insurance organizations. Stress/Result
Pressure, Fear and Greed biases are much closer to root causes for the banking industry than for the insurance subsample. By contrast, the
Availability/Halo, Personality Disorders as well as the Confirmation biases are closer to root causes for the insurance industry. Overall,
Cognitive Conservatism appears to be more critical for insurance companies than for banks as a root cause of incidents.
Overall, the bias closest to root causes is Psychological Contract Breach. The Psychological Contract Breach bias' high
criticality is associated with higher severity of impact. Interestingly, Greed and Incentive Structure/Distributive Justice are also root
causes for high severity incidents and not for low severity incidents, whereas it is the reverse for Fear. When examining the two
industries separately, the patterns are different for the banking industry vs. the insurance industry. Stress/Result Pressure comes in first
as a root cause bias for banks. However, Stress/Result Pressure is associated with low impact incidents for banks. The Availability/Halo
bias is the top one for insurance companies and is more critical for high severity incidents.
One of the main benefits for banks and insurance companies of identifying these biases is to be able to draw a more complete risk map
and thus be in a better position to control these risks and send a positive signal to regulators. In turn, this may help them reduce the
level of their regulatory buffer capital.
We have not focused in this article on the mitigating processes that could be put in place for avoiding future incidents. Once these
biases are identified, our managerial recommendation is to implement efficient mitigating measures. In particular, debiasing is a
method that aims at eliminating the detrimental effects of cognitive biases on people's judgments and decision making. Debiasing
interventions focus on numerous types of biases and heuristics, and have included for instance the hindsight bias (Sanna and Schwarz,
2003) and the overconfidence bias (Fischoff, 1982b).
Broadly speaking, debiasing interventions fall into four major categories: cognitive, technological, affective and motivational
(Larrick, 2004; Ludolph and Schulz, 2018). Cognitive strategies attempt to mitigate biases directly via teaching and training (e.g.
promoting insights and awareness) (Croskerry et al., 2013). Technological approaches use techniques that either elicit appropriate
decision making through aids or other means that are external to the individual (e.g., cognitive forcing strategies, cognitive aids,
restructuring tasks to fit with individuals’ natural thought processes). Affective strategies either help individuals identify the
importance of emotions or elicit emotional reactions to support decision making. Finally, motivational strategies seek to provide
external incentives (e.g., monetary rewards) or social benefits by holding people accountable for their decisions (Montibeller and Von
Winterfeldt, 2015).
Our pioneering field study already reveals some interesting patterns, which need to be further explored and can give rise to more
refined hypothesis development and theories. Future research can also analyze series of incidents within organizations, rather than
isolated incidents as we have done here.
Another extension of this research will be to examine the specific role of organizational culture in encouraging or steering certain
behavioral biases (Van Hoorn, 2017). In particular, we intend to add to the RCA method a new dimension that accounts for the
phenomenon of social amplification of risk. The social amplification of risk framework (Kasperson et al., 1988) aims at examining how
risk events interact with psychological, social, institutional, and cultural processes in ways that amplify or attenuate risk perceptions
and concerns, thereby shaping risk behavior.
Christophe Faugere: Conceptualization, Writing - original draft, Writing - review & editing, Formal analysis, Methodology.
Olivier Stul: Data curation, Investigation, Conceptualization, Methodology, Writing - original draft.
Supplementary material related to this article can be found, in the online version, at https://doi.org/10.1016/j.ribaf.2021.101382.
References
Ammerman, M., 1998. The Root Cause Analysis Handbook: A Simplified Approach to Identifying, Correcting, and Reporting Workplace Errors. Productivity Press, NY.
Aren, S., Aydemir, S.D., Şehitoğlu, Y., 2016. Behavioral biases on institutional investors: a literature review. Kybernetes 45 (10), 1668–1684.
Ayres, M., 2009. Guidebook for Airport Safety Management Systems. Applied Research Associates and International Safety Research Inc.
Baer, T., Heiligtag, S., Samandari, H., 2017. The Business Logic in Debiasing. McKinsey on Risk, 3, June, 10–17.
Balot, R.K., 2001. Greed and Injustice in Classical Athens. Princeton University Press, Princeton.
Basel committee on Banking Supervision, 2006. International Convergence of Capital Measurement and Capital Standards. Bank for International Settlements.
Basel Committee on Banking Supervision, 2014. Review of the Principles for the Sound Management of Operational Risk. Bank for International Settlements.
Baumeister, R.F., Bratslavsky, E., Muraven, M., Tice, D., 1998. Ego depletion: Is the active self a limited resource? J. Pers. Soc. Psychol. 74, 1252–1265.
Bell, D., 1982. Regret in decision making under uncertainty. Oper. Res. 30, 961–981.
Bell, D., 1985. Disappointment in decision making under uncertainty. Oper. Res. 33, 1–27.
Ben-David, I., Graham, J.R., Harvey, C.R., 2007. Managerial Overconfidence and Corporate Policies (No. w13711). National Bureau of Economic Research.
Bono, J.E., Vey, M.A., 2005. Toward understanding emotional management at work: a quantitative review of emotional labor research. In: Härtel, C.E., Zerbe, W.J.,
Ashkanasy, N.M. (Eds.), Emotions in Organizational Behavior. Erlbaum, Mahwah, NJ, pp. 213–233.
Burke, C.J., Tobler, P.N., Schultz, W., Baddeley, M., 2010. Striatal BOLD response reflects the impact of herd information on financial decisions. Front. Hum. Neurosci.
4, 48.
Buzan, T., 1977. Making the Most of Your Mind. Pan Books.
Cannon, W.B., 1915. Bodily Changes in Pain, Hunger, Fear, and Rage. Appleton-Century-Crofts, New York.
Carney, M., 2016. Building a Resilient and Open Global Financial System to Support Sustainable Cross-Border Investment. Financial Stability Board (August 30,
2016).
Chaly, S., Hennessy, J., Menand, L., Stiroh, K., Tracy, J., 2017. Misconduct Risk, Culture, and Supervision. Federal Reserve Bank of New York.
Champoux, J.E., 2011. Organizational Behavior: Integrating Individuals, Groups, and Organizations. Routledge, New York, NY.
Chang, C., 2018. A multi-study investigation of the role of psychological needs in understanding behavioural reactions to psychological contract breach. LSE and
Political Science, Ph.D. Thesis.
Choe, H., Bong-Chan, K., Stulz, R.M., 1999. Do foreign investors destabilize stock markets? The Korean experience in 1997. J. Financ. Econ. 54 (2), 227–264.
Conway, N., Briner, R., 2009. Fifty years of psychological contract research: what do we know and what are the main challenges. In: Hodgkinson, G.P., Ford, J.K.
(Eds.), International Review of Industrial and Organizational Psychology, 2009, Vol. 24. John Wiley & Sons, Ltd.
Costa, D., Carvalho, F., Moreira, B.C., do Prado, J., 2017. Bibliometric analysis on the association between behavioral finance and decision making with cognitive
biases such as overconfidence, anchoring effect and confirmation bias. Scientometrics. https://doi.org/10.1007/s11192-017-2371–2375.
Cropanzano, R., Greenberg, J., 1997. Progress in organizational justice: tunneling through the maze. In: Cooper, C.L., Robertson, I.T. (Eds.), International Review of
Industrial and Organizational Psychology, 12. Wiley, London, pp. 317–372.
Croskerry, P., Singhal, G., Mamede, S., 2013. Cognitive debiasing 2: impediments to and strategies for change. BMJ Qual. Saf. 22, ii65–ii72.
Cuny, X., Krawsky, G., 1970. Pratique de l’analyse d’accidents du travail dans la perspective socio-technique de l’ergonomie des systèmes. Trav. Hum. 217–228.
Dang, J., 2017. An updated meta-analysis of the ego depletion effect. Psychol. Res. 82 (4), 645–651. https://doi.org/10.1007/s00426-017-0862-x.
De Bondt, W.F.M., 1993. Betting on trends: intuitive forecasts of financial risk and return. Int. J. Forecast. 9, 355–371.
De Bondt, W.F.M., 1998. A portrait of the individual investor. Eur. Econ. Rev. 42, 831–844.
Dechêne, A., Stahl, C., Hansen, J., Wänke, M., 2010. The truth about the truth: a meta-analytic review of the truth effect. Personal. Soc. Psychol. Rev. 14 (2), 238–257.
Dimara, E., Franconeri, S., Plaisant, C., Bezerianos, A., Dragicevic, P., 2018. A task-based taxonomy of cognitive biases for information visualization. IEEE Trans. Vis.
Comput. Graph. https://doi.org/10.1109/TVCG.2018.2872577.
Dodier, N., 1995. Les hommes et les machines. Éditions Métallié.
Durham, C.C., Bartol, K.M., 2000. Pay for performance. In: Locke, E.A. (Ed.), Principles of Organizational Behavior. Blackwell, Oxford, England, pp. 150–165.
Edwards, W., 1968. Conservatism in human information processing. In: Klienmutz, B. (Ed.), Formal Representation of Human Judgment. John Wiley & Sons, New
York, pp. 17–52.
Ellsberg, D., 1961. Risk, ambiguity, and the Savage axioms. Q. J. Econ. 75 (3), 643–669.
Ferris, G.R., Russ, G.S., Fandt, P.M., 1989. Politics in organizations. In: Glacalone, R.A., Rosenfield, P. (Eds.), Impression Management in the Organization. Erlbaum,
Hillsdale, NJ, pp. 143–170.
Feyer, A.M., Williamson, A.M., 1998. Human factors in accident modelling. In: Stellman, J.M. (Ed.), Encyclopaedia of Occupational Health and Safety, fourth
edition. International Labour Organisation, Geneva.
Fischoff, B., 1982a. For those condemned to study the past: heuristics and biases in hindsight. In: Kahneman, D., Slovic, P., Tversky, A. (Eds.), Judgment under
Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, UK, pp. 80–98.
Fischoff, B., 1982b. Debiasing. In: Kahneman, D., Slovic, P., Tversky, A. (Eds.), Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press,
Cambridge, England, pp. 422–444.
Fox, C., Tversky, A., 1995. Ambiguity aversion and comparative ignorance. Q. J. Econ. 110 (3), 585–603.
Fox, S., Bizman, A., Huberman, O., 2009. Escalation of commitment: the effect of number and attractiveness of available investment alternatives. J. Bus. Psychol. 24
(4), 431–439.
Franzetti, C., 2011. Operational Risk Modelling and Management. Chapman & Hall/CRC finance series, CRC Press Taylor & Francis Group, NW.
Ganster, Daniel C., Rosen, Christopher C., 2013. Work stress and employee health: a multidisciplinary review. J. Manage. 39 (5), 1085–1122.
Goodman, P.S., Ramanujam, R., Carroll, J.S., Edmondson, A.C., Hofmann, D.A., Sutcliffe, K.M., 2011. Organizational errors: directions for future research. Res.
Organiz. Behav. 31, 151–176.
Gul, F., 1991. A theory of disappointment aversion. Econometrica 59 (3), 677–686.
Hagger, M.S., Wood, C., Stiff, C., Chatzisarantis, N.L.D., 2010. Ego depletion and the strength model of self-control: a meta-analysis. Psychol. Bull. 136, 495–525.
Haimes, Y.Y., 2009. Risk Modeling, Assessment, and Management, 3rd ed. Wiley, New York.
Hawkins, S., Hastie, R., 1990. Hindsight: biased judgments of past events after outcomes are known. Psychol. Bull. 107, 311–327.
Heath, L., Tindale, R., Edwards, J., Posavac, E., Bryant, F., Henderson-King, E., Suarez-Balcazar, Y., Myers, J., 1994. Applications of Heuristics and Biases to Social
Issues. Plenum Press.
Hoffrage, U., 2004. Overconfidence. In: Pohl, R.F. (Ed.), Cognitive Illusions: a Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Psychology
Press.
Huberman, G., 2001. Familiarity breeds investment. Rev. Financ. Stud. 14, 659–680.
Institut des Actuaires, 2014. L’ORSA quelques exemples de pratiques actuarielles. Groupe de Travail ORSA. (Mars).
Ishikawa, K., 1976. Guide to Quality Control, Asian Productivity Organization. Nordica International Ltd. Hong Kong.
Jackson, P., 2015. Risk Management Survey of Major Financial Institutions, EYG No. EK0383 1507-1575001 NY.
Janis, I.L., 1982. Groupthink, 2nd ed. Houghton Mifflin, Boston.
Journé, B., 2005. Etudier le management de l’imprévu: méthode dynamique d’observation in situ. Finance Contrôle Stratégie 8 (4), 63–91.
Kahneman, D., Tversky, A., 1979. Prospect theory: an analysis of decision under risk. Econometrica 47 (2), 263–291.
Kasperson, R.E., Renn, O., Slovic, P., Brown, H.S., Emel, J., Goble, R., Kasperson, J.X., Ratick, S., 1988. The social amplification of risk: a conceptual framework. Risk
Anal. 8, 177–187.
Knetsch, J.L., 1989. The endowment effect and evidence of nonreversible indifference curves. Am. Econ. Rev. 79 (5), 1277–1284.
Knetsch, J.L., Sinden, J.A., 1984. Willingness to pay and compensation demanded: experimental evidence of an unexpected disparity in measures of value. Q. J. Econ.
99 (4), 507–521.
Kupfer, J., Scott, S., Chiou, A., 2018. Culture & Conduct Risk in the Banking Sector. Starling Report (April, 2018).
Langer, E., 1975. Illusion of control. J. Pers. Soc. Psychol. 32, 311–328.
Larrick, R.P., 2004. Debiasing. Blackwell Handbook of Judgment and Decision Making, pp. 316–338.
Latino, Robert J., Latino, Kenneth C., Latino, Mark A., 2006. Root Cause Analysis: Improving Performance for Bottom-Line Results, Fourth Edition. CRC Press Taylor &
Francis Group, Boca Raton.
Leck, H., 2014. The importance of risk management for managers. In: Managerial Challenges of the Contemporary Society. Proceedings, Vol. 7. Babes-Bolyai
University, Cluj-Napoca, pp. 119–124.
Lerner, J.S., Keltner, D., 2000. Beyond valence: toward a model of emotion specific influences on judgment and choice. Cogn. Emot. 14, 473–493.
Lerner, J.S., Keltner, D., 2001. Fear, anger, and risk. J. Pers. Soc. Psychol. 81, 146–159.
Lerner, J.S., Li, Y., Valdesolo, P., Kassam, K.S., 2015. Emotion and decision making. Annu. Rev. Psychol. 66, 33.1–33.25.
Levesque, R.J.R., 2011. Magical thinking. In: Levesque, R.J.R. (Ed.), Encyclopedia of Adolescence. Springer, New York, NY.
Lo, A.W., 1999. The three P’s of total risk management. Financ. Anal. J. 55 (January/February (1)), 13–26.
Lo, A.W., 2016. The Gordon Gekko effect: the role of culture in the financial industry, in the financial services industry: the role of culture, governance, and financial
reporting. Econ. Policy Rev. 22 (1), 17–42.
Lord, C.G., Lepper, M., Ross, L., 1979. Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. J. Pers. Soc.
Psychol. 37, 2098–2110.
Ludolph, R., Schulz, P.J., 2018. Debiasing health-related judgments and decision making: a systematic review. Med. Decis. Mak. 38 (1), 3–13.
Lütje, T., Menkhoff, L., 2007. What drives home bias? Evidence from fund managers’ views. Int. J. Financ. Econ. 12 (1), 21–35.
Malmendier, U., 2018. Behavioral corporate finance. In: Bernheim, D., DellaVigna, S., Laibson, D. (Eds.), Handbook of Behavioral Economics. Elsevier.
Mann, H.B., Whitney, D.R., 1947. On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18, 50–60.
Marginson, D., McAulay, L., 2008. Exploring the debate on short-termism: a theoretical and empirical analysis. Strateg. Manage. J. 29 (3), 273–292.
Meikle, N.L., Tenney, E.R., Moore, D.A., 2016. Overconfidence at work: does overconfidence survive the checks and balances of organizational life? Res. Organ.
Behav. 36, 121–134.
Miller, C.C., Cardinal, L.B., Glick, W.H., 1997. Retrospective reports in organizational research: a reexamination of recent evidence. Acad. Manag. J. 40 (1), 189–204.
Millward, L., 2005. Understanding Occupational and Organizational Psychology. Sage Publications, Thousand Oaks, California.
Montibeller, G., Von Winterfeldt, D., 2015. Cognitive and motivational biases in decision and risk analysis. Risk Anal. 35 (7), 1230–1251.
Moore, D.A., Schatz, D., 2017. The three faces of overconfidence. Soc. Personal. Psychol. Compass 11 (8), 1–12.
Mullins, D.W., 1991. Foreword. In: Jacobs, M.T. (Ed.), Short-Term America: The Causes and Cures of our Business Myopia. Harvard Business School Press, Boston, MA.
Murata, A., Nakamura, T., 2014. Basic study on prevention of human error -how cognitive biases distort decision making and lead to crucial accidents-. Proceedings of
the 5th International Conference on Applied Human Factors and Ergonomics AHFE 2014 136–141.
Murata, A., Yoshimura, H., 2015. Statistics of a variety of cognitive biases in decision making in crucial accident analyses. Procedia Manuf. 3, 3898–3905.
Nisbett, R., Wilson, T., 1977. Telling more than we can know: verbal reports on mental processes. Psychol. Rev. 84, 231–259.
Pohl, R.F., 2016. Cognitive Illusions: Intriguing Phenomena in Thinking, Judgement and Memory. Psychology Press.
Pojasek, R.B., 2000. Asking “why” five times. Environ. Qual. Manag. 10 (Autumn (1)), 79–84.
Rode, C., Cosmides, L., Hell, W., Tooby, J., 1999. When and why do people avoid unknown probabilities in decisions under uncertainty? Testing some predictions
from optimal foraging Theory. Cognition 72 (3), 269–304.
Rosenzweig, P.M., 2014. The Halo Effect and the Eight Other Business Delusions that Deceive Managers. Free Press, New York, NY.
Rousseau, D.M., 1989. Psychological and implied contracts in organizations. Empl. Responsib. Rights J. 2, 121–139.
Sanna, L.J., Schwarz, N., 2003. Debiasing the hindsight bias: the role of accessibility experiences and (mis) attributions. J. Exp. Soc. Psychol. 39 (3), 287–295.
Scharfstein, D.S., Stein, J.C., 1990. Herd behavior and investment. Am. Econ. Rev. 80, 465–479.
Shefrin, H., 2000. Beyond Greed and Fear: Understanding Behavioral Finance and the Psychology of Investing. Harvard Business School Press, Boston, MA.
Shefrin, H., 2010. How psychological pitfalls generated the global financial crisis. In: Siegel, L.B. (Ed.), Voices of Wisdom: Understanding the Global Financial Crisis.
Research Foundation of CFA Institute, Charlottesville, VA.
Shefrin, H., 2016. Behavioral Risk Management: Managing the Psychology That Drives Decisions and Influences Operational Risk. Palgrave Macmillan.
Shefrin, H., Statman, M., 1985. The disposition to sell winners too early and ride losers too long: theory and evidence. J. Finance 40 (3), 777–790.
Slovic, P., Finucane, M., Peters, E., MacGregor, D.G., 2002. The affect heuristic. In: Gilovich, T., Griffin, D., Kahneman, D. (Eds.), Heuristics and Biases: The
Psychology of Intuitive Judgment. Cambridge University Press, pp. 397–420.
Suto, M., Toshino, M., 2005. Behavioural biases of Japanese institutional investors: fund management and corporate governance. Corp. Gov. Int. Rev. 13 (4), 466–477.
Tambiah, S., 1990. Magic, Science, Religion, and the Scope of Rationality. Cambridge University Press, Cambridge, UK.
Taylor, D.M., Doria, J.R., 1981. Self-serving bias and group-serving bias in attribution. J. Soc. Psychol. 113 (2), 201–211.
Thaler, R., 1980. Toward a positive theory of consumer choice. J. Econ. Behav. Organ. 39, 36–90.
Thimann, C., 2014. How Insurers Differ from Banks: Implications for Systemic Regulation. VoxEU.org, CEPR Policy Portal. https://voxeu.org/article/
how-insurers-differ-banks-implications-systemic-regulation.
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: heuristics and biases. Science 185 (4157), 1124–1131.
Tversky, A., Kahneman, D., 1981. The framing of decisions and the psychology of choice. Science 211 (4481), 453–458.
Tversky, A., Kahneman, D., 1992. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5 (4), 297–323.
Tyrer, P., 2014. Personality disorders in the workplace. Occup. Med. (Chic Ill) 64, 566–568.
Van Hoorn, A., 2017. Organizational culture in the financial sector: evidence from a cross-industry analysis of employee personal values and career success. J. Bus.
Ethics 146 (2), 451–467. https://doi.org/10.1007/s10551-015-2932-6.
Wagner, J.A., Hollenbeck, J.R., 2010. Organizational Behavior: Securing Competitive Advantage, 1st ed. Routledge, 270 Madison Ave, New York, NY 10016.
Wason, P., 1966. Reasoning. In: Foss, B. (Ed.), New Horizons in Psychology. Penguin, Harmondsworth, pp. 131–151.
Wikipedia, 2017. List of Cognitive Biases — Wikipedia, the Free Encyclopedia (Accessed 23 July 2017) [Online]. Available: https://en.wikipedia.org/w/index.php?
title=List of cognitive biases&oldid=791032058.
Wilcoxon, F., 1945. Individual comparisons by ranking methods. Biometrics 1, 80–83.
Williamson, O.E., 1985. The Economic Institutions of Capitalism. The Free Press, New York.
Yonker, S., 2013. Do Managers Give Hometown Labor an Edge? US Census Bureau Center for Economic Studies Paper No. CES-WP-13-16.