
The Critical Incident Technique in Service Research
Dwayne D. Gremler
Bowling Green State University

The Critical Incident Technique (CIT) has been used in a variety of service contexts in recent years to explore service research issues and has been instrumental in advancing our understanding of these issues. Despite the popularity of this methodology, no published research synthesis systematically examines this research. The primary purpose of this study is to review the use of the CIT method in service research to (a) help current and future researchers employing the CIT method to examine their methodological decisions closely and (b) suggest guidelines for the proper application and reporting of the procedures involved when using this method. The study provides an overview of the CIT method, reports the results of a research synthesis conducted of 141 CIT studies appearing in service marketing and management publications, discusses implications for service research, and suggests guidelines for researchers employing this method.

Keywords: Critical Incident Technique; service research; synthesis; method

    Qualitative methods, like their quantitative cousins, can be systematically evaluated only if their canons and procedures are made explicit.
    —Corbin and Strauss (1990, p. 4)

The amount of service research has exploded in the past three decades, and a variety of methods and techniques have been employed to study marketing and management issues in service contexts. One particular approach, the Critical Incident Technique (CIT), has been used frequently in this research in recent years. Although the CIT method appeared in the marketing literature as early as 1975 (Swan and Rao 1975), the major catalyst for use of the CIT method in service research appears to have been a Journal of Marketing study conducted by Bitner, Booms, and Tetreault (1990) that investigated sources of satisfaction and dissatisfaction in service encounters. Since Bitner and her colleagues’ seminal article, more than 140 CIT studies have appeared in marketing (or marketing-related) literature. A review of these studies suggests this methodology has been used in a variety of ways to explore services marketing and management issues. Despite the recent popularity of this methodology, to date, no published research synthesis has systematically examined this research.

A research synthesis of this kind serves to integrate and systematically critique past research (Cooper 1998) and can help current and future researchers employing the CIT method to examine their methodological decisions closely. Other research synthesis studies, such as Gardner’s (1985) study of mood states or Tripp’s (1997) analysis of services advertising research, have provided researchers with a comprehensive summary of findings

The author gratefully acknowledges Susan Kleine, David Altheide, and three anonymous reviewers for their constructive comments
on previous drafts of this article, as well as Amy Rodie and Bob Ames for their assistance in collecting Critical Incident Technique (CIT)
studies, Candy Gremler for her data entry and proofreading efforts, and the faculty at Maastricht University for providing the inspiration
for this study. Correspondence concerning this article should be addressed to Dwayne D. Gremler, Department of Marketing, College of
Business Administration, Bowling Green State University, Bowling Green, OH 43403; e-mail: [email protected].
Journal of Service Research, Volume 7, No. 1, August 2004 65-89
DOI: 10.1177/1094670504266138
© 2004 Sage Publications
66 JOURNAL OF SERVICE RESEARCH / August 2004

across studies and made suggestions to encourage further investigations. Therefore, the primary purpose of this study is to review the use of the CIT method in service research and then propose guidelines for the application and reporting of the method in future studies.

The article is organized as follows. First, a brief overview of the CIT method and a discussion of both the strengths and drawbacks of this method are presented and its contribution to service research illustrated. Second, the procedures employed to collect and analyze CIT studies included in this study are described. Third, the results of the research synthesis are reported. The article concludes with a discussion of implications for service researchers and proposes a framework for conducting CIT studies and reporting their results.

OVERVIEW OF THE CRITICAL INCIDENT TECHNIQUE

CIT, a method that relies on a set of procedures to collect, content analyze, and classify observations of human behavior, was introduced to the social sciences by Flanagan (1954) 50 years ago. Initially, Flanagan conducted a series of studies focused on differentiating effective and ineffective work behaviors; in the beginning, his research teams observed events, or “critical incidents,” and over time reports provided by research subjects were used in place of direct observation. Since its introduction, the CIT method has been used in a wide range of disciplines. Chell (1998) provided the following description of the CIT method:

    The critical incident technique is a qualitative interview procedure which facilitates the investigation of significant occurrences (events, incidents, processes, or issues) identified by the respondent, the way they are managed, and the outcomes in terms of perceived effects. The objective is to gain understanding of the incident from the perspective of the individual, taking into account cognitive, affective, and behavioral elements. (p. 56)

Bitner, Booms, and Tetreault (1990) defined an incident as an observable human activity that is complete enough to allow inferences and predictions to be made about the person performing the act. A critical incident is described as one that makes a significant contribution, either positively or negatively, to an activity or phenomenon (Bitner, Booms, and Tetreault 1990; Grove and Fisk 1997). Critical incidents can be gathered in various ways, but in service research, the approach generally asks respondents to tell a story about an experience they have had.

In initially describing the CIT method, Flanagan (1954) provided a very detailed description of the purpose of the method and the processes to be used in conducting CIT research, and very few changes have been suggested to the method since his seminal Psychological Bulletin article. In particular, once the stories (critical incidents) have been collected, content analysis of the stories takes place.1 In this data analysis, two tasks have to be tackled: the decision about a general frame of reference to describe the incidents and the inductive development of main and subcategories. In performing these tasks, the researcher considers the general aim of the study, the ease and accuracy of classifying the incidents, and the relation to previously developed classification schemes in this area (Neuhaus 1996). Information contained in the stories is carefully scrutinized to identify data categories that summarize and describe the incidents (Grove and Fisk 1997; Stauss 1993). The main categories of classification can either be deduced from theoretical models or formed on the basis of inductive interpretation (Stauss 1993). Generally, the goal of the content analysis is a classification system to provide insights regarding the frequency and patterns of factors that affect the phenomenon of interest.

1. For detailed descriptions of the Critical Incident Technique (CIT) method, see Chell (1998) and Stauss (1993).

Strengths and Advantages of the CIT Method

The CIT method has been described by service researchers as offering a number of benefits. First, the data collected are from the respondent’s perspective and in his or her own words (Edvardsson 1992). The CIT method therefore provides a rich source of data by allowing respondents to determine which incidents are the most relevant to them for the phenomenon being investigated. In so doing, the CIT is a research method that allows respondents as free a range of responses as possible within an overall research framework (Gabbott and Hogg 1996). With the CIT method, there is no preconception or idiosyncratic determination of what will be important to the respondent (de Ruyter, Perkins, and Wetzels 1995); that is, the context is developed entirely from the respondent’s perspective (Chell 1998). Thus, the CIT method reflects the normal way service customers think (Stauss 1993) and does not force them into any given framework. During an interview, respondents are simply asked to recall specific events; they can use their own terms and language (Stauss and Weinlich 1997). The CIT method produces unequivocal and very concrete information as respondents have the opportunity to give a detailed account of their own experiences (Stauss and Weinlich 1997). Thus, CIT is an attractive method of investigation because it does not restrict observations to a limited set of variables or activities (Walker and Truly 1992).

Second, this type of research is inductive in nature (Edvardsson 1992). Consequently, the CIT method is especially useful (a) when the topic being researched has been sparingly documented (Grove and Fisk 1997), (b) as an exploratory method to increase knowledge about a little-known phenomenon, or (c) when a thorough understanding is needed when describing or explaining a phenomenon (Bitner, Booms, and Tetreault 1990). CIT can be particularly effective when used in developing the conceptual structure (i.e., hypotheses) to be used and tested in subsequent research (Walker and Truly 1992). The CIT method does not consist of a rigid set of principles to follow, but it can be thought of as having a rather flexible set of rules that can be modified to meet the requirements of the topic being studied (Burns, Williams, and Maxham 2000; Hopkinson and Hogarth-Scott 2001; Neuhaus 1996). CIT does not rely on a small number of predetermined components and allows for interaction among all possible components in the service (Koelemeijer 1995); indeed, the CIT method is effective in studying phenomena for which it is hard to specify all variables a priori (de Ruyter, Kasper, and Wetzels 1995). In summary, the CIT is an inductive method that needs no hypotheses and where patterns are formed as they emerge from the responses, allowing the researcher to generate concepts and theories (Olsen and Thomasson 1992).

Third, the CIT method can be used to generate an accurate and in-depth record of events (Grove and Fisk 1997). It can also provide an empirical starting point for generating new research evidence about the phenomenon of interest and, given its frequent usage in a content analytic fashion, has the potential to be used as a companion research method in multimethod studies (Kolbe and Burnett 1991).

Fourth, the CIT method can provide a rich set of data (Gabbott and Hogg 1996). In particular, the respondent accounts gathered when using this approach provide rich details of firsthand experiences (Bitner, Booms, and Mohr 1994). CIT can be adapted easily to research seeking to understand experiences encountered by informants (Burns, Williams, and Maxham 2000), particularly in service contexts. The verbatim stories generated can provide powerful and vivid insight into a phenomenon (Zeithaml and Bitner 2003) and can create a strong memorable impression on management when shared throughout an organization. The CIT method provides relevant, unequivocal, and very concrete information for managers (Stauss 1993) and can suggest practical areas for improvement (Odekerken-Schröder et al. 2000). CIT has been described as “a powerful tool which [yields] relevant data for practical purposes of actioning improvements and highlighting the management implications” (Chell and Pittaway 1998, p. 24). Critical incidents can also be easily communicated to customer-contact personnel, particularly when describing what behaviors to do and not do in order to satisfy customers (Zeithaml and Bitner 2003).

Finally, the CIT method is particularly well suited for use in assessing perceptions of customers from different cultures (Stauss and Mang 1999). In their study, de Ruyter, Perkins, and Wetzels (1995) characterized the CIT method as a “culturally neutral method” that invites consumers to share their perceptions on an issue, rather than indicate their perceptions to researcher-initiated questions. In particular, they contend CIT is a less culturally bound technique than traditional surveys—there is no a priori determination of what will be important.

Drawbacks and Limitations of the CIT Method

Although the benefits of using the CIT method are considerable, the method has also received some criticism by scholars. For example, the CIT method has been criticized on issues of reliability and validity (Chell 1998). In particular, respondent stories reported in incidents can be misinterpreted or misunderstood (Edvardsson 1992; Gabbott and Hogg 1996). Similarly, problems may also arise as a result of ambiguity associated with category labels and coding rules within a particular study (Weber 1985).

CIT is a naturally retrospective research method. Thus, the CIT method has been criticized as having a design that may be flawed by recall bias (Michel 2001). Similarly, the CIT method may result in other undesirable biases, such as consistency factors or memory lapses (Singh and Wilkes 1996). Indeed, the CIT method relies on events being remembered by respondents and requires the accurate and truthful reporting of them. An incident may have taken place some time before the collection of the data; thus, the subsequent description may lead the respondent to reinterpret the incident (Johnston 1995).

The nature of the CIT data collection process requires respondents to provide a detailed description of what they consider to be critical incidents. However, respondents may not be accustomed to or willing to take the time to tell (or write) a complete story when describing a critical incident (Edvardsson and Roos 2001). Because the technique requires respondents to take time and effort to describe situations in sufficient detail, a low response rate is likely (Johnston 1995).

Generally speaking, however, CIT has been demonstrated to be a sound method since Flanagan (1954) first presented it. Relatively few modifications have been suggested to the method in the 50 years since it was introduced, and minimal changes have been made to Flanagan’s proposed approach.

The Role of the CIT Method in Service Research

Service researchers have found CIT to be a valuable tool, as the analysis approach suggested by the CIT method often results in useful information that is more rigorously defined than many other qualitative approaches. It allows researchers to focus on a very specific phenomenon because it forces them to define the “specific aim” of their study and helps identify important thematic details, with vivid examples to support their findings. Two studies, Bitner, Booms, and Tetreault’s (1990) study of service encounters and Keaveney’s (1995) study of service switching, illustrate the impact the CIT method has had on service research.

Bitner, Booms, and Tetreault’s (1990) study focusing on service encounters provides an example of the value of the CIT method to service research. Their analysis of 700 critical service encounters in three industries, examined from the perspective of the customer, led to the identification of three types of employee behaviors (ultimately labeled recovery, adaptability, and spontaneity) as sources of satisfaction and dissatisfaction in service encounters. Their study was one of the first to identify specific employee behaviors associated with customer satisfaction and dissatisfaction. Prior to their research, much of what scholars understood about such evaluations was limited to global assessments of satisfaction or abstract concepts (e.g., service quality). The CIT method allowed the authors to capture vivid details and resulted in the identification of important themes that a literature search, quantitative research, or even depth interviews would not have illuminated—particularly at a time when scholars knew very little about service encounters.

On the basis of the knowledge gained from the 1990 study, Bitner and her colleagues have developed a programmatic stream of research on service encounters by creatively applying the CIT method in a variety of ways. For example, Gremler and Bitner (1992) extended the generalizability of the 1990 study by investigating service encounters across a broad range of service industries; their findings indicate that the initial set of employee behaviors that lead to satisfaction or dissatisfaction in service encounters is robust across contexts. In a later study, Bitner, Booms, and Mohr (1994) employed the CIT method to examine the service encounter from the perspective of the firm—specifically, the customer-contact employee. Doing so expanded the initial framework by identifying a fourth group of behaviors (employee response to problem customers, labeled coping) not identified when only customers were studied. In a recent study, Bitner and colleagues used the CIT method to examine self-service encounters where there is no employee involved in service delivery (Meuter et al. 2000). The findings from this study suggest a different set of factors are sources of satisfaction and dissatisfaction when service is delivered through technology-based means. As these studies suggest, the CIT method is flexible enough to allow service encounters to be extensively studied in a variety of ways.

In addition to Bitner’s own programmatic research, the findings from the 1990 CIT study have stimulated much additional research by other scholars. Three studies illustrate this point.2 For example, Arnould and Price’s (1993) examination of the “extended service encounter” subsequently built on Bitner’s research by investigating a context in which an extraordinary service experience can occur in service encounters that may continue for several days. Bitner’s research on service encounters has focused primarily on customers’ cognitive responses and/or assessments of service encounters; van Dolen et al. (2001) have extended service encounter research by focusing on understanding affective consumer responses in service encounters by examining the emotional content in narratives of critical incidents. Kelley, Hoffman, and Davis’s (1993) study developed a typology of retail failures and recoveries, a direct result of wanting to extend the work of Bitner, Booms, and Tetreault (1990) in the area of service recovery. All three studies were stimulated by findings resulting from Bitner’s use of the CIT method to study service encounters.

Keaveney’s (1995) study on service switching also illustrates the contribution that the use of the CIT method has made to service research. In her study, Keaveney employed the CIT method to understand reasons service customers switch providers. Her analysis of more than 800 critical behaviors of service firms (critical incidents) led to the identification of eight distinct categories of reasons why customers switch providers. Prior to her CIT study, most research attempting to identify causes of service switching focused on issues related to dissatisfaction. Although some causes Keaveney identified are fairly predictable dissatisfaction-related issues (e.g., core service failure, service encounter failure, recovery failure), other causes fall outside the satisfaction-dissatisfaction paradigm (i.e., customers were satisfied, but they still switched). Had Keaveney stayed within the satisfaction paradigm, as most of the researchers studying consumer switching had been doing prior to then, she would never

2. A search of citations of the Bitner, Booms, and Tetreault (1990) article on the Social Sciences Citation Index revealed more than 230 references to the study to date. It is clearly beyond the scope of this article to point out all of the research triggered by this study. The three studies listed here illustrate the extent to which the findings from the initial CIT study on service encounters stimulated further research.
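The content-analytic procedure described in the overview above—develop a category scheme, assign each collected incident to a category, and tally frequencies—can be sketched in a few lines. The category labels below are Bitner, Booms, and Tetreault’s (1990); the keyword cues and sample incidents are hypothetical and purely illustrative, since actual CIT studies rely on trained human judges rather than keyword matching:

```python
from collections import Counter

# Category labels from Bitner, Booms, and Tetreault (1990); the keyword
# cues and the sample incidents below are hypothetical, for illustration only.
CATEGORY_CUES = {
    "recovery":     ["refund", "apolog", "compensat"],
    "adaptability": ["special request", "accommodat", "adjusted"],
    "spontaneity":  ["unexpected", "surprise", "went out of"],
}

def classify(incident):
    """Assign an incident narrative to the first category whose cues match."""
    text = incident.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in text for cue in cues):
            return category
    return "unclassified"

incidents = [
    "The waiter apologized and refunded our meal after the mix-up.",
    "They accommodated my special request for a late checkout.",
    "The driver went out of his way to return my lost phone.",
]

# The end product of the content analysis: category frequencies.
frequencies = Counter(classify(story) for story in incidents)
print(dict(frequencies))  # {'recovery': 1, 'adaptability': 1, 'spontaneity': 1}
```

The dictionary of frequencies stands in for the classification system Flanagan’s procedure ultimately produces: a count of how often each category of behavior appears across the collected incidents.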

have identified convenience, competition, involuntary switching, and pricing as four major causes of switching not related to dissatisfaction. Although each of those four issues had been discussed in the literature, until her study, they had not been considered in one research project. However, as Keaveney (1995) pointed out, all of these issues need to be considered if service switching behavior is to be understood. Thus, Keaveney’s application of the CIT method has opened the door for a much broader and more comprehensive switching behavior paradigm.

As these two studies indicate, the CIT method provides a valuable means for service researchers to rigorously study a phenomenon and identify issues not previously considered. Given the recent popularity and potential usefulness of the CIT method in service research, a research synthesis was undertaken to assess the nature of past applications of the technique.

RESEARCH SYNTHESIS METHOD

The following paragraphs describe the sample of studies included in this research synthesis as well as how the studies were coded and analyzed.

Sample

Studies that referenced the CIT method and were published in marketing-related journals from 1975 through 2003 were considered for inclusion in the data set.3 A search of leading marketing, consumer behavior, services marketing, and services management journals was undertaken to identify studies employing the CIT method. The initial set of journals included the Journal of Marketing, the Journal of Marketing Research, the Journal of Consumer Research, the Journal of the Academy of Marketing Science, the Journal of Retailing, the Journal of Business Research, the European Journal of Marketing, the Journal of Service Research, the International Journal of Service Industry Management, the Journal of Services Marketing, the Journal of Satisfaction, Dissatisfaction, and Complaining Behavior, and the Service Industries Journal. Conference proceedings of the American Marketing Association, the Association for Consumer Research, Quality in Services (QUIS), and Frontiers in Services Marketing were also considered. Other published CIT studies were identified through computerized (Internet) searches using ABI/Inform, Uncover, and Business Source Premier electronic databases. Finally, a “snowball” technique was employed by perusing CIT studies collected from the above sources to identify other CIT studies referenced (e.g., book chapters).4

3. The original intent of this study was to focus on CIT studies conducted in marketing. However, as noted later in the article, nearly all of the CIT studies in marketing included in this study were conducted in service contexts. For that reason, most of the article refers to the use of the CIT method in service research.

4. Although a concerted effort was made to include every CIT study published in marketing (or marketing-related) outlets during the past three decades, additional research may have been unintentionally omitted. However, the studies included in the research synthesis can be presumed to constitute a representative and comprehensive sampling of CIT studies in service research during the 1975-2003 period.

The initial collection of studies referencing the CIT method numbered 168. To be included in the sample for further analysis, a study had to meet three criteria. First, the study had to be conducted in a marketing or marketing-related context. Second, the study had to actually collect CIT data as part of the study and not merely discuss the merits of using the CIT method. Third, the study had to provide some discussion of how the CIT method was employed. Of the 168 CIT studies identified, 19 studies described how to use or apply the CIT method (or suggested the method be used) but did not actually do so, and another 8 studies referenced “critical incidents” or CIT but did not explicitly discuss how the CIT method was employed. These 27 studies were excluded from the sample. The resulting sample of 141 studies includes 106 journal articles, 27 papers published in conference proceedings, and 8 book chapters.5 The diversity of journal articles and other publications indicates the extent to which the CIT method has been used in service research in the past three decades. Although Swan and colleagues (Swan and Combs 1976; Swan and Rao 1975) introduced CIT to the marketing literature in the mid-1970s, the method was not widely used in marketing until the 1990s. To illustrate, nearly all of the studies included in the sample for analysis (125 out of 141) were published after 1990, the year of Bitner, Booms, and Tetreault’s (1990) seminal work. This article seems to have served as a springboard for the use of the CIT method in service research; indeed, 101 of the 125 studies published after 1990 cite the Bitner, Booms, and Tetreault (1990) article. Table 1 displays the distribution, by year, of the articles included in the sample. The major sources for CIT studies (those publishing at least five CIT studies) include six journals (the International Journal of Service Industry Management; the Journal of Marketing; the Journal of Satisfaction, Dissatisfaction, and Complaining Behavior; the Journal of Services Marketing; Managing Service Quality; and The Service Industries Journal) and two conference proceedings (Association for Consumer Research and American Marketing Association).

5. All identified CIT studies were included in the sample if they met the criteria for inclusion; no screening of the studies was made based on the quality of the manuscript or the publication outlet. Thus, any explicit (or implicit) assessments or criticisms of the application of the CIT method in service research on the issues explored in this study must be cautiously made, as there was no attempt made to include only the “better” studies. A complete list of the 141 CIT studies included in the analysis is provided in the appendix.
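The screening procedure and sample tallies reported above lend themselves to a quick sanity check. The study records below are hypothetical stand-ins; the counts being verified (168 identified, 27 excluded, 141 retained, of which 125 appeared after 1990) come from the text and Table 1:

```python
# Hypothetical candidate records illustrating the three inclusion criteria
# used to screen the 168 studies initially identified.
candidates = [
    {"id": 1, "marketing_context": True, "collected_cit_data": True,  "discusses_application": True},
    {"id": 2, "marketing_context": True, "collected_cit_data": False, "discusses_application": True},   # only advocates the method
    {"id": 3, "marketing_context": True, "collected_cit_data": True,  "discusses_application": False},  # no procedural detail
]

def meets_criteria(study):
    """A study must satisfy all three criteria to enter the sample."""
    return all((study["marketing_context"],
                study["collected_cit_data"],
                study["discusses_application"]))

sample = [s for s in candidates if meets_criteria(s)]
print([s["id"] for s in sample])  # only study 1 survives screening

# Cross-checks of the tallies reported in the text:
assert 168 - (19 + 8) == 141  # studies identified minus the two excluded groups
assert 106 + 27 + 8 == 141    # journal articles + proceedings papers + book chapters
assert 141 - 125 == 16        # per Table 1, 16 studies appeared in 1990 or earlier
```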

TABLE 1
Distribution of Critical Incident Technique (CIT) Manuscripts Published by Year

Year         Number of Manuscripts   Percentage of Sample
1975-1979              2                     1.4
1980-1984              3                     2.1
1985-1989              6                     4.3
1990                   5                     3.6
1991                   1                     0.7
1992                   7                     5.0
1993                   5                     3.6
1994                   6                     4.3
1995                  13                     9.2
1996                  12                     8.5
1997                   7                     5.0
1998                  14                     9.9
1999                  14                     9.9
2000                  12                     8.5
2001                  13                     9.2
2002                   9                     6.4
2003                  12                     8.5
Total                141                   100.0

Classification of CIT Studies

CIT data can be used both quantitatively and qualitatively and, indeed, have been used both ways in service research. Chell and Pittaway (1998) briefly described both uses:

    Used quantitatively it can assess the type, nature and frequency of incidents discussed which when linked with other variables . . . can provide important insights into general relationships. Used qualitatively the CIT provides more discursive data which can be subjected to narrative analysis and be coded and categorized according to the principles of grounded theory. (p. 26)

Given the different ways CIT-generated data are used, each of the 141 CIT studies included in the sample was classified as one of three general types: (a) studies in which data generated from the CIT method are not directly analyzed but rather are combined with another method (e.g., a survey or an experiment), (b) studies analyzing the CIT data primarily in an interpretive fashion, and (c) CIT studies employing content analytic methods.

CIT studies combined with other methods. In 19 studies, the CIT method is employed primarily to produce data that are not the primary focus of the study; that is, it is used in combination with another empirical method. To illustrate, in these studies, the researchers use data generated from the CIT method for such purposes as (a) creating a frame of reference for the respondent (e.g., Folkes 1984; Hausknecht 1988; Singh and Wilkes 1996), (b) assisting in the development of a quantitative survey instrument (e.g., Martin 1996; Miller, Craighead, and Karwan 2000) or in a dramatic script (Harris, Harris, and Baron 2003), or (c) creating realistic scenarios for an experiment (e.g., Swanson and Kelley 2001). In many of these studies, respondents are asked to think of a particular event and to write down specifics (i.e., tell a story) related to this event. However, the primary focus in these studies is the analysis of a subsequent (non-CIT) data set; consequently, the researchers generally provide a limited discussion about the CIT data and data collection procedures, and there is no report of any analysis of the respondents’ stories. These “combination” studies are included in the discussion of study contexts and research topics but are excluded from the analysis and discussion of the content analytic CIT studies presented later.

CIT studies employing interpretive methods. CIT studies in marketing contexts have typically not employed interpretive or postmodern approaches (Hopkinson and Hogarth-Scott 2001). Indeed, of the 141 studies, only 7 employ an interpretive approach exclusively in analyzing the data. These 7 studies generally employ an interpretive methodology to identify themes emerging from analysis of the critical incidents. Examples of such studies include Guiry (1992), Hedaa (1996), and Mattsson (2000). An additional four studies analyze the CIT data using both content analysis and interpretive methods. In these studies, a content analysis approach is used to reveal what events occurred in the critical incidents, and the interpretive methodology is then used as a means of interpreting and understanding the experience (cf. Guiry 1992). Examples of such studies include Mick and DeMoss (1990) and Ruth, Otnes, and Brunel (1999). Because such a small number of CIT studies employ interpretive methods, an assessment of the application of the interpretive method in these studies is not included in this study except in the analysis of study contexts and research topics; however, the issue of employing interpretive methods to analyze CIT data is addressed in the Recommendations section.

CIT studies employing content analytic methods. Most CIT studies identified typically treat respondent stories as reports of facts. As a result, analysis typically focuses on the classification of such reports by assigning incidents into descriptive categories to explain events using a content analysis approach (cf. Hopkinson and Hogarth-Scott 2001) instead of using interpretive approaches. A total of 115 CIT studies were classified in this manner.6 Because an overwhelming majority of the empirical CIT studies

6. The four studies that include both content analysis and interpretative methods are included in the subsequent discussion of CIT studies that employ a content analytic approach.

published in marketing have primarily employed a content analysis approach in analyzing the data, the research synthesis focuses on these studies when investigating issues concerned with CIT data analysis procedures.

Coding and Analysis of Studies

To assess the CIT studies, 51 variables were identified and include such issues as study contexts, research topics, sampling, and data analysis methods. Many variables were borrowed, when applicable, from Kolbe and Burnett's (1991) synthesis of content analysis research. After the variables were identified, the author analyzed the 141 articles separately and coded each of the 51 variables, when applicable, for every study. Once the studies were coded, an independent judge coded the articles separately. Disagreements in coding for any variables were resolved by discussing key terms and jointly reviewing the articles until an agreement was reached.

CIT STUDY CONTEXTS AND RESEARCH TOPICS

Two areas of interest in examining the CIT studies include identification of the specific contexts in which CIT has been used as well as the research topics investigated. The following discussion examines both study contexts and research topics in all 141 CIT studies before narrowing the focus of the discussion to the 115 CIT studies that employ content analytic methods.

Study Contexts

A variety of contexts are reported across the 141 CIT studies; nearly all (n = 134 or 95%) can be considered service contexts (i.e., where the primary or core product offering is intangible). Examples of such services include hospitality (including hotels, restaurants, airlines, amusement parks), automotive repair, retailing, banking, cable television, public transportation, and education. In more than half of the studies (n = 78 or 55%) one context (or industry) is used. Nineteen studies (13%) report using between two and four contexts, and 44 studies (31%) report soliciting incidents from five or more contexts. Most of the CIT studies (n = 117 or 83%) are set in business-to-consumer contexts. Fifteen studies (11%) collect incidents in business-to-business contexts, whereas 9 studies (6%) focus on internal services. Eleven CIT studies (8%) are cross-national in nature, exploring a research issue in more than one country. Overall, an extensive variety of service contexts have been reported in the CIT studies, suggesting the method has wide-reaching applicability in studying a broad assortment of service research issues.

Research Topics

The 141 CIT studies have explored a range of issues. The most frequently researched issue is customer evaluations of service (n = 43 or 31%), including issues related to service quality, customer satisfaction, and service encounters. Service failure and recovery is the second most popular research topic (n = 28 or 20%), followed by service delivery (n = 16 or 11%). Thirteen studies (9%) focus on service employees, and 10 studies (7%) illustrate or demonstrate the use of the CIT method in service research. The other 31 studies (22%) encompass a variety of topics, including word-of-mouth communication, channel conflict, fairness, customer delight, salesperson knowledge, and critical service features, to name a few. (See Table 2 for a more complete list.)

CONTENT ANALYTIC CIT STUDIES

As indicated earlier, 115 of the 141 studies in the sample employ content analytic procedures in analyzing the CIT data. Thus, it seems to be particularly relevant to assess the procedures typically used when analyzing CIT data in this fashion. Kassarjian (1977), in a classic article on content analysis, called for such research to be especially concerned with sampling, objectivity, reliability, and systematization issues. Following the guidelines proposed by Kassarjian and employed by Kolbe and Burnett (1991) in their synthesis of content analysis research, the CIT studies were assessed and coded in each of these four areas. Thus, Kolbe and Burnett's (1991) operationalization of these issues is used, when appropriate, as an organizing framework for assessing the 115 content analytic CIT studies.

Sampling

Sampling addresses the issues of the data collection method, respondent selection, respondent characteristics, sample size, the number of usable incidents collected, and incident valence. Each of these issues is discussed in the following paragraphs.

Data collection method. A variety of methods have been used to collect data for the 115 CIT studies employing content analytic procedures. Using students as interviewers is the most frequently reported method (n = 33 or 29%); of those 33 studies, 30 studies report the number of students serving as data collectors (the average number of
72 JOURNAL OF SERVICE RESEARCH / August 2004

TABLE 2
Research Topics Investigated by Critical Incident Technique (CIT) Studies

Research Topic                                    Combination   Interpretive   Content Analysis   Row
                                                  Studies^a     Studies        Studies            Total
Customer Evaluations of Service
  Service quality                                 2             —              13
  Customer satisfaction                           2             1              10
  Service encounters                              —             —              3
  Service encounter satisfaction                  1             —              7
  Customer dissatisfaction                        —             —              2
  Customer attributions                           —             —              2
  Total                                           5             1              37                 43
Service failure and recovery
  Service (or product) failure                    1             —              6
  Service recovery                                3             1              9
  Service failure and recovery                    1             —              2
  Customer complaint behavior                     2             —              3
  Total                                           7             1              20                 28
Service delivery
  Service delivery                                —             —              6
  Service experience                              1             —              4
  Customer participation in service delivery      —             1              4
  Total                                           1             1              14                 16
Service employees
  Employee behavior                               2             —              2
  Customer/employee interactions                  —             —              2
  Internal services                               1             —              6
  Total                                           3             0              10                 13
Illustration/demonstration/assessment of CIT method in service research
  Total                                           0             0              10                 10
Other issues (entrepreneurial marketing, relationship dissolution, customer acquisition,
interpersonal influence in consumption, services internationalization, self-gifts,
word-of-mouth communication, channel conflict, customer welcomeness, assessment of
industry grading schemes, customer repurchase, customer-to-customer interactions,
fairness in service delivery, customer switching behavior, customer delight, salesperson
knowledge, relationship strength, critical service features, customer costs of service quality)
  Total^b                                         3             8              24                 31
Column total                                      19            11             115                141^b

a. The primary empirical focus in the combination studies is analysis of non-CIT data. That is, CIT data are collected to be used in combination with another research method. In these studies, no attempt is made by the researchers to describe the CIT data or data collection procedures nor to report any analysis of the respondents' stories.
b. Four CIT studies were classified as being both interpretive and content analysis studies, as both methods were employed in these studies. Thus, the total for these rows is adjusted in order to avoid double-counting these studies.

data collectors is 29), and nearly all of those studies (n = 29) report training the students. Among the remaining studies, 27 studies (23%) report that the authors served as interviewers and/or data collectors, 12 studies (10%) describe mailing a research instrument to respondents, and 14 studies (12%) report using a variety of other methods (e.g., collection of data via the Internet). Six studies (5%) analyze secondary data and thus did not collect data directly from respondents. The remaining 23 studies (20%) do not indicate how the critical incident data were collected.7

7. The findings presented in this research synthesis are limited to the details of the procedures and methods reported in the CIT studies. Authors may have initially provided additional information in earlier versions of their manuscripts that was later removed as a result of the review process.

Respondent selection. A total of 30 studies (26%) report some type of probability sample (e.g., simple random,

systematic, or proportional) in selecting respondents. Among the other selection methods employed, 26 studies (23%) report using a convenience sample, 19 of the studies that used student data collectors (17%) employed a snowball technique, 16 studies (14%) were administered to students, and 14 studies (12%) used purposive (judgmental) sampling. The respondent selection method is not delineated in 10 studies (9%). Thus, although the method of selecting respondents in the CIT studies varies, most of the studies do not report using a probability sample.

Respondent characteristics. The gender of respondents is reported in 63 studies (55%). When reported, the ratio of females to males in the samples is approximately equal; the average rate of females in these studies is 50%. Only 19 studies report more than 60% of the sample being from one gender. Respondent age is less frequently reported (n = 54 or 47%); across these studies, the average age is 34.5. The respondent's level of education is reported in 22 studies (19%), whereas ethnicity characteristics of the sample are reported in only 12 studies (10%). Thus, generally speaking, most CIT studies include minimal description of the respondents providing the critical incidents.

Sample size and number of usable incidents. The distribution of sample sizes—that is, the number of respondents—varies considerably across the 115 CIT studies, ranging from 9 to 3,852; the average number of respondents per study is 341. Nearly half of the studies (n = 56 or 49%) include more than 200 respondents. Fourteen studies do not report the number of respondents. The distribution of the number of usable critical incidents reported in the studies also varies considerably, ranging from 22 to 2,505; the average number of incidents per study is 443. A majority of the studies (n = 69 or 60%) report using at least 250 incidents. Interestingly, four studies do not indicate the number of incidents collected—even though the critical incident is the unit of analysis in each study.

Number of incidents requested and incident valence. About half of the studies (n = 58 or 50%) indicate each respondent was asked to provide a single incident; 34 studies (30%) had respondents provide two incidents, and 9 studies (8%) asked respondents to provide more than two incidents. Fourteen studies (12%) do not report the number of incidents requested from respondents. Across all of the studies, including both studies asking for a single incident or those requesting more than one incident, 83 studies (72%) collected a mix of both positive and negative critical incidents. In 21 studies (18%), respondents were asked to provide only negative incidents,8 and in a single study, respondents were asked to provide only positive incidents. The valence of the incidents collected is either neutral or not reported in 10 studies.

8. Negative incidents are typically those situations where the respondent's experience is dissatisfying, unpleasant, trying, difficult, embarrassing, troublesome, or irritating.

Objectivity

Kolbe and Burnett (1991) described objectivity as referring to the process by which analytic categories are developed and used by researchers and those interpreting the data. They suggest that "precise operational definitions and detailed rules and procedures for coding are needed to facilitate an accurate and reliable coding process. Detailed rules and procedures reduce judges' subjective biases and allow replication by others" (Kolbe and Burnett 1991, p. 245). Following the guidelines of Kolbe and Burnett, objectivity in the 115 content analytic CIT studies is assessed by investigating reports about the judges coding the incidents as well as reports of the rules and procedures used in incident classification procedures in the studies.

Number of judges. The number of judges used to categorize the CIT data is mentioned in 85 studies (74%). Generally speaking, a majority of the CIT studies (n = 73 or 63%) report two or three judges (sometimes referred to as coders) were used to analyze, and ultimately categorize, the critical incidents. The number of judges across all of the CIT studies ranges from 1 to 8, with one exception (one study employed 55 student judges); an average of 2.6 judges were used in the studies (not including the outlier). The number of judges is not reported in 30 studies (26%).

Judge training. Trained judges are important when content analytic methods are used; as they become familiar with the coding scheme and operational definitions, intrajudge and interjudge coding reliability would be expected to increase (Kolbe and Burnett 1991). Following the approach of Kolbe and Burnett, studies in which the authors served as judges (n = 40 or 35%) were classified as "no training" studies, although it is likely they did indeed receive some sort of instruction prior to coding the incidents. Given this criterion, judge training is explicitly reported in just nine studies (8%); however, the finding that only 8% of the studies appear to have trained their judges may simply reflect a failure to report these procedures.

Judge independence. Another salient issue when evaluating the judges used in content analytic investigations is the extent to which autonomous assessments of the data are made. In less than half of the 115 CIT studies (n = 51 or 44%) the authors indicate that those serving as judges (many of them coauthors) categorized incidents without prior knowledge of other judges' coding; 64 studies (56%) do not report if the judges categorizing the incidents did so independently. Again, the relatively low percentage of studies describing judge independence may simply reflect a failure in reporting this information.
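The judge-agreement checks behind these statistics, and the reliability indices such studies report (notably the coefficient of agreement and Perreault and Leigh's Ir, both discussed later in this synthesis), reduce to simple computations over the judges' coding decisions. The Python sketch below uses hypothetical incident codings; the Ir formula is written as it is commonly stated and should be verified against Perreault and Leigh (1989) before use:

```python
import math

def percent_agreement(judge_a, judge_b):
    """Coefficient of agreement: total agreements / total coding decisions."""
    agreements = sum(a == b for a, b in zip(judge_a, judge_b))
    return agreements / len(judge_a)

def perreault_leigh_ir(judge_a, judge_b, k):
    """Perreault and Leigh's (1989) index, which adjusts observed agreement
    for the number of coding categories k (formula as commonly stated;
    check the original article before relying on it)."""
    n = len(judge_a)
    f = sum(a == b for a, b in zip(judge_a, judge_b))
    if f / n <= 1 / k:
        return 0.0
    return math.sqrt((f / n - 1 / k) * (k / (k - 1)))

# Hypothetical codings of 10 incidents into 4 categories by two judges
a = ["recovery", "failure", "delight", "failure", "recovery",
     "other", "failure", "delight", "recovery", "failure"]
b = ["recovery", "failure", "delight", "recovery", "recovery",
     "other", "failure", "other", "recovery", "failure"]

print(round(percent_agreement(a, b), 3))      # 0.8
print(round(perreault_leigh_ir(a, b, 4), 3))  # 0.856
```

With 8 of 10 decisions in agreement, the percentage of agreement is .80, the informal lower bound noted in this synthesis, while Ir factors in the number of available categories.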

Rules and procedures. As with any research, in order to be subject to validation and replication by other researchers, CIT studies using content analytic procedures should provide thorough descriptions of the rules and procedures used to categorize the critical incidents. However, only 12 of the studies (10%) provide a detailed description of the operational definitions used to classify incidents into categories. Another 20 studies (18%) cite previous research as the source of the study's rules and procedures; the remaining 83 studies (72%) do not provide a detailed description of the rules and procedures employed. These results suggest service researchers using the CIT method generally do not report many details concerning the rules and procedures for categorizing incidents.

Classification scheme details. Of the 115 CIT studies that employ content analytic methods, 105 (91%) report developing or using some sort of classification scheme to analyze and categorize the incidents. In those 105 studies, an average of 5.4 major categories ("major" as labeled or implied by the authors) are identified and subsequently used to sort the data; the number of major categories ranges from 2 to 53. In 64 of the studies (56%), minor categories (or subcategories) are used; the average number of subcategories is just under 16 (ranging from 3 to 56).

Classification scheme pretesting. Definition checks and pretesting of categories should contribute to the reliability of the CIT coding process when employing content analytic methods (Kolbe and Burnett 1991). However, very few CIT studies report any pretesting of the classification scheme in judge training or elsewhere; in most of these studies, the pretesting of a classification scheme occurred in a previously published study. Only 16 studies (14%) indicate that a set of incidents were placed into a holdout sample and either (a) used to develop a classification scheme or (b) after the classification scheme was finalized were subsequently used to verify the scheme. For these studies, the average size of the holdout sample is 112. These results suggest that, generally speaking, the same data set is used to both develop and verify classification schemes.

Summary assessment of objectivity in CIT studies. Reports of the content analytic processes deployed in CIT studies are important because doing so provides details about issues affecting the overall quality of the CIT judgment and coding process. One concern raised from the findings is that despite the importance of such reporting, service researchers generally provide minimal, if any, descriptions of the rules and procedures for analyzing the CIT data. The absence of this information does not necessarily mean appropriate steps are omitted; however, there is reason for concern regarding the judging precision of those analyzing critical incidents as well as the ability of future researchers to adequately replicate and extend past studies (cf. Kolbe and Burnett 1991). Another concern is that in most of the 115 studies, the authors report using the same data set to develop and verify classification schemes. A more prudent approach would be to use one data set to develop a classification scheme and a second, independent set of critical incidents to validate and confirm the scheme (cf. Strauss 1993).

As indicated earlier, minimal changes have been suggested to the CIT method since Flanagan (1954) initially outlined his suggested procedures, and many of the CIT studies analyzed here appear to be generally following these procedures. However, service researchers employing content analytic methods with CIT data could clearly do more in terms of reporting their analysis procedures. Reporting procedures are discussed further in the Recommendations section.

Reliability

Reliability is concerned with consistency; it is a matter of whether a technique, applied repeatedly to the same object, would yield the same result each time. In CIT studies employing content analytic methods, assessments of reliability generally focus on judges' (or coders') abilities to consistently classify incidents into specified categories. Reliability in such studies could include discussions of both intrajudge and interjudge reliabilities. However, intrajudge reliability, which is concerned with how consistent a given judge is in making categorical decisions over time (Weber 1985), is reported in only five CIT studies (in those studies, the average intrajudge reliability is .884). Thus, the discussion here focuses on interjudge reliability—the degree to which two or more judges agree that a given observation should be classified (coded) in a particular way (cf. Perreault and Leigh 1989). Reliability is assessed by investigating the reliability indices used and the magnitude of the statistics reported in the studies.

Reliability index usage. Reliability indices attempt to determine the probability that different judges would achieve similar results when coding and classifying critical incidents. Overall, 71 of the CIT studies (62%) report some sort of interjudge reliability statistic to provide support for suggesting that different judges have arrived at the same result. Although a variety of interjudge reliability indices are used in evaluating the reliability of CIT incident assessment (see Table 3), clearly the most common reliability index used is the coefficient of agreement (the total number of agreements divided by the total number of coding decisions); 45 studies (39%) report this statistic. The second most commonly reported statistic is Perreault and Leigh's (1989) reliability index Ir (which takes into account the number of categories); this statistic is reported in

TABLE 3
Reliability Indices Reported in Critical Incident Technique (CIT) Studies

Reliability Index                                    Number of Studies   Average^a
Percentage of agreement                              45                  .850
Perreault and Leigh's (1989) Ir                      22                  .857
Cohen's (1960) Kappa                                 2                   .745
Ronan and Latham's (1974) Index                      2                   .880
Cronbach's alpha                                     2                   .920
Cramer's V                                           1                   .791
Holsti's (1969) coefficient of reliability           1                   .820
Spiegelman, Terwilliger, and Fearing's (1953)
  reliability statistic                              1                   .980
Absolute agreement^c                                 5                   1.000
Two reliability indices reported^b                   9                   .849^d
Not reported                                         44                  —

a. In several studies, reliabilities are reported for data subsets or for various major categories. For such studies, the lowest reliability reported was recorded.
b. One study reports three reliability indices.
c. The reliability statistic reported for the Absolute Agreement index is 1.000 as the requirement for judges is that they must agree on the categorization of all incidents when coding them.
d. This number represents the average value of all the reliability indices reported across all of the studies.

22 studies (19%). Other reliability statistics mentioned in the CIT studies include Ronan and Latham's (1974) reliability index (n = 2); Cohen's (1960) kappa (n = 2); Cronbach's alpha reliability test (n = 2); Cramer's V (n = 1); Holsti's (1969) coefficient of reliability (n = 1); and Spiegelman, Terwilliger, and Fearing's (1953) reliability statistic (n = 1). Eight studies report two reliability statistics, and 1 study presents three reliability indices. Five studies required that the judges reach agreement on categorizing an incident and are thus labeled absolute agreement. A surprisingly large number of studies (n = 44 or 38%) do not report any reliability index or statistic.

Reliability index values. As indicated in Table 3, the average (lowest) coefficient of agreement percentage9 across the 45 studies reporting it is .850, and the average Perreault and Leigh (1989) reliability index (Ir) across the 22 studies including it is .857. The averages of the less commonly used reliability indices listed in Table 3 are all above .740, and most are above .800. Throughout the studies reporting reliability statistics, the authors generally appear to believe a strong case can be made for good interjudge reliability within the study.

9. Reliability is reported in a variety of ways, such as for data subsets, for various major categories within a study, or between different sets of judges. As a result, the lowest reliability score reported in a study is the statistic used for this research synthesis.

Summary assessment of reliability in CIT studies. Reliability is a key component in content analytic methods. The most commonly reported statistic in these CIT studies is the percentage of agreement, and the average percentage of agreement in these studies is relatively high (.850), particularly considering that the lowest reported statistic is the one recorded for each study. However, one weakness of this statistic is that the number of coding decisions influences the reliability score (Perreault and Leigh 1989); as the number of categories decreases, the probability that judges would reach agreement by chance increases (Kolbe and Burnett 1991). As Kolbe and Burnett (1991) pointed out in their examination of content analysis research, "Reliability does not occur simply because the agreement coefficient exceeds .80" (p. 249). However, in service research CIT studies, the generally accepted, although informal, rule of thumb for a lower limit for suggesting that judges' coding decisions are reliable appears to be a value of .80.

As indicated earlier, an alarming 38% of the studies do not report any type of reliability statistic. Perhaps there are two explanations for such omissions: (a) The calculated reliability statistics were not high enough to convince the reader (reviewer) of the reliability of the results and thus the authors did not report them, or (b) the authors did not feel that calculating and reporting reliability statistics is essential in presenting the results of the study. Either way, it is difficult for the reader to assess whether the application of the method would yield the same result every time. Clearly service researchers conducting CIT studies using a content analytic approach need to do better in reporting reliability statistics.

Systematization

Systematization in content analysis research, as described by Kassarjian (1977) and Holsti (1969), means that inclusion and exclusion of content or categories is done according to consistently applied rules. Systematization can also refer to the extent to which the research procedures documented in the selected group of studies examine scientific problems (through hypothesis and theory testing and research designs) (Kassarjian 1977; Kolbe and Burnett 1991). In the present study, systematization is assessed by investigating the following issues: specificity of the phenomenon being investigated, the overall purpose of the study, and triangulation of the CIT method with other research methods.

Specificity. Bitner, Booms, and Tetreault (1990) defined an incident as "an observable human activity that is complete enough in itself to permit inferences and predictions to be made about the person performing the act" and a critical incident as "one that contributes to or detracts from the general aim of the activity in a significant way" (p. 73). In his original discussion of the CIT method,

Flanagan (1954) described a critical incident as "extreme behavior, either outstandingly effective or ineffective with respect to attaining the general aims of the activity" (p. 338). Thus, CIT researchers should be expected to identify precisely what a critical incident is in the given context. Indeed, in 31 studies (27%), the authors clearly specify what behaviors or events constitute a critical incident, and in another 11 studies (10%), the authors refer to a previous study for the definition of a critical incident. In most studies, however, the authors are not explicit in defining what constitutes a critical incident. In particular, 9 studies (8%) refer to a generic definition of a critical incident (such as Flanagan's) but do not specify how this would relate to the issue they are studying, 46 studies (40%) are ambiguous in explicitly describing what the authors consider to be a critical incident (although some studies imply what is considered to be a critical incident in discussions of how the data were collected), and 18 studies (16%) provide no description at all as to what constitutes a critical incident.

In addition to defining what a critical incident is for a given context, it is important to determine and report the criteria for whether an incident should be included in a study. In 29 studies (25%) authors explicitly describe the criteria they used for including (or excluding) an incident, and in another 13 studies (11%) authors refer to criteria presented in an earlier study. However, a majority of studies (n = 73 or 63%) do not provide any discussion of such criteria, suggesting that either (a) all incidents that were collected are included in the study or (b) the authors do not feel it is important to describe what is required for an incident to be considered appropriate for inclusion in a CIT study.

Study purpose. The primary purpose of the CIT studies that employed content analysis techniques varies considerably. One hundred and five studies were coded as being driven primarily by research questions or hypotheses; in particular, 20 studies (18%) present formal hypotheses, whereas 85 studies (74%) provide research questions as the basis for empirical investigation.10 Ten studies were written primarily to illustrate the use or applicability of the CIT method and were coded as having neither hypotheses nor explicit research questions. Among the 105 studies, 42 (37% overall) focus primarily on developing or testing a classification scheme; of these, 29 studies (25%) have the intent of developing a classification scheme for better understanding of the phenomenon being investigated, whereas the other 13 (11%) are conducted primarily to test a previously existing classification scheme. In the remaining 63 studies, hypothesis testing is the primary purpose of 20 studies (17%), whereas theory development is the primary purpose of 6 studies and testing of a conceptual model is the primary purpose of 2 studies; in the remaining 35 studies (30%), the authors indicate that the primary purpose of employing the CIT method is simply to answer the research questions proposed.

10. Hypotheses are formal statements of predicted relationships between two variables (Kerlinger 1986). CIT studies that include such statements were coded as proposing hypotheses; studies that propose (or imply) research questions or make general predictions without the specificity of hypotheses were classified as proposing research questions.

Methodological triangulation. Methodological triangulation refers to the use of different research methods to investigate a phenomenon (Denzin 1978). Such triangulation was observed in about one third of the 115 content analytic CIT studies, as 35 studies (30%) employ a second research method (generally a quantitative method) with a second set of data to complement the use of the CIT method in understanding the phenomenon of interest. This finding suggests that many researchers employing content analytic methods on CIT data do not rely solely on a single method in an attempt to understand the phenomenon of interest. Thus, the CIT has been used as a companion research method in several studies (cf. Kolbe and Burnett 1991).

Summary assessment of systematization in CIT studies. Service researchers using the CIT method generally do not identify precisely what a critical incident is in the given context, nor do they provide much detail regarding the criteria for whether an incident should be included in a study. Thus, the aspect of systematization that is concerned with ensuring that inclusion and exclusion of content or categories is done according to consistently applied rules is generally weak in CIT studies in service research. That is, researchers have not been prudent in reporting how they have defined a critical incident or the criteria for including an incident in a study. This is particularly disappointing, given that in most CIT studies the unit of analysis is the critical incident itself. Indeed, in most of these studies, the authors have not clearly specified the unit of analysis, making it difficult for the reader to assess the extent to which a systematic approach has been taken in the research project. Furthermore, as reported earlier, 72% of the CIT studies do not provide a detailed description of the rules and procedures employed in categorizing the critical incidents. Thus, most CIT studies employing a content analytic approach do not "conform to the general canons of category construction" for content analysis studies (Holsti 1969, p. 4).

As suggested earlier, systematization is also concerned with the extent to which the research procedures examine scientific problems (through hypothesis testing and research designs). The large number of CIT studies listing research questions as the basis for empirical investigation is not surprising given the inductive, exploratory nature of

the method. Indeed, the contribution of many of these studies appears to be in their ability to (a) describe relevant phenomena for further research, particularly when no theoretical underpinnings exist, and (b) suggest hypotheses for future investigation; perhaps they might best be labeled theory-building or hypothesis-generating studies (Kolbe and Burnett 1991). Overall, the CIT method appears to have been used primarily for theory development in service research.

DISCUSSION

Research Synthesis Summary

Acceptance of CIT method in service research. Clearly the CIT method has been accepted as an appropriate method for use in service research, as evidenced by the large number of CIT studies published during the past three decades. The method itself appears to be a credible approach for service researchers to use; indeed, virtually none of the 168 studies in the original set have identified any substantial problems with the method itself. These CIT studies have been undertaken in numerous contexts to investigate a wide range of services marketing and management issues. Many of these studies have included extensive discussions that explain the technique and justify its usage—not surprising given the relative newness of the method in service research. However, as future service researchers craft their manuscripts (and reviewers review them), it is time to transition from explaining what the CIT method is and defending its usage to providing more detailed discussions of the operational procedures (e.g., data collection, data analysis) being used in the studies. The CIT method has clearly been accepted as legitimate, so discussions in methodology sections should focus more on operational procedures and less on justifying it as an appropriate method of inquiry.

Research contexts and topics. The findings from the 141 studies included in this research synthesis suggest the CIT method has been useful in exploring a wide range of service research issues. However, despite this wide-reaching applicability in studying an assortment of service issues, the CIT method has been used primarily in business-to-consumer contexts. The topics receiving most of the attention in the CIT studies include service quality, satisfaction, and service failure and recovery. Given the apparent soundness of the method, CIT appears to be a particularly relevant and appropriate method for conducting service research and should be considered in studying a broader range of issues (e.g., service loyalty, customer perceived value, or service convenience) and for use in other disciplines beyond services marketing.

Content analytic CIT studies. In this research synthesis, the 115 CIT studies using content analytic approaches were assessed on issues of sampling, objectivity, reliability, and systematization, following the guidelines of Kolbe and Burnett (1991). In terms of sampling, the review of these studies suggests that critical incident data have been collected in a variety of ways, often employing students as data collectors, and generally include a relatively large number of incidents from a relatively large number of respondents. However, most of the studies fail to perform, or at least to report, the objectivity checks operationalized by Kolbe and Burnett (1991). For example, about half of the CIT studies provide minimal information about details of the process used to analyze the critical incidents and the rules and procedures they developed for categorizing incidents, making it difficult for other researchers to replicate and validate earlier findings. Another area of concern is that most authors report using the same data set both to develop and to verify classification schemes. Reliability statistics are provided in a little over half of the studies, with percentage of agreement and Perreault and Leigh's (1989) Ir being the two most commonly reported statistics; however, an alarming 38% of the studies do not report any type of reliability statistic, making it difficult to assess whether the application of the CIT method to the data collected would yield the same result every time. Finally, the aspect of systematization concerned with ensuring that inclusion and exclusion of content or categories is done according to consistently applied rules is generally weak in CIT studies, as reports of how service researchers define a critical incident, or of the criteria for including an incident in a study, are few.

Although the CIT method itself appears sound, there should perhaps be some concern about how it has been used by service researchers. In particular, scholars should be concerned about the reproducibility of findings from CIT studies because many of them do not include sufficient descriptions of their methodological procedures. Clearly CIT studies conducted in service contexts need to be more thorough in reporting procedures, especially in terms of providing details about the unit of analysis (i.e., what is a critical incident in the given context?), the criteria for including critical incidents in a data set, issues affecting the overall quality of the CIT judgment and coding process, and reliability assessment and statistics.

Past Criticisms of Use of CIT Method in Service Research

Some scholars have noted additional concerns about how the method has been applied (or misapplied) in service research, such as issues related to sampling, the type of critical incidents typically collected, and the exploratory nature of CIT studies. These concerns are addressed in the following paragraphs.

78 JOURNAL OF SERVICE RESEARCH / August 2004

Sampling issues. When used in service research, CIT samples have been criticized for being too small and too heavily based on student populations (Bell et al. 1999). However, the findings reported earlier suggest a relatively large number of respondents are generally included in CIT studies, resulting, on average, in a relatively large number of incidents per study. In addition, although students served as interviewers in about 29% of the CIT studies using content analytic methods, only 14% of the studies were administered to students. Thus, the findings here suggest that criticisms that CIT studies in service research have small samples and are often based on student populations are not warranted.

Types of critical incidents. Many CIT studies specifically instruct respondents to think of situations that are in some fashion "critical" or are exceptional customer encounters (Stauss and Weinlich 1997). That is, only the most critical, most memorable events are sought when using the CIT method; "usual" or "ordinary" incidents are generally not reported (Stauss 1993), and service researchers typically use the CIT method to study only the "extremes" (Johnston 1995). This criticism appears valid, as those studies providing descriptions of the critical incidents collected generally indicate that only exceptional events are requested from respondents. Indeed, Flanagan's (1954) original discussion of the CIT method called for investigation of extreme (i.e., "critical") events. However, the collection of such events can actually be an asset for a study, depending on the research questions being considered. For example, in investigations of customer outrage and delight (e.g., Verma 2003), surprise (Derbaix and Vanhamme 2003), service failure and service recovery (e.g., Hoffman, Kelley, and Rotalsky 1995; Kelley, Hoffman, and Davis 1993; Lewis and Spyrakopoulos 2001), and customer switching behavior (Keaveney 1995), CIT appears to be a particularly useful method for examining such "extreme" events.

Similarly, service researchers using the CIT method have also been criticized for collecting "top-of-the-mind memories of service interactions that are socially acceptable to report" (Edvardsson and Strandvik 2000, p. 83). With the exception, perhaps, of the interpretive CIT studies, this concern may be valid. That is, respondents are often not asked to elaborate on how negative or positive an incident has been or on how much it has influenced a relationship. Also, multiple instances of a certain critical incident for a particular individual, or reports of multiple incidents occurring in the same context, are generally not collected (Edvardsson and Strandvik 2000). The findings presented here are consistent with Edvardsson and Strandvik's concerns; service researchers should consider these issues when designing future studies employing the CIT method.

Exploratory approach. Another criticism of service research using a CIT approach relates to the nature of studies in which the method has been used. As indicated earlier, CIT studies are generally of an exploratory nature (Bell et al. 1999) and are often employed as an exploratory method to increase knowledge about a little-known phenomenon (Bitner, Booms, and Tetreault 1990). Although the findings here concur that CIT studies in service contexts are frequently used in an exploratory mode, a major contribution of many of these studies is to provide the groundwork for theory development. The two studies described earlier (Bitner, Booms, and Tetreault 1990; Keaveney 1995) provide examples of such research. A large number of studies in the sample (nearly one third) implicitly address this concern by using both the CIT method and another research method within the same study in an attempt to better understand the phenomenon of interest.

RECOMMENDATIONS

Contextual Recommendations

Additional contexts. Most of the CIT studies in marketing have taken place in service contexts. More than a decade ago, Walker and Truly (1992) suggested the CIT method should be used beyond just services in such contexts as sales management, marketing management, channels, negotiation and bargaining, and consumer behavior. However, with the exception of consumer behavior, CIT does not appear to have been readily applied (or accepted) to date in these contexts. Similarly, the use of the CIT method to investigate issues in business-to-business contexts, cross-national contexts, and internal services contexts has been minimal. Given the contributions made by many of the studies using the CIT method, researchers might consider using the method in the future to study a variety of issues in such contexts.

Dyadic studies. Many CIT studies focus on issues concerned with the interaction between customers and employees (e.g., customer evaluations of service, service failure and recovery, service delivery, service encounters). However, the CIT data collected in these studies almost always capture a single, rather than dyadic, perspective. Indeed, even those few CIT studies that include both customer and employee perspectives capture distinct events, rather than different perspectives of the same incident. Much insight might be gained from looking at critical incidents from a dyadic perspective. For example, Price and Arnould's (1999) study on commercial friendships included data from both customer and service provider perspectives, allowing them to gain a more thorough understanding of how such friendships form. Perhaps using the CIT method to capture both the customer's and the employee's view of the same incident would provide additional insights on other service interaction issues (cf. Edvardsson 1992).

Physical evidence. Most of the 141 CIT studies in the sample deal with interpersonal interactions or the service delivery process and thus address two of Booms and Bitner's (1981) three additional Ps for services marketing: people and process. Issues relating to physical evidence, Booms and Bitner's third P, have received minimal attention from those using the CIT method (cf. Edvardsson and Strandvik 2000). However, the environment where the service is delivered (i.e., the servicescape), one aspect of physical evidence, can also influence the service customer's experience. For example, in a recent study using the CIT method, Hoffman, Kelley, and Chung (2003) suggested that a significant percentage of service failures are related specifically to the servicescape. Meuter et al.'s (2000) study of self-service technology uses the CIT method to understand how a service provider's equipment can have an impact on a customer's experience in the absence of service personnel. As these two studies illustrate, the CIT method can be valuable in examining the impact that the servicescape, as well as other types of physical evidence, has on a customer's service experiences and should be considered in future studies.

"Critical" critical incidents in customer-firm relationships. Edvardsson and Strandvik (2000) have raised an interesting question: Is a critical incident critical for a customer-firm relationship? Generally, CIT studies assume that the incidents reported are considered critical to the respondents; however, the magnitude or seriousness of an incident is often not assessed—at least not in terms of how the respondent perceives it (Edvardsson and Strandvik 2000). The reported incidents may indeed stand out as being particularly memorable to the respondents, but whether or not an incident is critical to their relationship with a firm is contextually dependent, hinging on such factors as the customer, the service provider, the history of interactions with the firm, and the overall health of the relationship (cf. Edvardsson and Strandvik 2000). Indeed, Edvardsson and Strandvik have contended that the criticality of critical incidents may differ over time and between customers. Thus, future CIT research might try to determine which events are truly critical to the long-term health of the customer-firm relationship.

Application Recommendations

Interpretive approaches. As indicated earlier, an overwhelming majority of CIT studies in service research employ content analytic methods when analyzing CIT data; only 11 of the 141 studies in the sample employ an interpretive approach in analyzing the CIT data. As a result, critical incidents are typically analyzed with minimal contextualization and very little interpretation or explanation from the respondent. Service scholars tend to treat the respondent's story as a "report," and the emphasis is on analysis of the "facts" presented; an examination of the respondent's account of why the events took place or why the events are worth reporting is generally excluded (Hopkinson and Hogarth-Scott 2001). Thus, even though the critical incidents are described from the respondent's perspective (a documented strength of the method), most CIT research attempts to explain events through the researcher's analysis.

Service researchers employing the CIT method in future studies should consider taking a more ethnographic or narrative approach in analyzing the data to gain insight from interpreting respondents' experiences. To illustrate, the focus in most CIT studies is generally on customer cognition; emotions related to an incident are rarely recorded (Edvardsson and Strandvik 2000; van Dolen et al. 2001). Employing an interpretive approach may help researchers better understand emotions in the context of the critical incidents. An interpretive approach might also be used in analyzing an incident within a series of incidents rather than in isolation (cf. Edvardsson and Strandvik 2000). Two studies that incorporate an interpretive approach (in addition to the standard content analysis approach) are those by Mick and DeMoss (1990) and Ruth, Otnes, and Brunel (1999). Chell (1998) provided guidelines for researchers who desire to take a more interpretive approach in analyzing CIT data.

Variations of the CIT method. CIT studies generally focus on single events or short-term interactions (Edvardsson and Roos 2001); incidents are analyzed in isolation, and the customer-firm relationship is seldom considered. Multiple instances of a certain type of critical incident are generally not captured, nor are occurrences of multiple different incidents by the same respondent in the same context (Edvardsson and Strandvik 2000). Other critical incident–based methodologies have been suggested recently to address these shortcomings, such as the Sequential Incident Technique (SIT) (Stauss and Weinlich 1997), the Critical Incident in a Relational Context (CIRC) method (Edvardsson and Strandvik 2000), the Criticality Critical Incident Technique (CCIT) (Edvardsson and Roos 2001), or the Switching Path Analysis Technique

(SPAT) (Roos 2002). Such variations of the CIT method may be more appropriate in assessing relationship issues by looking at several critical incidents, and thus various interactions, during an extended period of time.11 Other variations of CIT have also been suggested for studying service phenomena. For instance, LaForge, Grove, and Stone (2002) have introduced the Sales Introspection Technique, a method resembling the CIT method, as an approach for training a sales force, and Harris, Harris, and Baron (2003) suggested using the CIT method in the development of a dramatic script for service contexts. Other creative uses and adaptations of the CIT method should be encouraged in future service research.

Procedural Recommendations

The CIT research process. As a result of conducting the research synthesis, 6 studies using a content analytic approach were identified that can be considered "model" CIT studies in terms of how the method is employed and reported: Bitner, Booms, and Mohr (1994); Bitner, Booms, and Tetreault (1990); Edvardsson (1992); Keaveney (1995); Meuter et al. (2000); and Stauss and Weinlich (1997). Such exemplars should be used as a guide for service researchers conducting content analytic CIT research and reporting the methods and results. Although it is beyond the scope of this article to provide a complete description of the CIT research process, a list of five phases that should be considered when employing the CIT method is included in Table 4.12 The five phases, based in large part on Flanagan's (1954) original description of the method, include problem definition, study design, data collection, data interpretation, and report of the results. The more thorough studies among the 141 included in the sample—particularly the 6 studies listed above—pay close attention to these five phases in Table 4. Topics provided in the checklist, which includes key issues to consider when designing and executing a CIT study, are discussed in the following paragraphs, and examples that illustrate some of the issues in each phase are provided.

When planning a CIT study, problem definition—the first phase listed in Table 4—should be carefully considered before deciding to employ the CIT method. The Bitner, Booms, and Tetreault (1990) study illustrates how authors should carefully consider issues related to problem definition; in their article, they explicitly state their research questions and suggest why CIT is an appropriate method for examining the phenomenon of interest—in their case, service encounters. Unfortunately, some of the studies included in the research synthesis sample appear to have used the CIT method without clearly thinking about whether it is the most appropriate approach to use in addressing the given research questions. Successful use of the CIT method begins with determining the general aim of the study.

In CIT research, the study design—the second phase listed in Table 4—needs to be thoughtfully planned. Edvardsson (1992) and Bitner, Booms, and Tetreault (1990) clearly delineated in their research what they consider to constitute a critical incident by providing precise definitions. Similarly, Keaveney (1995) very precisely identified the unit of analysis in her research. In Keaveney's study, the unit of analysis is not the critical incident itself; rather, discrete critical behaviors contained within an incident are the units of analysis. Careful consideration should also be given to the data collection instrument; Meuter et al. (2000) and Stauss and Weinlich (1997) are two studies that provide detailed descriptions of the questions included in the research instruments used to collect the critical incidents. Another issue to consider when designing a CIT study is determination of the appropriate sample of respondents to study, given the research questions of interest. Both the Bitner, Booms, and Mohr (1994) and Edvardsson (1992) studies provide logical arguments as to why the chosen sample is relevant to the phenomenon being investigated. In summary, prior to starting data collection, CIT researchers should determine how the critical incidents will be identified and then used to contribute to the general aim of the study.

In terms of data collection—the third phase listed in Table 4—researchers need to consider how the critical incidents are to be collected. For example, as reported earlier, data are often collected through trained interviewers—in many cases students. Studies that report carefully training student data collectors include Baker, Kaufman-Scarborough, and Holland (2002); Bitner, Booms, and Mohr (1994); and Edvardsson (1992). Alternatively, critical incident data can be collected through research instruments given directly to respondents (cf. Odekerken-Schröder et al. 2000; Stauss and Weinlich 1997) or solicited through the Internet (cf. Meuter et al. 2000; Warden et al. 2003). Whatever the data collection mechanism, the key challenge in collecting CIT data is to get respondents to provide sufficient detail about the phenomenon of interest. Another data collection issue is data purification; that is, determining (and then applying) criteria for inclusion of a critical incident in the final data set. To ensure data quality, CIT researchers need to consider what constitutes an appropriate critical incident and identify relevant criteria for excluding inappropriate incidents. Two studies that clearly specify the criteria for incidents to be included in

11. See Edvardsson and Roos (2001) and Roos (2002) for detailed discussions of variants of the CIT method.
12. For those interested in an extensive discussion of the application of the CIT method, see Flanagan (1954) for the initial description of the method and Chell (1998) and Stauss (1993) for more recent discussions.

TABLE 4
Research Process and Reporting Checklist for Critical Incident Technique (CIT) Content Analytic Studies
Phase 1: Problem definition
Determine what the research question is
Determine if CIT is an appropriate method for understanding this phenomenon
Phase 2: Study design
Determine how a critical incident will be defined
Determine the criteria for deciding what is not a critical incident
Determine the unit of analysis
Develop data collection instrument (clear instructions, appropriate story-triggering questions)
Determine appropriate sample (appropriate context(s), appropriate respondents)
Phase 3: Data collection
Train data collectors (if applicable)
Data collectors collect data
Identify usable critical incidents
Identify/develop criteria for incident inclusion (or exclusion)
Phase 4: Data analysis and interpretation
Content analysis of critical incidents
Read, reread incidents
Identify recurring themes
Develop classification scheme
Create descriptions of categories (incidents, behaviors, or other units of analysis)
Sort incidents using classification scheme
Assess intracoder reliability
Have additional judges/coders sort incidents
Assess intercoder reliability
Test classification scheme on a holdout (validation) sample
Phase 5: Results report
(1) Study focus/research question
Explicit identification of focus of study
Description of the research question
Precise definition of what a critical incident is in the given context
Discussion of why CIT is an appropriate method for understanding this phenomenon
(2) Data collection procedures
Data collection method
Description of data collectors (training, background, number of collectors)
Data instrument (instrument instructions, interview questions)
(3) Respondent (sample) characteristics
Description of sample characteristics
Sample size (number of respondents)
Response rate
Compelling rationale for the selection of respondents
Respondent characteristics (gender, age, ethnicity, education, income, other relevant information)
Description of multiple samples (if applicable)
Discussion of number of incidents requested from each respondent
(4) Data characteristics
Type of incidents requested from respondents
Incident valence
Description of context(s) and/or number of contexts
Number of incidents collected
(5) Data quality
Report on number of (usable) incidents
Discuss criteria for incident inclusion (or exclusion)
(6) Data analysis procedures/classification of incidents
Operational definitions of coding
Identification of the unit of analysis
Category development discussion
Classification scheme description (major categories, subcategories)
Discussion of judges/coders (training, independence, number of judges used)

Reliability (intrajudge reliability statistics, interjudge reliability statistics)


Content validity of classification system
Discussion of results of applying classification system to holdout (confirmation) sample
(7) Results
Classification scheme—description and discussion of major categories
Classification scheme—description and discussion of subcategories (if applicable)
Connection to existing literature/theory
Suggestions for future research
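
The intercoder reliability step in Phase 4 of the checklist can be made concrete. Below is a minimal Python sketch of the two statistics the synthesis found most commonly reported, percentage of agreement and Perreault and Leigh's (1989) Ir; the function names and sample codings are illustrative only, not drawn from any of the studies reviewed.

```python
from math import sqrt

def percent_agreement(codes_a, codes_b):
    """Proportion of incidents two judges assigned to the same category."""
    if not codes_a or len(codes_a) != len(codes_b):
        raise ValueError("need two equal-length, non-empty coding lists")
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def perreault_leigh_ir(codes_a, codes_b, n_categories):
    """Perreault and Leigh's (1989) reliability index:
    Ir = sqrt((F/N - 1/k) * k / (k - 1)) for F/N >= 1/k, else 0,
    where F/N is the observed agreement rate and k is the number of
    categories in the classification scheme."""
    agreement = percent_agreement(codes_a, codes_b)
    k = n_categories
    if agreement < 1 / k:
        return 0.0
    return sqrt((agreement - 1 / k) * k / (k - 1))

# Two judges sorting ten incidents into a four-category scheme:
judge1 = ["recovery", "failure", "delight", "failure", "recovery",
          "other", "delight", "failure", "recovery", "failure"]
judge2 = ["recovery", "failure", "delight", "recovery", "recovery",
          "other", "delight", "failure", "failure", "failure"]
print(percent_agreement(judge1, judge2))      # 0.8
print(perreault_leigh_ir(judge1, judge2, 4))  # ~0.856
```

Unlike raw percentage of agreement, Ir adjusts for the number of categories among which the judges could choose, which is the property the article highlights in recommending it.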

the study are Bitner, Booms, and Tetreault (1990) and Keaveney (1995).

As an example of the fourth phase of the process, data analysis and interpretation, the Bitner, Booms, and Mohr (1994) study provides an elaborate description of how critical incidents were analyzed, includes the instructions and coding rules given to coders of CIT incidents, and presents a detailed description of category definitions. Edvardsson (1992) also provided a thorough description of his analysis of critical incidents. Reliability assessment is another critical element to consider in this phase and should be included in every CIT study using a content analysis approach. Perreault and Leigh's (1989) Ir statistic appears to be the best index to use, as it takes into account the number of coding decisions made and is fairly straightforward to calculate. Keaveney's (1995) study includes assessments of both intercoder and intracoder reliability, and the Bitner, Booms, and Mohr (1994) study presents several different intercoder reliability assessments. As indicated earlier, careful adherence to rigorously defined rules and procedures provides the opportunity for other researchers to verify findings from CIT studies.

The results of the research synthesis indicate that nearly all of the content analytic CIT studies report using the same data set to both develop and verify classification schemes. One way to empirically test (or pretest) a classification scheme is to employ a holdout sample. Such a practice entails setting aside a portion of the incidents and using only the first set of incidents to develop the categories. Stauss (1993) recommended dividing the total set of incidents into two halves, using one half to create categories and the other half to determine if the incidents can be classified within that category scheme. Three CIT studies employing a holdout (or validation) sample in order to empirically assess a classification scheme developed on an earlier data set include Keaveney (1995); Mangold, Miller, and Brockway (1999); and Michel (2001). Although Stauss's suggestion of using a holdout sample when testing newly developed classification schemes has not been followed by most service researchers using the CIT method, it could be done relatively easily—especially given the large number of critical incidents that are generally collected.

Reporting methods and results of content analytic CIT studies. The success of a research project is judged by its products. Except where results are only presented orally, the study design and methods, findings, theoretical formulations, and conclusions of most research projects are judged through publication. Generally speaking, service researchers have not been very prudent in the final phase of the CIT process—describing their application of the CIT method in their publications. For example, more than 38% of the CIT studies in the sample do not bother to report any type of reliability assessment, and nearly 63% of the studies provide little (if any) description of what constitutes a critical incident—the key unit of analysis in most of these studies. Service researchers employing CIT need to be more diligent in describing their methods, and reviewers of CIT manuscripts need to be more demanding in requiring such details.

Perhaps one reason for the insufficient descriptions of the application of the CIT method in many studies is uncertainty about what should be reported. During the past 20 years, structural equation modeling (SEM) has become a very popular research method in service research. Consequently, a general (albeit informal) standard has developed across the hundreds (thousands?) of SEM studies in terms of what should be presented when describing the procedures employed in applying this method, including discussions related to such topics as respondent characteristics, measurement model statistics, and structural model statistics. Many service researchers employing the CIT method may be somewhat unsure about what information should be reported, as there is no clear consensus as to what is appropriate to mention. Researchers employing the CIT method would be well served by revisiting Flanagan's (1954) original article and studying it carefully.

The six exemplar studies listed earlier have at least two things in common: They all employ the CIT method well, and they all report their methods and results well. The outline provided as part of Phase 5 (Results Report) in Table 4 attempts to capture many of the issues these studies report; in so doing, it provides (a) a template suggesting to CIT researchers what issues to report and (b) a guide for readers and reviewers in assessing the methods and contributions of a CIT study. In particular, the following "generic" topics are offered as a suggestion in an attempt to create a standard of what service researchers should report in CIT studies:

• Study Focus/Research Question
• Data Collection Procedures
• Respondent (Sample) Characteristics
• Data Characteristics
• Data Quality
• Data Analysis Procedures/Classification of Incidents
• Results

Two issues in this list not addressed in the previous discussion are respondent characteristics and data characteristics. Because the CIT method is highly dependent on the respondent for generation of incidents or stories, it can be insightful to understand who the respondents are; thus, a detailed description of respondents should be included. Similarly, a thorough description of the CIT data, such as the type of incidents requested from respondents and incident valence, should also be reported. Although no one CIT study published to date addresses all of the issues listed here, Keaveney's (1995) study includes a detailed description of many of these issues, such as specific details on the unit of analysis, category development, and reliability statistics.

In summary, Table 4 presents a checklist of suggestions for researchers to consider when designing CIT studies and crafting methodology and results discussions. The issues included in the table and described above should serve as a guideline as to what reviewers and editors should expect/demand from authors employing the CIT method.

CONCLUSION

The intent of this research synthesis is not to criticize past work in service research using the CIT method but rather to describe the state of practice in the use of the method and to provide some suggestions for future use of the method. It is hoped that this research synthesis will motivate service researchers employing the CIT method in future studies to carefully examine their methodological decisions and to provide sufficient detail in discussing their use of this method.
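
Two of the procedural suggestions discussed above, screening incidents against explicit inclusion criteria and Stauss's (1993) recommendation to split the incident pool in half before building a classification scheme, can be sketched in a few lines of Python. The inclusion criteria and function names here are hypothetical illustrations, not taken from any study in the synthesis.

```python
import random

def purify(incidents, min_words=25):
    """Keep only incidents meeting explicit inclusion criteria.
    The criteria below (first-hand account, a single discrete event,
    a minimum level of detail) are illustrative placeholders."""
    return [inc for inc in incidents
            if inc["first_hand"]
            and inc["discrete_event"]
            and len(inc["text"].split()) >= min_words]

def holdout_split(incidents, seed=42):
    """Stauss-style half split: develop the classification scheme on
    the first half, then test whether the incidents in the second half
    can be classified within that scheme."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(incidents)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (development, validation)

development, validation = holdout_split(list(range(100)))
print(len(development), len(validation))  # 50 50
```

Judges would build categories from the development half only; the proportion of validation-half incidents that cannot be sorted into the resulting scheme then provides a simple check on the scheme's completeness.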

APPENDIX
CIT Studies Included in the Research Synthesis

Åkerlund, Helena (2002), “Negative Critical Incident Mapping—Suitable as a Tool for Understanding Fading Customer Relationship Processes?” Paper
Presented at the 2nd Nordic Workshop on Relationship Dissolution, Visby, Sweden.
Archer, N. P. and G. O. Wesolowsky (1996), “Consumer Response to Service and Product Quality: A Study of Motor Vehicle Owners,” Journal of Opera-
tions Management, 14 (June), 103-18.
Auh, Seigyoung, Linda Court Salisbury, and Michael D. Johnson (2003), “Order Effects in Satisfaction Modelling,” Journal of Marketing Management, 19
(April), 379-400.
Backhaus, Klaus and Matthias Bauer (2000), “The Impact of Critical Incidents on Customer Satisfaction in Business-to-Business Relationships,” Journal
of Business-to-Business Marketing, 8 (1), 25-54.
Baker, Stacey Menzel, Jonna Holland, and Carol Kaufman-Scarborough (2002), "Perceptions of 'Welcome' in Retail Settings: A Critical Incident Study of the Experiences of Consumers with Disabilities," working paper, Bowling Green State University, Bowling Green, OH.
Baker, Stacey Menzel, Carol Kaufman-Scarborough, and Jonna Holland (2002), "Should I Stay or Should I Go? Marginalized Consumers' Perceptions of 'Welcome' in Retail Environments," in Proceedings of the American Marketing Association Marketing and Public Policy Conference: New Directions for Public Policy, Les Carlson and Russ Lacziack, eds. Chicago: American Marketing Association, 79-81.
Barnes, John W., John Hadjimarcou, and Richard S. Jacobs (1999), "Assessing the Role of the Customer in Dyadic Service Encounters," Journal of Customer Service in Marketing and Management, 5 (2), 1-22.
Barnes, John W., Richard S. Jacobs, and John Hadjimarcou (1996), "Customer Satisfaction with Dyadic Service Encounters: The Customer's Contribution," in AMA Summer Educators' Proceedings: Enhancing Knowledge Development in Marketing, Cornelia Droge and Roger Calantone, eds. Chicago: American Marketing Association, 549-54.
Bejou, David, Bo Edvardsson, and James P. Rakowski (1996), “A Critical Incident Approach to Examining the Effects of Service Failures on Customer Re-
lationships: The Case of Swedish and U.S. Airlines,” Journal of Travel Research, 35 (1), 35-40.
Bejou, David and Adrian Palmer (1998), "Service Failure and Loyalty: An Exploratory Empirical Study of Airline Customers," Journal of Services Marketing, 12 (1), 7-22.
Bell, James, David Gilbert, and Andrew Lockwood (1997), “Service Quality in Food Retailing Operations: A Critical Incident Analysis,” International Re-
view of Retail, Distribution, and Consumer Research, 7 (October), 405-23.
, , , and Chris Dutton (1999), “‘Getting It Wrong’in Food Retailing: The Shopping Process Explored,” in 10th International Confer-
ence on Research in the Distributive Trades, A. Broadbridge, ed. Stirling, Scotland: University of Stirling.
Bitner, Mary Jo, Bernard H. Booms, and Lois A. Mohr (1994), “Critical Service Encounters: The Employee’s View,” Journal of Marketing, 58 (October),
95-106.
84 JOURNAL OF SERVICE RESEARCH / August 2004

Bitner, Mary Jo, Bernard H. Booms, and Mary Stanfield Tetreault (1989), "Critical Incidents in Service Encounters," in Designing a Winning Strategy, Mary Jo Bitner and Lawrence A. Crosby, eds. Chicago: American Marketing Association, 98-99.
Bitner, Mary Jo, Bernard H. Booms, and Mary Stanfield Tetreault (1990), "The Service Encounter: Diagnosing Favorable and Unfavorable Incidents," Journal of Marketing, 54 (January), 71-84.
Bitner, Mary Jo, Jody D. Nyquist, and Bernard H. Booms (1985), "The Critical Incident as a Technique for Analyzing the Service Encounter," in Services Marketing in a Changing Environment, Thomas M. Bloch, George D. Upah, and Valarie A. Zeithaml, eds. Chicago: American Marketing Association, 48-51.
Botschen, Gunther, Ludwig Bstieler, and Arch Woodside (1996), “Sequence-Oriented Problem Identification within Service Encounters,” Journal of
Euromarketing, 5 (2), 19-53.
Burns, Alvin C., Laura A. Williams, and James Trey Maxham (2000), “Narrative Text Biases Attending the Critical Incidents Technique,” Qualitative
Market Research: An International Journal, 3 (4), 178-86.
Callan, Roger J. (1998), “The Critical Incident Technique in Hospitality Research: An Illustration from the UK Lodge Sector,” Tourism Management, 19
(February), 93-98.
Callan, Roger J. and Clare Lefebve (1997), "Classification and Grading of UK Lodges: Do They Equate to Managers' and Customers' Perceptions?" Tourism Management, 18 (7), 417-24.
Chell, Elizabeth and Luke Pittaway (1998), “A Study of Entrepreneurship in the Restaurant and Café Industry: Exploratory Work Using the Critical Inci-
dent Technique as a Methodology,” International Journal of Hospitality Management, 17, 23-32.
Chen, Qimei and William D. Wells (2001), “.Com Satisfaction and .Com Dissatisfaction: One or Two Constructs?” in Advances in Consumer Research,
Mary C. Gilly and Joan Meyers-Levy, eds. Provo, UT: Association for Consumer Research, 34-39.
Chung, Beth and K. Douglas Hoffman (1998), “Critical Incidents: Service Failures That Matter Most,” Cornell Hotel and Restaurant Administration
Quarterly, 39 (June), 66-71.
Curren, Mary T. and Valerie S. Folkes (1987), “Attributional Influences on Consumers’ Desires to Communicate About Products,” Psychology and Mar-
keting, 4 (Spring), 31-45.
Dant, Rajiv P. and Patrick L. Schul (1992), “Conflict Resolution Processes in Contractual Channels of Distribution,” Journal of Marketing, 56 (January),
38-54.
Davis, J. Charlene and Scott R. Swanson (2001), “Navigating Satisfactory and Dissatisfactory Classroom Incidents,” Journal of Education for Business, 76
(May/June), 245-50.
de Ruyter, Ko, Hans Kasper, and Martin Wetzels (1995), “Internal Service Quality in a Manufacturing Firm: A Review of Critical Encounters,” New Zea-
land Journal of Business, 17 (2), 67-80.
de Ruyter, Ko, Debra S. Perkins, and Martin Wetzels (1995), "Consumer-Defined Service Expectations and Post Purchase Dissatisfaction in Moderately-Priced Restaurants: A Cross-National Study," Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 8, 177-87.
de Ruyter, Ko and Norbert Scholl (1994), "Incident-Based Measurement of Patient Satisfaction/Dissatisfaction: A Dutch Case," Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 7, 96-106.
de Ruyter, Ko and Ariane von Raesfeld Meijer (1994), "Transactional Analysis of Relationship Marketing: Paradox or Paradigm?" in Relationship Marketing: Theory, Methods and Applications, Jagdish N. Sheth and Atul Parvatiyar, eds. Atlanta, GA: Center for Relationship Marketing, Emory University, 1-8.
de Ruyter, Ko, Martin Wetzels, and Marcel van Birgelen (1999), "How Do Customers React to Critical Service Encounters? A Cross-Sectional Perspective," Total Quality Management, 10 (8), 1131-45.
Derbaix, Christian and Joëlle Vanhamme (2003), “Inducing Word-of-Mouth by Eliciting Surprise—A Pilot Investigation,” Journal of Economic Psychol-
ogy, 24 (February), 99-116.
Edvardsson, Bo (1988), “Service Quality in Customer Relationships: A Study of Critical Incidents in Mechanical Engineering Companies,” Service Indus-
tries Journal, 8 (4), 427-45.
 (1992), “Service Breakdowns: A Study of Critical Incidents in an Airline,” International Journal of Service Industry Management, 3 (4), 17-29.
 (1998), “Causes of Customer Dissatisfaction—Studies of Public Transport by the Critical-Incident Method,” Managing Service Quality, 8 (3),
189-97.
 and Tore Strandvik (2000), “Is a Critical Incident Critical for a Customer Relationship?” Managing Service Quality, 10 (2), 82-91.
Feinberg, Richard A. and Ko de Ruyter (1995), “Consumer-Defined Service Quality in International Retailing,” Total Quality Management, 6 (March), 61-67.
Feinberg, Richard A., Richard Widdows, Marlaya Hirsch-Wyncott, and Charles Trappey (1990), "Myth and Reality in Customer Service: Good and Bad Service Sometimes Leads to Repurchase," Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 3, 112-14.
Folkes, Valerie S. (1984), “Consumer Reactions to Product Failure: An Attributional Approach,” Journal of Consumer Research, 10 (March), 398-409.
Frankel, Robert and Scott R. Swanson (2002), “The Impact of Faculty-Student Interactions on Teaching Behavior: An Investigation of Perceived Student
Encounter Orientation, Interactive Confidence, and Interactive Practice,” Journal of Education for Business, 78 (November/December), 85-91.
Friman, Margareta and Bo Edvardsson (2002), “A Content Analysis of Complaints and Compliments,” in Frontiers in Services Marketing, Jos Lemmink
and Ko de Ruyter, eds. Maastricht, the Netherlands: Maastricht University, 37.
Friman, Margareta and Bo Edvardsson (2003), "A Content Analysis of Complaints and Compliments," Managing Service Quality, 13 (1), 20-26.
Gabbott, Mark and Gillian Hogg (1996), “The Glory of Stories: Using Critical Incidents to Understand Service Evaluation in the Primary Healthcare Con-
text,” Journal of Marketing Management, 12, 493-503.
Gilbert, David C. and Lisa Morris (1995), “The Relative Importance of Hotels and Airlines to the Business Traveller,” International Journal of Contempo-
rary Hospitality Management, 7 (6), 19-23.
Goodwin, Cathy, Stephen J. Grove, and Raymond P. Fisk (1996), “‘Collaring the Cheshire Cat’: Studying Customers’ Services Experience through Meta-
phor,” Service Industries Journal, 16 (October), 421-42.
Gremler, Dwayne D. and Mary Jo Bitner (1992), “Classifying Service Encounter Satisfaction across Industries,” in AMA Winter Educators’ Conference
Proceedings: Marketing Theory and Applications, Chris T. Allen and Thomas J. Madden, eds. Chicago: American Marketing Association, 111-18.
Gremler, Dwayne D., Mary Jo Bitner, and Kenneth R. Evans (1994), "The Internal Service Encounter," International Journal of Service Industry Management, 5 (2), 34-55.
Gremler, Dwayne D., Mary Jo Bitner, and Kenneth R. Evans (1995), "The Internal Service Encounter," Logistics Information Management, 8 (4), 28-34.
Gremler, Dwayne D., Shannon B. Rinaldo, and Scott W. Kelley (2002), "Rapport-Building Strategies Used by Service Employees: A Critical Incident Study," in AMA Summer Educators' Conference: Enhancing Knowledge Development in Marketing, William J. Kehoe and John H. Lindgren Jr., eds. Chicago: American Marketing Association, 73-74.
Grove, Stephen J. and Raymond P. Fisk (1997), “The Impact of Other Customers on Service Experiences: A Critical Incident Examination of ‘Getting
Along,’” Journal of Retailing, 73 (Spring), 63-85.
Grove, Stephen J., Raymond P. Fisk, and Michael J. Dorsch (1998), "Assessing the Theatrical Components of the Service Encounter: A Cluster Analysis Examination," Service Industries Journal, 18 (3), 116-34.
Guiry, Michael (1992), “Consumer and Employee Roles in Service Encounters,” in Advances in Consumer Research, John Sherry and Brian Sternthal, eds.
Provo, UT: Association for Consumer Research, 666-72.
Guskey, Audrey and Robert Heckman (2001), “Service Rules: How Customers Get Better Service,” in Association of Marketing Theory and Practice: Ex-
panding Marketing Horizons into the 21st Century, Brenda Ponsford, ed. Jekyll Island, GA: Thiel College, 5-10.
Hare, Caroline, David Kirk, and Tim Lang (1999), “Identifying the Expectations of Older Food Consumers,” Journal of Marketing Practice: Applied Mar-
keting Science, 5 (6/7/8), 213-32.
, , and  (2001), “The Food Shopping Experience of Older Consumers in Scotland: Critical Incidents,” International Journal of Retail
& Distribution Management, 29 (1), 25-40.
Harris, Richard, Kim Harris, and Steve Baron (2003), “Theatrical Service Experiences: Dramatic Script Development with Employees,” International
Journal of Service Industry Management, 14 (2), 184-99.
Hausknecht, Douglas (1988), “Emotional Measure of Satisfaction/Dissatisfaction,” Journal of Consumer Satisfaction, Dissatisfaction, Complaining Be-
havior, 1, 25-33.
Heckman, Robert and Audrey Guskey (1998), “Sources of Customer Satisfaction and Dissatisfaction with Information Technology Help Desks,” Journal
of Market Focused Management, 3 (1), 59-89.
Hedaa, Laurids (1996), “Customer Acquisition in Sticky Business Markets,” International Business Review, 5 (5), 509-30.
Hoffman, K. Douglas and Beth G. Chung (1999), “Hospitality Recovery Strategies: Customer Preference Versus Firm Use,” Journal of Hospitality & Tour-
ism Research, 23 (February), 71-84.
Hoffman, K. Douglas, Scott W. Kelley, and Beth C. Chung (2003), "A CIT Investigation of Servicescape Failures and Associated Recovery Strategies," Journal of Services Marketing, 17 (4), 322-40.
Hoffman, K. Douglas, Scott W. Kelley, and Holly M. Rotalsky (1995), "Tracking Service Failures and Employee Recovery Efforts," Journal of Services Marketing, 9 (Spring), 49-61.
Hoffman, K. Douglas, Scott W. Kelley, and Laure M. Soulage (1995), "Customer Defection Analysis: A Critical Incident Approach," in 1995 Summer AMA Educators' Conference Proceedings: Enhancing Knowledge Development in Marketing, Barbara B. Stern and George M. Zinkhan, eds. Chicago: American Marketing Association, 346-52.
Houston, Mark B. and Lance A. Bettencourt (1999), “But That’s Not Fair! An Exploratory Study of Student Perceptions of Instructor Fairness,” Journal of
Marketing Education, 21 (August), 84-96.
Huntley, Julie K. (1998), "Critical Cross-Functional Interactions: Foundation for Relationship Quality," in AMA Summer Educators' Conference Proceedings: Enhancing Knowledge Development in Marketing, Ronald C. Goodstein and Scott B. MacKenzie, eds. Chicago: American Marketing Association, 70.
Jackson, Mervyn S., Gerard N. White, and Claire L. Schmierer (1996), "Tourism Experiences within an Attributional Framework," Annals of Tourism Research, 23 (4), 798-810.
Johnston, Robert (1995), “The Determinants of Service Quality: Satisfiers and Dissatisfiers,” International Journal of Service Industry Management, 6 (5),
53-71.
 (1995), “Service Failure and Recovery: Impact, Attributes, and Process,” in Advances in Services Marketing and Management, Vol. 4, Teresa A.
Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 211-28.
 (1997), “Identifying the Critical Determinants of Service Quality in Retail Banking: Importance and Effect,” International Journal of Bank Mar-
keting, 15 (4), 111-16.
Jones, M. A. (1999), “Entertaining Shopping Experiences: An Exploratory Investigation,” Journal of Retailing and Consumer Services, 6 (July), 129-39.
Jun, Minjoon and Shaohan Cai (2001), “The Key Determinants of Internet Banking Service Quality: A Content Analysis,” International Journal of Bank
Marketing, 19 (7), 276-91.
Keaveney, Susan M. (1995), “Customer Switching Behavior in Service Industries: An Exploratory Study,” Journal of Marketing, 59 (April), 71-82.
Kelley, Scott W., K. Douglas Hoffman, and Mark A. Davis (1993), “A Typology of Retail Failures and Recoveries,” Journal of Retailing, 69 (Winter), 429-54.
Kellogg, Deborah L., William E. Youngdahl, and David E. Bowen (1997), "On the Relationship between Customer Participation and Satisfaction: Two Frameworks," International Journal of Service Industry Management, 8 (3), 206-19.
Koelemeijer, Kitty (1995), “The Retail Service Encounter: Identifying Critical Service Experiences,” in Innovation Trading, Paul Kunst and Jos Lemmink,
eds. London: Paul Chapman, 29-43.
Lewis, Barbara R. and Emma Clacher (2001), “Service Failure and Recovery in UK Theme Parks: The Employees’ Perspective,” International Journal of
Contemporary Hospitality Management, 13 (4), 166-75.
Lewis, Barbara R. and Sotiris Spyrakopoulos (2001), "Service Failures and Recovery in Retail Banking: The Customers' Perspective," International Journal of Bank Marketing, 19 (1), 37-47.
Lidén, Sara Björlin and Per Skålén (2003), “The Effect of Service Guarantees on Service Recovery,” International Journal of Service Industry Manage-
ment, 14 (1), 36-58.
Liljander, Veronica (1999), “Consumer Satisfaction with Complaint Handling Following a Dissatisfactory Experience with Car Repair,” in European Ad-
vances in Consumer Research, Vol. 4, Bernard Dubois, Tina Lowrey, L. J. Shrum, and Marc Vanhuele, eds. Provo, UT: Association for Consumer Re-
search, 270-75.
Lindsay, Valerie, Doren Chadee, Jan Mattsson, Robert Johnston, and Bruce Millett (2003), “Relationships, the Role of Individuals and Knowledge Flows
in the Internationalization of Service Firms,” International Journal of Service Industry Management, 14 (1), 7-35.
Lockshin, Larry and Gordon McDougall (1998), “Service Problems and Recovery Strategies: An Examination of the Critical Incident Technique in a
Business-to-Business Market,” International Journal of Retail & Distribution Management, 26 (11), 429-38.
Mack, Rhonda, Rene Mueller, John Crotts, and Amanda Broderick (2000), “Perceptions, Corrections, and Defections: Implications for Service Recovery
in the Restaurant Industry,” Managing Service Quality, 10 (6), 339-46.
Maddox, R. Neil (1981), “Two-Factor Theory and Consumer Satisfaction: Replication and Extension,” Journal of Consumer Research, 8 (June), 97-102.
Malafi, Teresa N., Marie A. Cini, Sarah L. Taub, and Jennifer Bertolami (1993), “Social Influence and the Decision to Complain: Investigations on the Role
of Advice,” Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 6, 81-89.
Mallak, Larry A., David M. Lyth, Suzan D. Olson, Susan M. Ulshafer, and Frank J. Sardone (2003), “Culture, the Built Environment and Healthcare Orga-
nizational Performance,” Managing Service Quality, 13 (1), 27-38.
Mallalieu, Lynnea (1999), “An Examination of Interpersonal Influence in Consumption and Non-Consumption Domains,” in Advances in Consumer Re-
search, Eric J. Arnould and Linda M. Scott, eds. Provo, UT: Association for Consumer Research, 196-202.
Mangold, W. Glynn, Fred Miller, and Gary R. Brockway (1999), “Word-of-Mouth Communication in the Service Marketplace,” Journal of Services Mar-
keting, 13 (1), 73-89.
Martin, Charles L. (1996), "Consumer-to-Consumer Relationships: Satisfaction with Other Consumers' Public Behavior," Journal of Consumer Affairs, 30 (1), 146-69.
Mattsson, Jan (2000), “Learning How to Manage Technology in Services Internationalisation,” Service Industries Journal, 20 (January), 22-39.
Meuter, Matthew L., Amy L. Ostrom, Robert I. Roundtree, and Mary Jo Bitner (2000), “Self-Service Technologies: Understanding Customer Satisfaction
with Technology-Based Service Encounters,” Journal of Marketing, 64 (July), 50-64.
Michel, Stefan (2001), “Analyzing Service Failures and Recoveries: A Process Approach,” International Journal of Service Industry Management, 12 (1),
20-33.
Mick, David Glen and Michelle DeMoss (1990), “Self-Gifts: Phenomenological Insights from Four Contexts,” Journal of Consumer Research, 17 (De-
cember), 322-32.
Mick, David Glen and Michelle DeMoss (1990), "To Me from Me: A Descriptive Phenomenology of Self-Gifts," in Advances in Consumer Research, Marvin Goldberg, Gerald Gorn, and Richard Pollay, eds. Provo, UT: Association for Consumer Research, 677-82.
Mick, David Glen and Michelle DeMoss (1992), "Further Findings on Self-Gifts: Products, Qualities, and Socioeconomic Correlates," in Advances in Consumer Research, John F. Sherry Jr. and Brian Sternthal, eds. Provo, UT: Association for Consumer Research, 140-46.
Miller, Janis L., Christopher W. Craighead, and Kirk R. Karwan (2000), “Service Recovery: A Framework and Empirical Investigation,” Journal of Opera-
tions Management, 18 (4), 387-400.
Moenaert, Rudy K. and William E. Souder (1996), “Context and Antecedents of Information Utility at the R&D/Marketing Interface,” Management Sci-
ence, 42 (November), 1592-1610.
Mohr, Lois A. and Mary Jo Bitner (1991), “Mutual Understanding between Customers and Employees in Service Encounters,” in Advances in Consumer
Research, Vol. 18, Rebecca H. Holman and Michael R. Solomon, eds. Provo, UT: Association for Consumer Research, 611-17.
Mohr, Lois A. and Mary Jo Bitner (1995), "Process Factors in Service Delivery: What Employee Effort Means to Customers," in Advances in Services Marketing and Management, Vol. 4, Teresa A. Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 91-117.
Mohr, Lois A. and Mary Jo Bitner (1995), "The Role of Employee Effort in Satisfaction with Service Transactions," Journal of Business Research, 32 (March), 239-52.
Mueller, Rene D., Adrian Palmer, Rhonda Mack, and R. McMullan (forthcoming), “Service in the Restaurant Industry: An American and Irish Comparison
of Service Failures and Recoveries,” Service Industries Journal.
Neuhaus, Patricia (1996), “Critical Incidents in Internal Customer-Supplier Relationships: Results of an Empirical Study,” in Advances in Services Mar-
keting and Management Research and Practice, Vol. 5, Teresa A. Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 283-313.
Nyquist, Jody D., Mary Jo Bitner, and Bernard H. Booms (1985), “Identifying Communication Difficulties in the Service Encounter: A Critical Incident
Approach,” in The Service Encounter: Managing Employee/Customer Interaction in Service Businesses, John A. Czepiel, Michael R. Solomon, and
Carol F. Surprenant, eds. Lexington, MA: D. C. Heath, 195-212.
Odekerken-Schröder, Gaby, Marcel van Birgelen, Jos Lemmink, Ko de Ruyter, and Martin Wetzels (2000), “Moments of Sorrow and Joy: An Empirical
Assessment of the Complementary Value of Critical Incidents in Understanding Customer Service Evaluations,” European Journal of Marketing, 34
(1/2), 107-25.
Olsen, Morten J. S. and Bertil Thomasson (1992), “Studies in Service Quality with the Aid of Critical Incidents and Phenomenography,” in QUIS 3: Quality
in Services Conference, Eberhard E. Scheuing, Bo Edvardsson, David Lascelles, and Charles H. Little, eds. Jamaica, NY: International Service Quality
Association, 481-505.
Palmer, Adrian, Rosalind Beggs, and Caroline Keown-McMullan (2000), “Equity and Repurchase Intention Following Service Failure,” Journal of Ser-
vices Marketing, 14 (6), 513-28.
Paraskevas, Alexandros (2001), “Internal Service Encounters in Hotels: An Empirical Study,” International Journal of Contemporary Hospitality Man-
agement, 13 (6), 285-92.
Parker, Cathy and Brian P. Mathews (2001), “Customer Satisfaction: Contrasting Academic and Consumers’ Interpretations,” Marketing Intelligence &
Planning, 19 (1), 38-44.
Ruth, Julie A., Cele C. Otnes, and Frederic F. Brunel (1999), "Gift Receipt and the Reformulation of Interpersonal Relationships," Journal of Consumer Research, 25 (March), 385-400.
Schulp, Jan A. (1999), “Critical Incidents in Dutch Consumer Press IV: The Delighted Customer: Customer-Service Provider Interaction in Achieving
Outstanding Service,” in Proceedings of the 9th Workshop on Quality Management in Services, Jan Mattsson, Peter Docherty, and Jos Lemmink, eds.
Brussels: European Institute for Advanced Studies in Management, 1-15.
 (1999), “Critical Incidents in Dutch Consumer Press: Why Dissatisfied Customers Complain with Third Parties,” in Service Quality and Manage-
ment, Vol. 1, Paul Kunst, Jos Lemmink, and Bernd Stauss, eds. Wiesbaden, Germany: Deutscher Universitäts Verlag, 111-59.
Shepherd, C. David and Joseph O. Rentz (1990), “A Method for Investigating the Cognitive Processes and Knowledge Structures of Expert Salespeople,”
Journal of Personal Selling and Sales Management, 10 (Fall), 55-70.
Singh, Jagdip and Robert E. Wilkes (1996), “When Consumers Complain: A Path Analysis of the Key Antecedents of Consumer Complaint Response Esti-
mates,” Journal of the Academy of Marketing Science, 24 (Fall), 350-65.
Snellman, Kaisa and Tiina Vihtkari (2003), “Customer Complaining Behavior in Technology-Based Service Encounters,” International Journal of Service
Industry Management, 14 (2), 217-31.
Spake, Deborah F., Sharon E. Beatty, and Chang-Jo Yoo (1998), “Relationship Marketing from the Consumer’s Perspective: A Comparison of Consumers
in South Korea and the United States,” in Asia Pacific Advances in Consumer Research, Kineta Hung and Kent Monroe, eds. Provo, UT: Association for
Consumer Research, 131-37.
Spivey, W. Austin and David F. Caldwell (1982), “Improving MBA Teaching Evaluation: Insights from Critical Incident Methodology,” Journal of Market-
ing Education, 4 (Spring), 25-30.
Stan, Simona, Kenneth R. Evans, Jeffrey L. Stinson, and Charles Wood (2002), “Critical Customer Experiences in Professional Business-to-Business Ser-
vice Exchanges: Impact on Overall,” in AMA Summer Educators’ Conference: Enhancing Knowledge Development in Marketing, William J. Kehoe
and John H. Lindgren Jr., eds. Chicago: American Marketing Association, 113-14.
Stauss, Bernd and Bert Hentschel (1992), “Attribute-Based versus Incident-Based Measurement of Service Quality: Results of an Empirical Study in the
German Car Service Industry,” in Quality Management in Services, Paul Kunst and Jos Lemmink, eds. Assen/Maastricht, the Netherlands: Van
Gorcum, 59-78.
Stauss, Bernd and Paul Mang (1999), "'Culture Shocks' in Inter-Cultural Service Encounters?" Journal of Services Marketing, 13 (4/5), 329-46.
Stauss, Bernd and Bernhard Weinlich (1997), "Process-Oriented Measurement of Service Quality: Applying the Sequential Incident Technique," European Journal of Marketing, 31 (1), 33-55.
Stokes, David (2000), “Entrepreneurial Marketing: A Conceptualization from Qualitative Research,” Qualitative Market Research: An International Jour-
nal, 3 (1), 47-54.
Strandvik, Tore and Margareta Friman (1998), “Negative Critical Incident Mapping,” in QUIS 6 Pursuing Service Excellence: Practices and Insights,
Eberhard E. Scheuing, Stephen W. Brown, Bo Edvardsson, and R. Johnston, eds. Jamaica, NY: International Service Quality Association, 161-69.
Strandvik, Tore and Veronica Liljander (1994), "Relationship Strength in Bank Services," in Relationship Marketing: Theory, Methods and Applications, Jagdish N. Sheth and Atul Parvatiyar, eds. Atlanta, GA: Center for Relationship Marketing, Emory University, 1-4.
Strutton, David, Lou E. Pelton, and James R. Lumpkin (1993), "The Influence of Psychological Climate on Conflict Resolution Strategies in Franchise Relationships," Journal of the Academy of Marketing Science, 21 (Summer), 207-15.
Sundaram, D. S., Kaushik Mitra, and Cynthia Webster (1998), “Word-of-Mouth Communications: A Motivational Analysis,” in Advances in Consumer
Research, Joseph W. Alba and J. Wesley Hutchinson, eds. Provo, UT: Association for Consumer Research, 527-31.
Swan, John E. and Linda Jones Combs (1976), “Product Performance and Consumer Satisfaction: A New Concept,” Journal of Marketing, 40 (April), 25-33.
Swan, John E. and C. P. Rao (1975), "The Critical Incident Technique: A Flexible Method for the Identification of Salient Product Attributes," Journal of the Academy of Marketing Science, 3 (Summer), 296-308.
Swanson, Scott R. and J. Charlene Davis (2000), “A View from the Aisle: Classroom Successes, Failures and Recovery Strategies,” Marketing Education
Review, 10 (Summer), 17-25.
Swanson, Scott R. and Robert Frankel (2002), "A View from the Podium: Classroom Successes, Failures, and Recovery Strategies," Marketing Education Review, 12 (Summer), 25-35.
Swanson, Scott R. and Scott W. Kelley (2001), "Attributions and Outcomes of the Service Recovery Process," Journal of Marketing Theory and Practice, 9 (Fall), 50-65.
Swanson, Scott R. and Scott W. Kelley (2001), "Service Recovery Attributions and Word-of-Mouth Intentions," European Journal of Marketing, 35 (1/2), 194-211.
Tjosvold, Dean and David Weicker (1993), “Cooperative and Competitive Networking by Entrepreneurs: A Critical Incident Study,” Journal of Small
Business Management, 31 (January), 11-21.
van Dolen, Willemijn, Jos Lemmink, Jan Mattsson, and Ingrid Rhoen (2001), “Affective Consumer Responses in Service Encounters: The Emotional Con-
tent in Narratives of Critical Incidents,” Journal of Economic Psychology, 22 (June), 359-76.
Verma, Harsh V. (2003), “Customer Outrage and Delight,” Journal of Services Research, 3 (April-September), 119-33.
Wang, Kuo-Ching, An-Tien Hsieh, and Tzung-Cheng Huan (2000), “Critical Service Features in Group Package Tour: An Exploratory Research,” Tourism
Management, 21 (April), 177-89.
Warden, Clyde A., Tsung-Chi Liu, Chi-Tsun Huang, and Chi-Hsun Lee (2003), “Service Failures Away from Home: Benefits in Intercultural Service En-
counters,” International Journal of Service Industry Management, 14 (4), 436-57.
Weatherly, Kristopher A. and David A. Tansik (1993), “Tactics Used by Customer-Contact Workers: Effects of Role Stress, Boundary Spanning, and Con-
trol,” International Journal of Service Industry Management, 4 (3), 4-17.
Webb, Dave (1998), “Segmenting Police ‘Customers’ on the Basis of Their Service Quality Expectations,” The Service Industries Journal, 18 (January),
72-100.
Wels-Lips, Inge, Marleen Van der Ven, and Rik Pieters (1998), "Critical Services Dimensions: An Empirical Investigation Across Six Industries," International Journal of Service Industry Management, 9 (3), 286-309.
Wong, Amy and Amrik Sohal (2003), “A Critical Incident Approach to the Examination of Customer Relationship Management in a Retail Chain: An Ex-
ploratory Study,” Qualitative Market Research: An International Journal, 6 (4), 248-62.
Youngdahl, William E. and Deborah L. Kellogg (1994), “Customer Costs of Service Quality: A Critical Incident Study,” in Advances in Services Marketing
and Management, Vol. 3, Teresa A. Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 149-73.
Youngdahl, William E. and Deborah L. Kellogg (1997), "The Relationship between Service Customers' Quality Assurance Behaviors, Satisfaction, and Effort: A Cost of Quality Perspective," Journal of Operations Management, 15 (February), 19-32.

REFERENCES
Arnould, Eric J. and Linda L. Price (1993), "River Magic: Extraordinary Experience and the Extended Service Encounter," Journal of Consumer Research, 20 (June), 24-45.
Baker, Stacey Menzel, Carol Kaufman-Scarborough, and Jonna Holland (2002), "Should I Stay or Should I Go? Marginalized Consumers' Perceptions of 'Welcome' in Retail Environments," in Proceedings of the American Marketing Association Marketing and Public Policy Conference: New Directions for Public Policy, Les Carlson and Russ Lacziack, eds. Chicago: American Marketing Association, 79-81.
Bell, James, David Gilbert, Andrew Lockwood, and Chris Dutton (1999), "'Getting It Wrong' in Food Retailing: The Shopping Process Explored," in 10th International Conference on Research in the Distributive Trades, A. Broadbridge, ed. Stirling, Scotland: University of Stirling.
Bitner, Mary Jo, Bernard H. Booms, and Lois A. Mohr (1994), "Critical Service Encounters: The Employee's View," Journal of Marketing, 58 (October), 95-106.
Bitner, Mary Jo, Bernard H. Booms, and Mary Stanfield Tetreault (1990), "The Service Encounter: Diagnosing Favorable and Unfavorable Incidents," Journal of Marketing, 54 (January), 71-84.
Booms, Bernard H. and Mary Jo Bitner (1981), "Marketing Strategies and Organization Structures for Service Firms," in Marketing of Services: 1981 Special Educators' Conference Proceedings, James H. Donnelly and William R. George, eds. Chicago: American Marketing Association, 47-51.
Burns, Alvin C., Laura A. Williams, and James Trey Maxham (2000), "Narrative Text Biases Attending the Critical Incidents Technique," Qualitative Market Research: An International Journal, 3 (4), 178-86.
Chell, Elizabeth (1998), "Critical Incident Technique," in Qualitative Methods and Analysis in Organizational Research: A Practical Guide, Gillian Symon and Catherine Cassell, eds. Thousand Oaks, CA: Sage, 51-72.
Chell, Elizabeth and Luke Pittaway (1998), "A Study of Entrepreneurship in the Restaurant and Café Industry: Exploratory Work Using the Critical Incident Technique as a Methodology," International Journal of Hospitality Management, 17, 23-32.
Cohen, Jacob (1960), "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement, 20 (Spring), 37-46.
Cooper, Harris (1998), Synthesizing Research (3rd ed.). Thousand Oaks, CA: Sage.
Corbin, Juliet and Anselm Strauss (1990), "Grounded Theory Research: Procedures, Canons, and Evaluative Criteria," Qualitative Sociology, 13 (1), 3-21.
Denzin, Norman K. (1978), The Research Act: A Theoretical Introduction to Sociological Methods (2nd ed.). New York: McGraw-Hill.
de Ruyter, Ko, Hans Kasper, and Martin Wetzels (1995), "Internal Service Quality in a Manufacturing Firm: A Review of Critical Encounters," New Zealand Journal of Business, 17 (2), 67-80.
de Ruyter, Ko, Debra S. Perkins, and Martin Wetzels (1995), "Consumer-Defined Service Expectations and Post Purchase Dissatisfaction in Moderately-Priced Restaurants: A Cross-National Study," Journal of Consumer Satisfaction, Dissatisfaction, and Complaining Behavior, 8, 177-87.
Derbaix, Christian and Joëlle Vanhamme (2003), "Inducing Word-of-Mouth by Eliciting Surprise—A Pilot Investigation," Journal of Economic Psychology, 24 (February), 99-116.
Edvardsson, Bo (1992), "Service Breakdowns: A Study of Critical Incidents in an Airline," International Journal of Service Industry Management, 3 (4), 17-29.
Harris, Richard, Kim Harris, and Steve Baron (2003), "Theatrical Service Experiences: Dramatic Script Development with Employees," International Journal of Service Industry Management, 14 (2), 184-99.
Hausknecht, Douglas (1988), "Emotional Measure of Satisfaction/Dissatisfaction," Journal of Consumer Satisfaction, Dissatisfaction, Complaining Behavior, 1, 25-33.
Hedaa, Laurids (1996), "Customer Acquisition in Sticky Business Markets," International Business Review, 5 (5), 509-30.
Hoffman, K. Douglas, Scott W. Kelley, and Beth C. Chung (2003), "A CIT Investigation of Servicescape Failures and Associated Recovery Strategies," Journal of Services Marketing, 17 (4), 322-40.
Hoffman, K. Douglas, Scott W. Kelley, and Holly M. Rotalsky (1995), "Tracking Service Failures and Employee Recovery Efforts," Journal of Services Marketing, 9 (Spring), 49-61.
Holsti, Ole R. (1969), Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley.
Hopkinson, Gillian C. and Sandra Hogarth-Scott (2001), "'What Happened Was . . .': Broadening the Agenda for Storied Research," Journal of Marketing Management, 17 (1/2), 27-47.
Johnston, Robert (1995), "Service Failure and Recovery: Impact, Attributes, and Process," in Advances in Services Marketing and Management, Vol. 4, Teresa A. Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 211-28.
Kassarjian, Harold H. (1977), "Content Analysis in Consumer Research," Journal of Consumer Research, 4 (June), 8-18.
Keaveney, Susan M. (1995), "Customer Switching Behavior in Service Industries: An Exploratory Study," Journal of Marketing, 59 (April), 71-82.
Kelley, Scott W., K. Douglas Hoffman, and Mark A. Davis (1993), "A Typology of Retail Failures and Recoveries," Journal of Retailing, 69 (Winter), 429-54.
Kerlinger, Fred N. (1986), Foundations of Behavioral Research (3rd ed.). Fort Worth, TX: Holt, Rinehart & Winston.
Koelemeijer, Kitty (1995), "The Retail Service Encounter: Identifying Critical Service Experiences," in Innovation Trading, Paul Kunst and Jos Lemmink, eds. London: Paul Chapman, 29-43.
Kolbe, Richard H. and Melissa S. Burnett (1991), "Content-Analysis Research: An Examination of Applications with Directives for Improv-
agement, 3 (4), 17-29. ing Research Reliability and Objectivity,” Journal of Consumer
 and Inger Roos (2001), “Critical Incident Techniques: Towards a Research, 18 (September), 243-50.
Framework for Analyzing the Criticality of Critical Incidents,” Inter- LaForge, Mary C., Stephen J. Grove, and Louis H. Stone (2002), “The
national Journal of Service Industry Management, 12 (3), 251-68. Sales Introspection Technique: Principles and Application,” Market-
 and Tore Strandvik (2000), “Is a Critical Incident Critical for a ing Intelligence & Planning, 20 (3), 168-73.
Customer Relationship?” Managing Service Quality, 10 (2), 82-91. Lewis, Barbara R. and Sotiris Spyrakopoulos (2001), “Service Failures
Flanagan, John C. (1954), “The Critical Incident Technique,” Psycholog- and Recovery in Retail Banking: The Customers’ Perspective,” Inter-
ical Bulletin, 51 (July), 327-58. national Journal of Bank Marketing, 19 (1), 37-47.
Folkes, Valerie S. (1984), “Consumer Reactions to Product Failure: An Mangold, W. Glynn, Fred Miller, and Gary R. Brockway (1999), “Word-
Attributional Approach,” Journal of Consumer Research, 10 of-Mouth Communication in the Service Marketplace,” Journal of
(March), 398-409. Services Marketing, 13 (1), 73-89.
Gabbott, Mark and Gillian Hogg (1996), “The Glory of Stories: Using Martin, Charles L. (1996), “Consumer-to Consumer Relationships: Sat-
Critical Incidents to Understand Service Evaluation in the Primary isfaction with Other Consumers’ Public Behavior,” Journal of Con-
Healthcare Context,” Journal of Marketing Management, 12, 493- sumer Affairs, 30 (1), 146-69.
503. Mattsson, Jan (2000), “Learning How to Manage Technology in Services
Gardner, Meryl Paula (1985), “Mood States and Consumer Behavior: A Internationalisation,” Service Industries Journal, 20 (January), 22-39.
Critical Review,” Journal of Consumer Research, 12 (December), Meuter, Matthew L., Amy L. Ostrom, Robert I. Roundtree, and Mary Jo
281-300. Bitner (2000), “Self-Service Technologies: Understanding Customer
Gremler, Dwayne D. and Mary Jo Bitner (1992), “Classifying Service Satisfaction with Technology-Based Service Encounters,” Journal of
Encounter Satisfaction across Industries,” in AMA Winter Educators’ Marketing, 64 (July), 50-64.
Conference Proceedings: Marketing Theory and Applications, Chris Michel, Stefan (2001), “Analyzing Service Failures and Recoveries: A
T. Allen and Thomas J. Madden, eds. Chicago: American Marketing Process Approach,” International Journal of Service Industry Man-
Association, 111-18. agement, 12 (1), 20-33.
Grove, Stephen J. and Raymond P. Fisk (1997), “The Impact of Other Mick, David Glen and Michelle DeMoss (1990), “Self-Gifts: Phenomen-
Customers on Service Experiences: A Critical Incident Examination ological Insights from Four Contexts,” Journal of Consumer Re-
of ‘Getting Along,’” Journal of Retailing, 73 (Spring), 63-85. search, 17 (December), 322-32.
Guiry, Michael (1992), “Consumer and Employee Roles in Service Miller, Janis L., Christopher W. Craighead, and Kirk R. Karwan (2000),
Encounters,” in Advances in Consumer Research, John Sherry and “Service Recovery: A Framework and Empirical Investigation,”
Brian Sternthal, eds. Provo, UT: Association for Consumer Research, Journal of Operations Management, 18 (4), 387-400.
666-72.
Neuhaus, Patricia (1996), "Critical Incidents in Internal Customer-Supplier Relationships: Results of an Empirical Study," in Advances in Services Marketing and Management Research and Practice, Vol. 5, Teresa A. Swartz, David E. Bowen, and Stephen W. Brown, eds. Greenwich, CT: JAI, 283-313.
Odekerken-Schröder, Gaby, Marcel van Birgelen, Jos Lemmink, Ko de Ruyter, and Martin Wetzels (2000), "Moments of Sorrow and Joy: An Empirical Assessment of the Complementary Value of Critical Incidents in Understanding Customer Service Evaluations," European Journal of Marketing, 34 (1/2), 107-25.
Olsen, Morten J. S. and Bertil Thomasson (1992), "Studies in Service Quality with the Aid of Critical Incidents and Phenomenography," in QUIS 3: Quality in Services Conference, Eberhard E. Scheuing, Bo Edvardsson, David Lascelles, and Charles H. Little, eds. Jamaica, NY: International Service Quality Association, 481-505.
Perreault, William D. and Laurence E. Leigh (1989), "Reliability of Nominal Data Based on Qualitative Judgments," Journal of Marketing Research, 26 (May), 135-48.
Price, Linda L. and Eric J. Arnould (1999), "Commercial Friendships: Service Provider-Client Relationships in Context," Journal of Marketing, 63 (October), 38-56.
Ronan, William W. and Gary P. Latham (1974), "The Reliability and Validity of the Critical Incident Technique: A Closer Look," Studies in Personnel Psychology, 6 (Spring), 53-64.
Roos, Inger (2002), "Methods of Investigating Critical Incidents," Journal of Service Research, 4 (February), 193-204.
Ruth, Julie A., Cele C. Otnes, and Frederic F. Brunel (1999), "Gift Receipt and the Reformulation of Interpersonal Relationships," Journal of Consumer Research, 25 (March), 385-400.
Singh, Jagdip and Robert E. Wilkes (1996), "When Consumers Complain: A Path Analysis of the Key Antecedents of Consumer Complaint Response Estimates," Journal of the Academy of Marketing Science, 24 (Fall), 350-65.
Spiegelman, Marvin, Carl Terwilliger, and Franklin Fearing (1953), "The Reliability of Agreement in Content Analysis," Journal of Social Psychology, 37, 175-87.
Stauss, Bernd (1993), "Using the Critical Incident Technique in Measuring and Managing Service Quality," in The Service Quality Handbook, Eberhard E. Scheuing and William F. Christopher, eds. New York: American Management Association, 408-27.
Stauss, Bernd and Paul Mang (1999), "'Culture Shocks' in Inter-Cultural Service Encounters?" Journal of Services Marketing, 13 (4/5), 329-46.
Stauss, Bernd and Bernhard Weinlich (1997), "Process-Oriented Measurement of Service Quality: Applying the Sequential Incident Technique," European Journal of Marketing, 31 (1), 33-55.
Swan, John E. and Linda Jones Combs (1976), "Product Performance and Consumer Satisfaction: A New Concept," Journal of Marketing, 40 (April), 25-33.
Swan, John E. and C. P. Rao (1975), "The Critical Incident Technique: A Flexible Method for the Identification of Salient Product Attributes," Journal of the Academy of Marketing Science, 3 (Summer), 296-308.
Swanson, Scott R. and Scott W. Kelley (2001), "Service Recovery Attributions and Word-of-Mouth Intentions," European Journal of Marketing, 35 (1/2), 194-211.
Tripp, Carolyn (1997), "Services Advertising: An Overview and Summary of Research, 1980-1995," Journal of Advertising, 26 (Winter), 21-38.
van Dolen, Willemijn, Jos Lemmink, Jan Mattsson, and Ingrid Rhoen (2001), "Affective Consumer Responses in Service Encounters: The Emotional Content in Narratives of Critical Incidents," Journal of Economic Psychology, 22 (June), 359-76.
Verma, Harsh V. (2003), "Customer Outrage and Delight," Journal of Services Research, 3 (April-September), 119-33.
Walker, Steve and Elise Truly (1992), "The Critical Incidents Technique: Philosophical Foundations and Methodological Implications," in AMA Winter Educators' Conference Proceedings: Marketing Theory and Applications, Vol. 3, Chris T. Allen and Thomas J. Madden, eds. Chicago: American Marketing Association, 270-75.
Warden, Clyde A., Tsung-Chi Liu, Chi-Tsun Huang, and Chi-Hsun Lee (2003), "Service Failures Away from Home: Benefits in Intercultural Service Encounters," International Journal of Service Industry Management, 14 (4), 436-57.
Weber, Robert Phillip (1985), Basic Content Analysis. London: Sage.
Zeithaml, Valarie A. and Mary Jo Bitner (2003), Services Marketing: Integrating Customer Focus across the Firm (3rd ed.). New York: McGraw-Hill.

Dwayne D. Gremler is an associate professor of marketing in the College of Business Administration at Bowling Green State University. He received all three of his degrees (B.A., M.B.A., and Ph.D.) from Arizona State University. His current research interests are in services marketing, particularly in issues related to customer loyalty and retention, relationship marketing, word-of-mouth communication, and service guarantees. His work has been published in several journals, including the Journal of the Academy of Marketing Science, the Journal of Service Research, the International Journal of Service Industry Management, Advances in Services Marketing and Management, and the Journal of Marketing Education. Prior to pursuing an academic career, he worked in the computer industry for 10 years as a software engineer and project manager.