Annu. Rev. Psychol. 2001. 52:471–99
Copyright © 2001 by Annual Reviews. All rights reserved
THE SCIENCE OF TRAINING: A Decade
of Progress
Eduardo Salas1 and Janis A. Cannon-Bowers2
1Department of Psychology and Institute for Simulation & Training, University of Central Florida, Orlando, Florida 32816-1390; e-mail: [email protected]
2Naval Air Warfare Center Training Systems Division, Orlando, Florida 32826-3224; e-mail: [email protected]
Key Words training design, training evaluation, distance training, team training,
transfer of training
■ Abstract This chapter reviews the training research literature reported over the
past decade. We describe the progress in five areas of research: training theory,
training needs analysis, antecedent training conditions, training methods and strategies,
and posttraining conditions. Our review suggests that advancements have been made
that help us understand better the design and delivery of training in organizations, with
respect to theory development as well as the quality and quantity of empirical research.
We have new tools for analyzing requisite knowledge and skills, and for evaluating
training. We know more about factors that influence training effectiveness and transfer
of training. Finally, we challenge researchers to find better ways to translate the results
of training research into practice.
CONTENTS
INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
Initial Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Theoretical Developments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
TRAINING NEEDS ANALYSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Organizational Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Job/Task Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
ANTECEDENT TRAINING CONDITIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Individual Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
Training Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
Training Induction and Pretraining Environment . . . . . . . . . . . . . . . . . . . . . . . . . 480
TRAINING METHODS AND INSTRUCTIONAL STRATEGIES . . . . . . . . . . . . . 481
Specific Learning Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
Learning Technologies and Distance Training . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
Simulation-Based Training and Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
Team Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
POST-TRAINING CONDITIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Training Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Transfer of Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
FINAL OBSERVATIONS, CONCLUSIONS, AND THE FUTURE . . . . . . . . . . . . . 489
INTRODUCTION
In the 30 years since the first review of training in the Annual Review of Psychology,
things have progressed dramatically in terms of both the science and practice
of training. On the practice side, socio-cultural, technological, economic, and
political pressures have all combined to force modern organizations to take a closer
look at their human capital in general, and training in particular (Thayer 1997,
Howard 1995). In fact, now more than ever, organizations must rely on workplace
learning and continuous improvement in order to remain competitive (London &
Moore 1999). In addition, organizations have shifted their views about training
from a separate, stand-alone event to a fully integrated, strategic component of the
organization. New training-related approaches, including action learning, just-in-time training, mentoring, coaching, organizational learning, and managing skill
portfolios are all currently being explored. Finally, modern organizations must cope
with training needs associated with the changing demographics of the population—
both an older and more diverse workforce can be expected as we move into the
new millennium.
It is important to note that improved training comes at a cost. Recent estimates suggest that the investment in training activities in organizations ranges
from $55.3 billion to $200 billion annually (Bassi & Van Buren 1999, McKenna
1990), an investment that has not only created a growing interest in training, but
also in learning technologies and performance-improvement processes, practices,
and services. In fact, there is an increasing concern in organizations that the investment made in training must be justified in terms of improved organizational
performance—increased productivity, profit, or safety; reduced errors; or enhanced market share (e.g. Huselid 1995, Martocchio & Baldwin 1997, Salas et al 2000).
The past 30 years have also witnessed tremendous growth in training research.
This trend has been so pronounced that we are led to conclude that there has
been nothing less than an explosion in training-related research in the past 10
years. More theories, models, empirical results, reviews, and meta-analyses are
available today than ever before. Whether this research is having an effect on
training practice—that is, a meaningful impact on how organizations actually
train—is a matter of some debate. We return to this issue after documenting the
many advances in training research over the past decade.
This is the sixth review of training and development to appear in the Annual
Review of Psychology (see Campbell 1971, Goldstein 1980, Wexley 1984, Latham
1988, Tannenbaum & Yukl 1992). In this review, we focus on research published
from 1992 to January 2000. Like Tannenbaum & Yukl’s (1992) review, ours is selective and descriptive. We focus primarily on research concerned with
the design, delivery, and evaluation of training in work organizations. Our review
is organized as follows. We first discuss the recent theoretical advancements in
training over the past decade. Then we address the relevant research on training
needs analysis, including organization, job/task, and person analysis. Following
this, we address antecedent training conditions (i.e. pretraining variables) that
may enhance or disrupt learning. Next we turn our discussion to research on
training methods and instructional strategies. In this section we discuss recent
developments in simulation-based training, learning approaches, team training,
and the influence of technology on training research and practice. Finally we
review the research on post-training conditions. This includes a discussion of
training evaluation and transfer of training. Wherever appropriate we point out
the research needs and gaps. We conclude with a few observations and trends
and a word about the future. Consistent with previous reviews, this review does
not cover basic issues involved in skill acquisition and learning, organizational
development, socialization in organizations, or educational psychology. We also
do not synthesize the large literature in practitioner-oriented publications.
Initial Observations
Our first observation is that in 30 years of documenting the progress of training
research, it is clear that the field, by any measure, has changed dramatically—
for the better. Now, as recent reviews have documented, training-related theories
abound. As noted earlier, there is also more empirical training-related research
going on—in the field as well as in the lab—than ever before. Researchers are
adopting a systems view of training and are more concerned with the organizational context. There are new models, constructs (e.g. opportunity to perform), and
influences (e.g. technology) in the reported research. The traditional training evaluation paradigm has been expanded, and there are more evaluations being conducted
and reported. There are better tools with which to conduct training evaluations,
and better (and more practical) experimental designs have emerged. The field can
now offer sound, pedagogically based principles and guidelines to practitioners
and instructional developers. Furthermore, the impact of training on performance
and organizational effectiveness is being felt (Bassi & Van Buren 1999), although
clearly more documentation is needed here. Finally, there are more books and
textbooks related to training (e.g. Ford et al 1997, Quinones & Ehrenstein 1997,
Noe 1999, Wexley & Latham 2000, Goldstein 1993; plus over 20 more that have
been reviewed in Personnel Psychology). The science of training has progressed
and matured—it is now truly an exciting and dynamic field. As evidence for this
assertion, we summarize some of the latest theoretical advances in the next section.
Theoretical Developments
There have been some influential theories developed about training since 1992.
In fact, the past decade has offered us myriad new and expanded theoretical
frameworks, as well as concepts and constructs. This thinking is deeper, richer,
more comprehensive, and more focused. More importantly, the field has been
energized by these developments so that empirical work has followed. Some of
these frameworks and concepts are broad, general, and integrating. For example,
Tannenbaum and colleagues provided an integrative framework for all the variables that influence the design and delivery of training (see Tannenbaum et al 1993,
Cannon-Bowers et al 1995). The framework outlines in detail the pretraining and
during-training conditions that may influence learning, as well as the factors that
may facilitate the transfer of skills after training. Kozlowski & Salas (1997), drawing from organizational theory, discussed the importance of characterizing the
factors and processes through which training interventions are implemented and transferred in organizations. Moreover, Kozlowski and colleagues (Kozlowski et al
2000) consider organizational system factors and training design issues that influence the effectiveness of vertical transfer processes. Vertical transfer refers to the
upward propagation of individual-level training outcomes that emerge as team- and
organizational-level outcomes. This issue has been largely neglected by researchers
yet is suggested to be crucial to training effectiveness. Similarly, researchers have
begun to understand and outline the barriers and myths that exist in organizations as
they implement training (Dipboye 1997, Salas et al 1999). In other work,
Kraiger et al (1993) provided new conceptualizations of learning and evaluation theory, approaches, and measurement. These authors expanded Kirkpatrick’s
(1976) evaluation typology by incorporating recent notions in cognitive
psychology.
Other conceptual developments are more focused. For example, Ford et al
(1997) invoked “the opportunity to perform” construct as a way to understand
the transfer of training process. Colquitt et al (2000) summarized (qualitatively
and quantitatively) the literature on training motivation and offered a new, integrative model. Cannon-Bowers & Salas (1997) proposed a framework for how to
conceptualize performance measurement in training. Thayer & Teachout (1995)
developed a model to understand the climate for transfer in organizations, as well
as in-training conditions that enhance transfer. Cannon-Bowers et al (1998) advanced a number of conditions, concepts, and interventions that may enhance
practice. Ford and colleagues have looked at individual differences and learner
control strategies (e.g. Ford et al 1998). Training researchers have also examined
variables such as the pretraining context (e.g. Baldwin & Magjuka 1997, Quinones
1995), conscientiousness and training outcomes (e.g. Colquitt & Simmering 1998,
Martocchio & Judge 1997), individual and situational characteristics that influence
training motivation (e.g. Facteau et al 1995, Mathieu & Martineau 1997), and participation in developmental activities (e.g. Noe & Wilk 1993, Baldwin & Magjuka
1997), just to name a few.
In sum, these theoretical advancements have provided a much needed forum in
which to discuss, debate, analyze, and understand better the design and delivery of
training in organizations. Moreover, they have provided an organized framework
in which systematic research could be couched. Our conclusion is that training
research is no longer atheoretical as charged by our predecessors. We believe the
field is healthier because of these influences and all the empirical work that has
followed.
TRAINING NEEDS ANALYSIS
It is well acknowledged that one of the most important steps in training development is conducting a training needs analysis: the process of deciding who and what should be trained. A training needs analysis is conducted to determine where training is needed, what needs to be taught, and who needs to be trained (Goldstein 1993). This phase
has several outcomes. One is the specification of learning objectives, which in
turn shape the design and delivery of training, as well as the process of criterion
development. Consistent with Tannenbaum & Yukl (1992), we found a limited
amount of empirical work on training needs analysis. In this section we discuss
two components: organizational analysis and job/task analysis. We briefly address
the third phase—person analysis.
Organizational Analysis
The purpose of an organizational analysis is to outline the systemwide components
of the organization that may affect the delivery of a training program (Goldstein
1993). That is, it focuses on the congruence of training objectives with such
factors as organizational goals, available resources, constraints, and support for
transfer. Unfortunately, many training programs fail to reach their goals because
of organizational constraints and conflicts, which could have been identified and
ameliorated before training was implemented. Hence, conducting an organizational analysis is an important first step in training design. The best treatment of this
topic can be found in Goldstein (1993).
Only recently have training researchers begun to pay attention to organizational
analysis. One study, conducted by Rouiller & Goldstein (1993) in a chain of
fast food restaurants, demonstrated that organizational climate (e.g. situational
cues, consequences) was a powerful predictor of whether trainees transferred the
learned skills. A second study conducted in a chain of supermarkets by Tracey
et al (1995) showed that organizational climate and culture were directly related to
posttraining behaviors. Clearly, these two studies illustrate how powerful an effect
the organizational environment can have on whether newly acquired knowledge,
skills, and attitudes (KSAs) are applied on the job (see also the transfer of training
section).
As job requirements change, so does the need to help organizations plan
their human resources activities. A number of issues have emerged that are related to organizational analysis. For example, authors have expressed the need
to understand how the organizational context influences human resources strategies (e.g. Tannenbaum & Dupuree-Bruno 1994) to enhance continuous learning
environments (e.g. London & Moore 1999), to manage knowledge effectively
(e.g. Tannenbaum 1997), and to determine the best organizational strategy
(e.g. who is responsible for training?) for learning and training (e.g. Martocchio &
Baldwin 1997). Obviously, organizational analysis is crucial for ensuring the
success of a training program. However, more research is needed to develop
practical and diagnostic tools to determine the organizational context relative to
training.
Job/Task Analysis
Historically, job/task analysis has been used to identify the information necessary
to create the learning objectives (Goldstein 1993). A job/task analysis results in a
detailed description of the work functions to be performed on the job, the conditions
under which the job is to be performed, and the KSAs needed to perform those tasks.
In the past decade, new research aimed at developing solid methods, approaches,
and knowledge elicitation techniques for determining training needs and requirements has emerged. For example, Arvey et al (1992) explored the use of task
inventories to forecast skills and abilities necessary for a future job. The results
indicated that such forecasting predictions could provide useful information for training analysis. Wilson & Zalewski (1994) tested an expert system program as
a way to estimate the levels of 11 abilities required for performing a job. The
results indicated that most incumbents preferred the expert system method to the
traditional ability rating scales.
Some attention has been given to better understanding the job/task analysis
process for training purposes. For example, Ford et al (1993) examined the impact
of task experience and individual factors on task ratings of training emphasis. Results indicated that as experience (and self-efficacy) increased over time, trainees tended to raise their ratings of training emphasis. Also, a few researchers have focused on outlining a task analysis procedure for teams (e.g. Baker
et al 1998, Bowers et al 1993, Blickensderfer et al 2000). Much more research is
needed, however. Specifically, we need to design and develop a methodology that
helps instructional designers and analysts uncover the team-based tasks and their
related KSAs.
Cognitive Task Analysis Cognitive task analysis refers to a set of procedures for
understanding the mental processing and mental requirements for job performance.
It has received much attention recently (see Dubois et al 1997/1998, Schraagen
et al 2000), fueled primarily by interest in understanding how trainees acquire and
develop knowledge, and how they organize rules, concepts, and associations (see
Zsambok & Klein 1997). In addition, research aimed at uncovering the nature of
expertise and how experts make decisions in complex natural environments has
led to the development of tools such as cognitive task analysis (see Salas & Klein
2000, Schraagen et al 2000).
Cognitive task analysis is based on techniques (e.g. verbal protocols) used by
cognitive scientists to elicit knowledge from subject matter experts. Products of a
cognitive task analysis include information for generating templates for mental model development, cues for fostering complex decision-making skills, cues for developing simulations and scenarios used during training, and information for designing performance measurement and feedback protocols. For example, Neerincx &
Griffioen (1996) applied a cognitive task analysis to assess the task load of jobs
and to provide indicators for the redesign of jobs. This new application showed
its benefits over the “old” task load analysis.
A cognitive task analysis can complement existing (behavioral) forms of
training-need analysis. For example, research on meta-cognition suggests that
through continued practice or experience, individuals automatize complex behaviors, thus freeing up cognitive resources for monitoring and evaluating behavior
(Rogers et al 1997). By determining trainees’ current complex cognitive skills,
instructional designers can gain insight into trainees’ capacity for proficiency and
diagnose performance deficiencies. Thus, training-needs analysis should identify
not only the requisite knowledge and skills to perform tasks, but also the cues and
cognitions that enable trainees to know when to apply those skills. By incorporating the development of these skills into training, instructional designers can provide
trainees with valuable self-help tools. Cognitive task analysis is becoming a useful
tool but needs more development. Specifically, a theoretically driven methodology
that clearly outlines the steps to take and how to analyze the data is needed.
We found no empirical work regarding the third phase of training-needs
analysis—person analysis. However, the emerging literature on 360° feedback may be relevant. This approach identifies individual strengths and weaknesses,
but perhaps more importantly, provides suggestions for improvement that often
revolve around training and development activities.
In summary, it is interesting to note that whereas most training researchers
believe and espouse that training-needs analysis is the most important phase in
training, this phase remains largely an art rather than a science. We need more
research that would enable us to develop a systematic methodology to determine
the training needs of organizations.
ANTECEDENT TRAINING CONDITIONS
Events that occur before training can be as important as (and in some cases more
important than) those that occur during and after training. Research has shown
that activities that occur prior to training have an impact on how effective training
turns out to be (Tannenbaum et al 1993). These factors fall into three general
categories: (a) what trainees bring to the training setting, (b) variables that engage
the trainee to learn and participate in developmental activities, and (c) how the
training can be prepared so as to maximize the learning experience. Each of these
is described in more detail below.
Individual Characteristics
Cognitive Ability Research aimed at understanding how the characteristics that trainees bring to the training environment influence learning has proliferated in
the past decade. For example, Ree et al (1995) developed a causal model showing the role of general cognitive ability and prior job knowledge in subsequent
job-knowledge attainment and work-sample performance during training. The
resulting model showed that ability influenced the attainment of job knowledge
directly, and that general cognitive ability influenced work samples through job
knowledge. Similar findings have been obtained by Ree & Earles (1991), Colquitt
et al (2000), Randel et al (1992), Quick et al (1996), Kacmar et al (1997), Warr &
Bunce (1995), and many others. Therefore, it is safe to conclude, based on this
body of evidence (and others as well, e.g. Hunter 1986), that g (general intelligence) is good—it promotes self-efficacy and performance, and it helps a great
deal with skill acquisition. Clearly, those who have high cognitive ability (all other
things being equal) will likely learn more and succeed in training.
At this point, we probably need to look more closely at low-ability trainees and
conduct research on how to optimize learning for them. Also, it might be worthwhile to examine in more depth concepts such as tacit knowledge and practical
intelligence (Sternberg 1997) and their relation to on-the-job learning. Finally, it
is worth noting that cognitive ability is a viable predictor of training performance
(i.e. learning), but not necessarily performance on the job. Many jobs have requirements that go beyond cognitive ability (e.g. psychomotor demands), and/or
depend on other factors (e.g. motivation) for success. Therefore, it is important
to understand the nature of the job to determine whether cognitive ability will be
a valid predictor of training transfer.
Self-efficacy This construct has been widely studied in this past decade. The
findings are consistent: Self-efficacy, whether one has it before or acquires it
during training, leads to better learning and performance. Self-efficacy (the belief that one can perform specific tasks and behaviors) is a powerful predictor of
performance, as has been shown time and time again (e.g. Cole & Latham 1997,
Eden & Aviram 1993, Ford et al 1998, Mathieu et al 1993, Martocchio 1994,
Martocchio & Webster 1992, Mathieu et al 1992, Quinones 1995, Mitchell et al
1994, Phillips & Gully 1997, Stevens & Gist 1997, Stajkovic & Luthans 1998).
Self-efficacy also mediates the relationship between training and newcomer adjustment outcomes, including job satisfaction, organizational commitment, and intention to quit the job (Saks 1995), as well as the relationship between conscientiousness and learning (Martocchio & Judge 1997). Self-efficacy has also
been shown to have motivational effects (e.g. Quinones 1995), to influence training
reactions (Mathieu et al 1992), and to dictate whether trainees will use training
technology (Christoph et al 1998).
In sum, it is well established that self-efficacy enhances learning outcomes and
performance. A research need that still remains is to expand what we know about
self-efficacy at the team level. Whereas a few studies have investigated collective
efficacy in training (e.g. Guzzo et al 1993, Smith-Jentsch et al 2000), considerably
more work is needed to better understand the mechanisms of collective efficacy in
learning as a means to raise team performance. In addition, it might be useful to
consider the use of self-efficacy as a deliberate training intervention (i.e. developing
training targeted at raising self-efficacy), as well as a desirable outcome of training
(i.e. as an indicator of training success).
Goal Orientation Goal orientation has received considerable attention in recent
years (e.g. Brett & VandeWalle 1999, Ford et al 1998, Phillips & Gully 1997).
This construct is broadly conceptualized as the mental framework individuals use to interpret and respond to learning- or achievement-oriented activities. Two
classes of goal orientation have been identified (Dweck 1986, Dweck & Leggett
1988): (a) mastery (or learning) goal orientation, whereby individuals seek to
develop competence by acquiring new skills and mastering novel situations, and
(b) performance goal orientation, whereby individuals pursue assurances of their
own competence by seeking good performance evaluations and avoiding negative
ones. There is a debate in the literature as to whether goal orientation is a disposition, a state, or both (e.g. Stevens & Gist 1997), whether it is a multidimensional
construct (e.g. Elliot & Church 1997, VandeWalle 1997), or whether these two goal strategies are mutually exclusive (Button et al 1996). Although continued
research will bring more conceptual clarity, recent studies have shown, in general,
that goal orientation influences learning outcomes and performance. For example,
Fisher & Ford (1998) found that a mastery orientation was a strong predictor of a
knowledge-based learning outcome. Ford et al (1998) showed that mastery orientation was positively related to the meta-cognitive activity of the trainee. Phillips &
Gully (1997) demonstrated that mastery goal orientation was positively related to
self-efficacy. All these results are promising. More research is needed to determine how goal orientation is developed. Specifically, it must be determined
whether goal orientation is a relatively stable trait, or if it can be modified prior to
training. Should the latter be true, then efforts to move trainees toward a mastery
orientation should be developed.
Training Motivation
Training motivation can be conceptualized as the direction, effort, intensity, and
persistence that trainees apply to learning-oriented activities before, during, and
after training (Kanfer 1991, Tannenbaum & Yukl 1992). Recently, several studies
have found (and confirmed) that trainees’ motivation to learn and attend training has an effect on their skill acquisition, retention, and willingness to apply
the newly acquired KSAs on the job (e.g. Martocchio & Webster 1992, Mathieu
et al 1992, Quinones 1995, Tannenbaum & Yukl 1992). Whereas the literature is,
in general, clear about the influence of training motivation on learning outcomes,
it has lacked some conceptual precision and specificity, and has been somewhat
piecemeal. An exception is a recent effort by Colquitt et al (2000) that has shed
light on the underlying processes and variables involved in understanding training
motivation throughout the training process. Their integrative narrative and meta-analytic review suggests that training motivation is multifaceted and influenced
by a set of individual (e.g. cognitive ability, self-efficacy, anxiety, age, conscientiousness) and situational (e.g. climate) characteristics. This effort provides the
beginnings of an integrative theory of training motivation—a much needed synthesis and organization.
A number of important implications for research and practice can be drawn from
Colquitt et al’s review. For example, they point out the need to assess trainees’
personality during training-needs analysis, a much neglected or ignored assessment
during person analysis. In fact, Colquitt et al also called for expanding the kinds of personality variables examined in recent years to include emotions,
adaptability, trait goal orientation, and other Big Five variables. Another important
implication is the link they found between age and motivation to learn—older
workers showed lower motivation, learning, and post-training efficacy. However,
it may be prudent to interpret this conclusion cautiously because a host of other
issues must be considered when discussing the training needs of older workers
(including the design of training itself). Clearly, in an era of technology-driven
instruction and an aging workforce, a challenge for instructional developers will be
to design learning environments where older trainees can be trained (and retrained)
with ease.
In the future, we also need to continue gaining a deeper understanding of training motivation because it is crucial for learning and has direct implications for
the design and delivery of training. Future work should consider those factors
that influence training motivation for job development activities and for situations in which workers acquire new skills through informal learning mechanisms.
Longitudinal studies are also needed.
Training Induction and Pretraining Environment
Considerable research has gone into understanding which factors help trainees to
optimize the benefits of training. These are usually interventions employed before
training to ensure that the trainee gets the most out of the learning experience.
Prepractice Conditions It is well documented that practice is a necessary condition for skill acquisition. However, not all practice is equal. In fact, the precise
nature of practice and its relationship to learning outcomes have been largely ignored or misunderstood. Recent thinking and research are beginning to suggest that
practice may be a complex process, not simply task repetition (e.g. Ehrenstein
et al 1997, Shute & Gawlick 1995, Schmidt & Bjork 1992); we address this
issue further in Specific Learning Approaches. For example, Cannon-Bowers et al
(1998) provided a framework for delineating the conditions that might enhance the
utility and efficacy of practice in training. They drew from the literature a number
of interventions (e.g. meta-cognitive strategies, advanced organizers, and preparatory information) that can be applied before actual practice as a way to prepare
the trainee for training. Empirical verification of these interventions needs to be
conducted.
The Pretraining Environment and Climate Can pretraining contextual factors
also affect learning outcomes? Recent research suggests that the manner in which
the organization frames the training and the nature of trainees’ previous experiences
in training do influence learning outcomes. For example, Quinones (1995) demonstrated that the manner in which training was framed (i.e. as advanced or remedial)
influenced training motivation and learning (see also Quinones 1997). Martocchio
(1992), who labeled the training assignment as an “opportunity,” showed similar
findings. Smith-Jentsch et al (1996a) demonstrated that trainees’ previous experiences with training (e.g. prior negative events) affected learning and retention.
Baldwin & Magjuka (1997) explored the notion of training as an organizational
episode and laid out a framework of other pretraining contextual factors (e.g. voluntary versus mandatory attendance) that may influence motivation to learn. These
studies suggest that experience with training (both task-based and event-based) is
important to subsequent training outcomes. This is certainly a neglected area, and
one in which much more work is needed. Specifically, we need to know how
these experiences shape self-efficacy, expectations about the training, motivation
to learn and apply skills on the job, and learning.
TRAINING METHODS AND INSTRUCTIONAL
STRATEGIES
Instructional strategies are defined as a set of tools (e.g. task analysis), methods
(e.g. simulation), and content (i.e. required competencies) that, when combined,
create an instructional approach (Salas & Cannon-Bowers 1997). Most effective
strategies are created around four basic principles: (a) they present relevant information or concepts to be learned; (b) they demonstrate the KSAs to be learned; (c) they create opportunities for trainees to practice the skills; and (d) they provide feedback to trainees during and after practice. Because there is no single
method to deliver training, researchers continue to address how to best present targeted information to trainees. Specifically, researchers are seeking cost-effective,
content-valid, easy-to-use, engaging, and technology-based methods (e.g. Baker
et al 1993, Bretz & Thompsett 1992, Steele-Johnson & Hyde 1997). In the next
section, we review research related to instructional strategies in several major
categories. First, we review specific learning approaches and learning technologies and distance training. Next, we cover simulation-based training and games,
followed by a review of recent work in team training.
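The four design principles above lend themselves to a simple audit sketch. Everything here (the labels, the function name) is illustrative rather than drawn from the chapter:

```python
# Illustrative sketch: the four instructional-design principles from
# Salas & Cannon-Bowers (1997) expressed as an audit checklist for a
# proposed training design. Labels and function names are invented.
PRINCIPLES = (
    "present information",  # (a) relevant information or concepts
    "demonstrate KSAs",     # (b) model the knowledge, skills, attitudes
    "provide practice",     # (c) opportunities to practice the skills
    "provide feedback",     # (d) feedback during and after practice
)

def audit_design(components):
    """Return the principles a proposed design does not yet cover."""
    covered = set(components)
    return [p for p in PRINCIPLES if p not in covered]

# A lecture-plus-demonstration design covers only the first two principles.
missing = audit_design(["present information", "demonstrate KSAs"])
```

Run against a complete design, the audit returns nothing, which is the point of the framework: an instructional strategy is complete only when all four components are present.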
Specific Learning Approaches
Traditionally, training researchers have investigated how to optimize learning
and retention by manipulating feedback, practice intervals, reinforcement schedules, and other conditions within the learning process itself. In this regard, Fisk,
Kirlik, and colleagues (Kirlik et al 1998) improved performance in a complex
decision-making task by training consistently mapped components of the task to
automaticity (i.e. so that they could be performed with little or no active cognitive
control). Along these same lines, Driskell et al (1992) conducted a meta-analysis
of the effects of overlearning on retention. The results of their analysis showed that
overlearning produces a significant effect on retention which, in turn, is moderated
by the degree of overlearning, length of retention period, and type of task.
Attention has also been focused on developing collaborative training protocols.
This is distinguished from team training (see below) by the fact that team training
applies to training competencies that are required for performance of a team task.
Collaborative learning, on the other hand, refers to situations where trainees are
trained in groups, but not necessarily to perform a team task. The idea is that there
are features of group interaction that benefit the learning process (e.g. the opportunity for vicarious learning or interaction with peers). For example, Arthur et al
(1997) provided strong support and justification for the ongoing use of innovative
dyadic protocols (i.e. training two trainees at once) for the training of pilots and
navigators in both military and nonmilitary settings. However, Arthur et al (1996)
showed that the comparative effectiveness of dyadic versus individual protocols
for computer-based training is moderated by trainees’ level of interaction anxiety,
with only low interaction anxiety trainees benefiting from dyadic protocols (see
also Arthur et al 1997). Collaborative protocols have also been shown to reduce
required instructor time and resources by half (Shebilske et al 1992), and to provide observational learning opportunities that compensate for hands-on practice
efficiently and effectively, as predicted by social learning theory (Shebilske et al
1998).
Researchers have also studied the conditions of practice as they relate to learning. For example, Goettl et al (1996) compared an alternating task module protocol,
which alternated sessions on video game–like tasks and algebra word problems,
with a massed protocol, which blocked sessions on the tasks. The findings showed
that alternating task modules provided an advantage in learning and retention in
both the video games and algebra word problems.
Along these lines, Bjork and colleagues (Schmidt & Bjork 1992, Ghodsian
et al 1997) have provided an interesting reconsideration of findings regarding practice schedules. These authors argued that introducing difficulties for the learner
during practice will enhance transfer (but not necessarily immediate posttraining
performance). By reinterpreting data from several studies,
Schmidt & Bjork (1992) provided a compelling case for a new approach to arranging practice. This approach includes introducing variation in the way tasks are
ordered for practice, in the nature and scheduling of feedback, and in the versions of
the task to be practiced, and also by providing less frequent feedback. In all cases,
the authors argue that even though acquisition (i.e. immediate) performance may
be decreased, retention and generalization are enhanced owing to additional—
and most likely deeper—information processing requirements during practice.
Shute & Gawlick (1995) supported this conclusion in an investigation of computer-based training for flight engineering knowledge and skill.
In other work, Driskell and colleagues (Driskell & Johnston 1998, Johnston &
Cannon-Bowers 1996; JE Driskell, E Salas, JH Johnston, submitted) have investigated the use of stress-exposure training (SET) as a means to prepare trainees
to work in high stress environments. SET, which is based on clinical research
into stress inoculation, has several phases. In the first phase, trainees are provided
with preparatory information that includes a description of which stressors are
likely to be encountered in the environment and what the likely impact of those
stressors on the trainee will be. The second phase—skill acquisition—focuses on
behavioral- and cognitive-skills training designed to help trainees cope with the
stress. In the final phase, application and practice of learned skills is conducted
under conditions that gradually approximate the stress environment. Results of
investigations of this protocol have indicated that SET is successful in reducing
trainees’ subjective perception of stress, while improving performance. Moreover,
the effects of SET generalized to novel stressors and tasks.
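The three-phase SET sequence can be summarized in a small sketch. The phase names follow the text; the data structure and helper function are assumptions for illustration:

```python
# Sketch of the three-phase stress-exposure training (SET) sequence
# described by Driskell & Johnston (1998). Phase names follow the text;
# the structure and helper function are illustrative assumptions.
SET_PHASES = [
    ("preparatory information",
     "describe likely stressors and their expected impact on the trainee"),
    ("skill acquisition",
     "train behavioral and cognitive coping skills"),
    ("application and practice",
     "practice learned skills under gradually increasing stress"),
]

def next_phase(current):
    """Return the phase that follows `current`, or None after the last one."""
    names = [name for name, _ in SET_PHASES]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```

The ordering matters: graduated approximation of the stress environment comes only after coping skills have been acquired.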
Learning Technologies and Distance Training
There is no doubt that technology is shaping how training is delivered in organizations. While still relying heavily on classroom training, organizations have begun
to explore technologies such as video conferencing, electronic performance support systems, video discs, and on-line Internet /Intranet courses. Indeed, Web-based
training may make “going-to” training obsolete. It is being applied in education,
industry, and the military at an alarming rate (these days, one can get a PhD through
the Web). What is probably more alarming is that this implementation is happening without much reliance on the science of training. Many issues about how
to design distance learning systems remain open. Theoretically-based research
is needed to uncover principles and guidelines that can aid instructional designers in building sound distance training. A few have begun to scratch the surface
(e.g. Schreiber & Berge 1998) of this topic, but a science of distance learning and
training needs to evolve. Specifically, it must be determined what level of interaction is needed between trainees and instructors. Moreover, the nature of such
interaction must be specified. For example, do instructors need to see trainees in
order to conduct effective instruction? Do trainees need to see instructors or is it
better for them to view other material? What is the best mechanism for addressing trainee questions (e.g. through chat rooms or e-mail)? Should learners have
control over the pace and nature of instruction [some evidence from studies of
computer-based training support the use of learner control (see Shute et al 1998),
but the extent of its benefits for distance learning is not known]? These and
other questions must be addressed as a basis to develop sound distance-training
systems.
Advances in technology are also enabling the development of intelligent tutoring systems that have the potential to reduce or eliminate the need for human
instructors for certain types of learning tasks. Early indications are that intelligent software can be programmed to successfully monitor, assess, diagnose, and
remediate performance in tasks such as computer programming and solving algebra problems (e.g. see Anderson et al 1995). As this technology becomes more
widely available (and less costly to develop), it may provide organizations with a
viable alternative to traditional computer-based or classroom training.
Simulation-Based Training and Games
Simulation continues to be a popular method for delivering training. Simulators are
widely used in business, education, and the military (Jacobs & Dempsey 1993).
In fact, the military and the commercial aviation industry are probably the biggest
investors in simulation-based training. These simulations range in cost, fidelity,
and functionality. Many simulation systems (including simulators and virtual environments) have the ability to mimic detailed terrain, equipment failures, motion,
vibration, and visual cues about a situation. Others are less sophisticated and have
less physical fidelity, but represent well the KSAs to be trained (e.g. Jentsch &
Bowers 1998). A recent trend is to use more of these low-fidelity devices to train
complex skills. There is also more evidence that skills transfer after training
that uses these simulations (e.g. MT Brannick, C Prince, E Salas, unpublished
manuscript; Gopher et al 1994). For example, some researchers are studying the
viability of computer games for training complex tasks. Gopher et al (1994) tested
the transfer of skills from a complex computer game to the flight performance of
cadets in the Israeli Air Force flight school. They argued that the context relevance
of the game to flight was based on a skill-oriented task analysis, which used information provided by contemporary models of the human processing system as
the framework. Flight performance scores of two groups of cadets who received
10 hours of training in the computer game were compared with a matched group
with no game experience. Results showed that the groups with game experience
performed much better in subsequent test flights than did those with no game experience. Jentsch & Bowers (1998) and Goettl et al (1996) have reported similar
findings.
Precisely why simulation and simulators work is not well known. A few studies
have provided preliminary data (e.g. Bell & Waag 1998, Jentsch & Bowers 1998,
Ortiz 1994), but there is a somewhat misleading conclusion that simulation (in and
of itself) leads to learning. Unfortunately, most of the evaluations rely on trainee
reaction data and not on performance or learning data (see Salas et al 1998). More
systematic and rigorous evaluations of large-scale simulations and simulators are
needed. Nonetheless, the use of simulation continues at a rapid pace in medicine,
maintenance, law enforcement, and emergency management settings. However,
some have noted (e.g. Salas et al 1998) that simulation and simulators are being
used without much consideration of what has been learned about cognition, training design, or effectiveness. There is a growing need to incorporate the recent
advances in training research into simulation design and practice. Along these
lines, some have argued for an event-based approach to training with simulations
(Cannon-Bowers et al 1998, Oser et al 1999, Fowlkes et al 1998). According
to this perspective, simulation-based training should be developed with training
objectives in mind, and allow for the measurement of training process and outcomes, and provisions for feedback (both during the exercise and for debriefing
purposes).
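The event-based logic just described can be sketched minimally: each scripted event carries a training objective and an expected response, so process measurement and debriefing feedback fall out of the scenario itself. The scenario content and field names below are invented for illustration:

```python
# Minimal sketch of event-based simulation training (e.g. Fowlkes et al
# 1998): scripted trigger events are tied to training objectives and
# measurable expected responses. Event content and field names are invented.
scenario = [
    {"event": "engine warning light", "objective": "malfunction diagnosis",
     "expected": "run abnormal-procedures checklist"},
    {"event": "conflicting ATC call", "objective": "assertive communication",
     "expected": "query and confirm clearance"},
]

def score_exercise(observed):
    """Score each scripted event against its expected response, yielding
    process data usable for in-exercise feedback and for debriefing."""
    return [{"objective": e["objective"],
             "hit": observed.get(e["event"]) == e["expected"]}
            for e in scenario]
```

Because every event is linked to an objective before the exercise runs, the resulting scores are process measures tied to training objectives rather than post hoc impressions.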
In related work, Ricci et al (1995) investigated the use of a computer-based
game to train chemical, biological, and radiological defense procedures. In this
case, the game was not a simulation (as discussed above), but a computer-based
slot machine that presented trainees with questions about the material. Trainees
earned points for correct answers, and received corrective feedback for incorrect
ones. The authors argued that motivation to engage in this type of presentation
(over text-based material) would result in higher learning. Results indicated that
reactions and retention (but not immediate training performance) were higher for
the game condition.
Behavior role modeling is another type of simulation-based training that has
received attention over the years. Recently, Skarlicki & Latham (1997) found
that a training approach that included role-playing, and other elements of behavior modeling, was successful in training organizational citizenship behavior in a
labor union setting. Similarly, Smith-Jentsch et al (1996b) found that a behavior modeling approach emphasizing practice (i.e. role playing) and performance
feedback was superior to a lecture only or lecture with demonstration format for
training assertiveness skills. Also studying assertiveness, Baldwin (1992) found
that behavioral reproduction (i.e. demonstrating assertiveness in a situation that
was similar to the training environment) was best achieved by exposing trainees
only to positive model displays. Conversely, the combination of both positive and
negative model displays was most effective in achieving behavioral generalization
(i.e. applying the skill outside of the training simulation) four weeks later.
Team Training
As noted by Guzzo & Dickson (1996) and Tannenbaum & Yukl (1992), teams are
heavily used in industry, government, and the military. Therefore, these organizations have invested some resources in developing teams (Tannenbaum 1997). A
number of theoretically-driven team training strategies have emerged. These include cross-training (Blickensderfer et al 1998), team coordination training (Prince & Salas 1993), team leadership training (Tannenbaum et al 1998), team self-correction (Smith-Jentsch et al 1998), and distributed team training (Dwyer et al
1999). All of these have been tested and evaluated with positive results (see
Cannon-Bowers & Salas 1998a).
The aviation community has arguably been the biggest advocate and user of
team training (see Helmreich et al 1993). The airlines and the military have extensively applied a strategy labeled crew resource management (CRM) training.
This strategy has a 20-year history in the aviation environment. It is used as a
tool to improve teamwork in the cockpit but, more importantly, to reduce human
error, accidents, and mishaps (Helmreich & Foushee 1993). CRM training has
gone through several evolutions (Helmreich et al 1999) and is maturing. Systematic
procedures for designing and developing CRM training have been developed (Salas
et al 1999), and the evaluation data are encouraging (see Leedom & Simon 1995,
Salas et al 1999, Stout et al 1997). CRM training seems to work by changing the
crew’s attitudes toward teamwork and by imparting the relevant team competencies. Crews that have received CRM training exhibit more teamwork behaviors in
the cockpit (Salas et al 1999, Stout et al 1997). However, more and better evaluations are needed. Most of the evaluations conducted have been in simulation
environments. Only recently have evaluations begun to determine the transfer of
this training to the actual cockpit.
Other research in team training has focused on developing strategies to train
specific competencies (see also Cannon-Bowers & Salas 1998b). For example,
Smith-Jentsch et al (1996b) examined determinants of team performance–related
assertiveness in three studies. These studies concluded that, whereas both attitudinally focused and skill-based training improved attitudes toward team member
assertiveness, practice and feedback were essential to producing behavioral effects.
In addition, Volpe et al (1996) used shared mental model theory (Cannon-Bowers
et al 1993) as a basis to examine the effects of cross-training and workload on performance. The results indicated that those who received cross-training were more
effective in teamwork processes, communication, and overall team performance.
In sum, the literature has begun to provide evidence that team training works. It
works when the training is theoretically driven, focused on required competencies,
and designed to provide trainees with realistic opportunities to practice and receive
feedback. Also, guidelines for practitioners have emerged (e.g. Swezey & Salas
1992, Salas & Cannon-Bowers 2000), tools for designing team training strategies
have surfaced (e.g. Bowers et al 1993), and a number of strategies are now available
for team training (e.g. Cannon-Bowers & Salas 1997). In the future, we need
to know more about how to diagnose team cognitions during training. We also
need better and more rigorous measurement protocols to assess shared knowledge.
In addition, we need to understand better the mechanisms by which to conduct
effective distributed team training.
POST-TRAINING CONDITIONS
Events that occur after training are as important as those that occur before and
during training. Therefore, recent research has focused on improving the methods
and procedures we use to evaluate training, and on examining the events that ensure
transfer and application of newly acquired KSAs. In examining this body of work,
we get a clear sense that it is in these two areas that we have probably made the
most significant progress. There are theoretical, methodological, empirical, and
practical advances. Not all the issues have been solved, but meaningful advances
have been made in the last decade. This is very encouraging. We first look at
training evaluation and then at transfer of training.
Training Evaluation
Kirkpatrick’s Typology and Beyond Kirkpatrick’s typology (Kirkpatrick 1976)
continues to be the most popular framework for guiding evaluations. However,
recent work has either expanded it or pointed out weaknesses, such as the need to
develop more diagnostic measures. For example, Kraiger et al (1993) proposed
a multi-dimensional view of learning, implying that learning refers to changes in
cognitive, affective, and/or skill-based outcomes. The proposed taxonomy can be
used to assess and document learning outcomes. In a meta-analysis of studies employing Kirkpatrick’s model, Alliger et al (1997) noted that utility-type reaction
measures were more strongly related to learning and performance (transfer) than
affective-type reaction measures. Surprisingly, they also found that utility-type reaction measures are more predictive of transfer than learning measures. Kraiger &
Jung (1997) suggested several processes by which learning outcomes can be derived from instructional objectives of training. Goldsmith & Kraiger (1997) proposed a method for structural assessment of an individual learner’s knowledge and
skill in a specific domain. This model has been used with some success in several
domains (e.g. Kraiger et al 1995, Stout et al 1997).
Clearly, Kirkpatrick’s typology has served as a good foundation for training
evaluation for many decades (Kirkpatrick 1976). It has been used, criticized, misused, expanded, refined, adapted, and extended. It has served the training research
community well—but a richer, more sophisticated typology is needed. Research
needs to continue finding better, more diagnostic and rigorous assessments of
learning outcomes. The next frontier and greatest challenge in this area is in designing, developing, and testing on-line assessments of learning and performance.
As we rely more on technology for training delivery, we need better and more
protocols to assess learning not only after but also during training (e.g. Ghodsian
et al 1997).
Evaluation Design Issues Training evaluation is one of those activities that is
easier said than done. Training evaluation is labor intensive, costly, political, and
many times is the bearer of bad news. We also know that it is very difficult to
conduct credible and defensible evaluations in the field. Fortunately, training researchers have derived and tested thoughtful, innovative and practical approaches
to aid the evaluation process. For example, Sackett & Mullen (1993) proposed
other alternatives (e.g. posttesting-only, no control group) to formal experimental
designs when answering evaluation questions. They suggested that those questions
(e.g. How much change has occurred? What target performance has been reached?)
should drive the evaluation mechanisms needed, and that each requires different
designs. Haccoun & Hamtiaux (1994) proposed a simple procedure for estimating
effectiveness of training in improving trainee knowledge—the internal referencing
strategy. The strategy tests the implicit training evaluation notion that training-relevant content should show more change (pre-post) than training-irrelevant content. An empirical evaluation using the internal referencing strategy versus a more
traditional experimental evaluation indicated that the internal referencing strategy
approach might permit inferences that mirror those obtained by the more complex
designs.
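The internal referencing logic can be illustrated with a toy computation (all scores invented): training-relevant items should show a larger pre-post gain than training-irrelevant items embedded in the same test.

```python
# Toy illustration of the internal referencing strategy (Haccoun &
# Hamtiaux 1994). All scores are invented; training is credited with an
# effect only when training-relevant items improve more than
# training-irrelevant items on the same pre/post test.
def mean_gain(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

relevant_gain = mean_gain(pre=[4, 5, 3, 6], post=[8, 9, 7, 9])    # trained content
irrelevant_gain = mean_gain(pre=[5, 4, 6, 5], post=[5, 5, 6, 6])  # untrained content

training_effect = relevant_gain > irrelevant_gain
```

The irrelevant items act as an internal control, absorbing retest and maturation effects that would otherwise inflate a simple pre-post comparison.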
The costs of training evaluation have also been addressed recently. Yang et al
(1996) examined two ways to reduce costs. The first is to assign different numbers of subjects to the training and control groups: an unequal group-size design with a larger total sample size may achieve the same level of statistical power at lower cost. The second is to substitute a less expensive proxy criterion measure for the target criterion when evaluating training effectiveness. Using a proxy increases the sample size needed to
achieve a given level of statistical power. The authors described procedures for
examining the tradeoff between the costs saved by using the less expensive proxy
criterion and the costs incurred by the larger sample size. Similar suggestions have
been made by Arvey et al (1992).
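The unequal-group-size argument rests on simple arithmetic: the precision of a two-group mean comparison scales with sqrt(1/n1 + 1/n2), so when trained subjects cost more than controls, shifting subjects toward the cheaper control group can hold precision while cutting cost. The per-subject cost figures below are invented for illustration:

```python
import math

# Sketch of the cost tradeoff examined by Yang et al (1996). Precision of
# a two-group mean comparison scales with sqrt(1/n_train + 1/n_control);
# the per-subject costs below are invented for illustration.
def se_factor(n_train, n_control):
    return math.sqrt(1 / n_train + 1 / n_control)

def cost(n_train, n_control, c_train=100.0, c_control=10.0):
    return n_train * c_train + n_control * c_control

equal = (50, 50)    # 50 trained subjects, 50 controls
unequal = (40, 90)  # fewer costly trainees, more cheap controls

# The unequal design is at least as precise here, yet cheaper overall.
precision_ok = se_factor(*unequal) <= se_factor(*equal)
cheaper = cost(*unequal) < cost(*equal)
```

With a larger total sample (130 versus 100) but ten fewer expensive training slots, the unequal design matches the equal split's precision at lower total cost, which is the tradeoff Yang et al formalized.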
Evaluations It is refreshing to see that more evaluations are being reported in the
literature; we hope this trend continues. It is only by drawing lessons learned from
past evaluations that the design and delivery of training will continue to progress.
Several field training evaluations have been reported in team training settings
(e.g. Leedom & Simon 1995, Salas et al 1999), sales training (e.g. Morrow et al
1997), stress training (e.g. Friedland & Keinan 1992), cross-cultural management
training (e.g. Harrison 1992), transformational leadership training (e.g. Barling
et al 1996), career self-management training (e.g. Kossek et al 1998), workforce diversity training (e.g. Hanover & Cellar 1998) and approaches to computer training
(e.g. Simon & Werner 1996). All suggest that training works. However, an examination of evaluations where training did not work is also needed (and we suspect
there are many). Some important lessons can be learned from these types of
evaluations as well.
Transfer of Training
Transfer of training is conceptualized as the extent to which KSAs acquired in a
training program are applied, generalized, and maintained over some time in the
job environment (Baldwin & Ford 1988). There has been a plethora of research
and thinking in the transfer of training area (see Ford & Weissbein 1997). This
emerging body of knowledge suggests a number of important propositions and
conclusions. For example, (a) the organizational learning environment can be reliably measured and varies in meaningful ways across organizations (Tannenbaum
1997); (b) the context matters (Quinones 1997)—it sets motivations, expectations,
and attitudes for transfer; (c) the transfer “climate” can have a powerful impact on
the extent to which newly acquired KSAs are used back on the job (e.g. Tracey
et al 1995, Thayer & Teachout 1995); (d) trainees need an opportunity to perform (Ford et al 1992, Quinones et al 1995); (e) delays between training and actual use on the job create significant skill decay (Arthur et al 1998); (f) situational
cues and consequences predict the extent to which transfer occurs (Rouiller &
Goldstein 1993); (g) social, peer, subordinate, and supervisor support all play a
central role in transfer (e.g. Facteau et al 1995, Tracey et al 1995); (h) training can
generalize from one context to another (e.g. Tesluk et al 1995); (i) intervention
strategies can be designed to improve the probability of transfer (e.g. Brinkerhoff &
Montesino 1995, Kraiger et al 1995); (j) team leaders can shape the degree
of transfer through informal reinforcement (or punishment) of transfer activities
(Smith-Jentsch et al 2000); (k) training transfer needs to be conceptualized as a
multidimensional construct—it differs depending on the type of training and closeness of supervision on the job (Yelon & Ford 1999).
As noted by Ford & Weissbein (1997), much progress has been made in this
area. There are more studies using complex tasks with diverse samples that actually
measure transfer over time. However, much more is needed. Specifically, we need
more studies that actually manipulate the transfer climate (e.g. Smith-Jentsch et al
2000). The measurement problems remain. Most studies still use surveys as the
preferred method for measuring transfer. Other methods need to be developed and
used. Finally, we need to assume that learning outcomes at the individual level will
emerge to influence higher level outcomes. Vertical transfer of training is the next
frontier. Vertical transfer may be a leverage point for strengthening the links between learning outcomes and organizational effectiveness (see Kozlowski et al
2000).
Taken together, these studies validate the importance of the organizational environment in training. In the future we need to continue to determine which factors
affect transfer so that we can maximize it.
FINAL OBSERVATIONS, CONCLUSIONS,
AND THE FUTURE
In closing, we draw on the extensive literature just reviewed to offer the
following observations:
1. As Tannenbaum & Yukl (1992) predicted, the quality and quantity of
research has increased. We truly have seen an explosion of theoretical,
methodological, and empirical work in training research, and we do not see
an end to this trend. This is very encouraging, and we believe this body of
work will pay off as we learn more about how to design and deliver
training systems. Therefore, we contend that training research is here to
stay and prosper.
2. The progress in theoretical development, especially the attention given to
cognitive and organizational concepts, is revolutionizing the field. These
new developments promise to change how we conceptualize, design,
implement, and institutionalize learning and training in organizations. In
the future, we will need a deeper understanding of these concepts, we must
strive for more precision and clarity of constructs, and our methods must
be more rigorous.
3. The body of literature generated over the past decade suggests that the
field does not belong to any single discipline anymore. In the past,
industrial/organizational and educational psychologists primarily
conducted training research. A closer look at the literature now suggests
that cognitive, military, engineering, human factors, and instructional
psychologists are involved in training research to an equal degree. In fact,
computer scientists and industrial engineers are also researching learning,
training technology, and training systems. As many others have observed,
we need more cross-fertilization, collaboration, and dialogue among
disciplines. To start, we need to read each other’s work, and leverage each
other’s findings, ideas, and principles.
4. Technology has influenced—and will continue to do so for the foreseeable
future—the design and delivery of training systems. Whether we like it or
not, technology has been embraced in industrial, educational, and military
institutions as a way to educate and train their workforces. Technology
may, or may not, have instructional features because it is often employed
without the benefit of findings from the science of training. However, as
we learn more about intelligent tutoring systems, modeling and simulation,
multimedia systems, learning agents, Web-based training, distance
learning, and virtual environments, this state of affairs may change. It is
encouraging that basic and applied research is currently going on to
uncover how these technologies enhance learning and human performance
(e.g. Cannon-Bowers et al 1998). More research is needed, and the
prospects of it happening are very promising. Specifically we need to
know more about how to best present knowledge over the Internet, how
and when to provide feedback, which instructional strategies are best for
Web-based applications, what role instructors and trainees play in these
modern systems, and how effectiveness can best be evaluated.
5. The distinction between training effectiveness and training evaluation is
much clearer. Kraiger et al (1993) provided the seed for this important
distinction. Training effectiveness is concerned with why training works
and it is much more “macro” in nature. That is, training effectiveness
research looks at the training intervention from a systems
perspective—where the success of training depends not only on the method
used but on how training (and learning) is positioned, supported, and
reinforced by the organization; the motivation and focus of trainees; and
what mechanisms are in place to ensure the transfer of the newly acquired
KSAs to the job. Training evaluation, on the other hand, examines what
works and is much more “micro” (i.e. focused on measurement). It looks
at what was learned at different levels and is the basis for determining the
training effectiveness of a particular intervention. This distinction has
made some significant contributions to practice possible and, more
importantly, is helping avoid the simplistic view of training (i.e. that
training is just a program or curriculum rather than the complex interaction
of many organizational factors). More research aimed at uncovering why
training works is, of course, desirable.
6. Much more attention has been given to discussing training as a system
embedded in an organizational context (e.g. Dipboye 1997, Kozlowski &
Salas 1997, Kozlowski et al 2000, Tannenbaum & Yukl 1992). This is
refreshing and welcome. For many decades, training researchers have
ignored the fact that training cannot be isolated from the system it
supports. In fact, the organizational context matters (e.g. Quinones 1997,
Rouiller & Goldstein 1993) and matters in a significant way. Research
aimed at studying how organizations implement training and why even the
best-designed training systems can fail is encouraged.
7. Research has begun to impact practice in a more meaningful and, it is to be
hoped, quantifiable way. We can offer principles and guidelines to organizations regarding how to analyze, design, develop, implement, and evaluate
training functions. Much needs to be done, but it is only through mutual
reciprocity—science and practice—that real progress will be made (see Salas
et al 1997). As already stated, we have seen more evaluations conducted,
there are more guidelines for designers and practitioners, and viable
strategies that seem to impact organizational outcomes and the link between
learning and performance are more tangible today (Bassi & Van Buren 1999).
A number of training issues need considerable attention in the next few years
(in addition to the ones we have noted throughout this chapter). In particular, we
need research that helps us get a better understanding of what, how, and when on-the-job training works. On-the-job training is a common practice in organizations,
but few principles and guidelines exist on how to optimize this strategy. We need a
deeper understanding of how to build expertise and adaptability through training.
Although some work has started (e.g. Kozlowski 1998, Smith et al 1997), longitudinal studies in the field are desirable. How learning environments are created
and maintained in organizations needs to be researched and better understood. A
related issue is how, and under what circumstances, individuals and teams learn
from informal organizational activities. In addition, as the workforce becomes older
and more diverse, more attention must be paid to the special training needs of
nontraditional workers (especially given the anticipated reliance on high-tech
systems). Moreover, as organizations allow more flexibility in how work is
accomplished (e.g. telecommuting), training practices must keep pace. In fact,
organizations will increasingly depend on workers who can develop, maintain,
and manage their own skills, requiring attention to the challenge of how to develop and attract self-directed learners. Finally, as noted, training researchers
need to embrace and investigate new technologies. We know that organizations
will; we hope that new developments in training are driven by scientific findings
rather than the bandwagon.
In conclusion, we are happy to report that, contrary to charges made by our
predecessors over the years, training research is no longer atheoretical, irrelevant,
or dull. Exciting advances in all areas of the training enterprise have been realized. Training research has also been called faddish, a characteristic we hope is
beginning to fade as well. However, we wonder whether there is compelling evidence to suggest that training practitioners in organizations are actually applying
what has been learned from the research. This brings us back to a question we
raised at the beginning of this chapter; namely, to what degree does the science
of training affect organizational training practices? In other words, can we find
evidence that organizations are implementing the lessons being learned from training research (especially the work reviewed here), or are practitioners still prone
to latch on to the latest training craze? The answer is, quite simply, that we just
do not know. This is due, at least in part, to the fact that detailed records documenting training practices (and more importantly, the rationale that went into
developing them) are not typically available. However, one thing seems clear:
The stage for the application of training research is set. We say this because, as
noted, organizations are beginning to question the value-added of human resource
activities (including training), and to pay more attention to human capital. Simply put, organizations want to know what the return is on their training investment.
Assuming this trend continues, it should force training professionals to turn to
the science of training for empirically verified guidelines regarding how to optimize
training outcomes (including transfer), and how to evaluate whether training has
been effective in reaching organizational goals. As the pressure grows to show
an impact on the bottom line, training practitioners will do well to employ sound
principles, guidelines, specifications, and lessons learned from the literature, rather
than relying on a trial-and-error approach. For this reason, we believe a new era of
training has begun—one in which a truly reciprocal relationship between training
research and practice will be realized.
ACKNOWLEDGMENTS
We thank Clint Bowers, Ken Brown, Irv Goldstein, Steve Kozlowski, Kevin Ford,
John Mathieu, Ray Noe, Paul Thayer, Scott Tannenbaum, and Will Wooten for
their valuable comments and suggestions on earlier drafts of this chapter. We
were aided in the literature review by two doctoral students in the Human Factors
program from the University of Central Florida: Katherine Wilson and Shatha
Samman.
Visit the Annual Reviews home page at www.AnnualReviews.org
LITERATURE CITED
Alliger GM, Tannenbaum SI, Bennett W,
Traver H, Shotland A. 1997. A meta-analysis
of the relations among training criteria. Pers.
Psychol. 50:341–58
Anderson JR, Corbett AT, Koedinger KR, Pelletier R. 1995. Cognitive tutors: lessons
learned. J. Learn. Sci. 4:167–207
Arthur W, Bennett W, Stanush PL, McNelly
TL. 1998. Factors that influence skill decay
and retention: a quantitative review and analysis. Hum. Perform. 11:79–86
Arthur W, Day EA, Bennett W, McNelly TL,
Jordan JA. 1997. Dyadic versus individual
training protocols: loss and reacquisition of
a complex skill. J. Appl. Psychol. 82:783–91
Arthur W, Young B, Jordan JA, Shebilske WL.
1996. Effectiveness of individual and dyadic
training protocols: the influence of trainee
interaction anxiety. Hum. Factors 38:79–86
Arvey RD, Salas E, Gialluca KA. 1992. Using
task inventories to forecast skills and abilities. Hum. Perform. 5:171–90
Baker D, Prince C, Shrestha L, Oser R, Salas
E. 1993. Aviation computer games for crew
resource management training. Int. J. Aviat.
Psychol. 3:143–56
Baker D, Salas E, Cannon-Bowers J. 1998.
Team task analysis: lost but hopefully not
forgotten. Ind. Organ. Psychol. 35:79–83
Baldwin TT. 1992. Effects of alternative modeling strategies on outcomes of interpersonal-skills training. J. Appl. Psychol. 77:147–54
Baldwin TT, Ford JK. 1988. Transfer of training: a review and directions for future research. Pers. Psychol. 41:63–105
Baldwin TT, Magjuka RJ. 1997. Training as an
organizational episode: pretraining influences on trainee motivation. See Ford et al
1997, pp. 99–127
Barling J, Weber T, Kelloway EK. 1996. Effects of transformational leadership training
on attitudinal and financial outcomes: a field
experiment. J. Appl. Psychol. 81:827–32
Bassi LJ, Van Buren ME. 1999. The 1999 ASTD
State of the Industry Report. Alexandria, VA:
Am. Soc. Train. Dev.
Bell H, Waag W. 1998. Evaluating the effectiveness of flight simulators for training combat skills: a review. Int. J. Aviat. Psychol.
8:223–42
Blickensderfer E, Cannon-Bowers JA, Salas E.
1998. Cross training and team performance.
See Cannon-Bowers & Salas 1998b, pp. 299–
311
Blickensderfer E, Cannon-Bowers JA, Salas
E, Baker DP. 2000. Analyzing knowledge
requirements in team tasks. In Cognitive Task
Analysis, ed. JM Schraagen, SF Chipman, VJ
Shalin. Mahwah, NJ: Erlbaum
Bowers CA, Morgan BB Jr, Salas E, Prince C.
1993. Assessment of coordination demand
for aircrew coordination training. Mil. Psychol. 5:95–112
Brett JF, VandeWalle D. 1999. Goal orientation and goal content as predictors of performance in a training program. J. Appl. Psychol. 84:863–73
Bretz RD Jr, Thompsett RE. 1992. Comparing
traditional and integrative learning methods
in organizational training programs. J. Appl.
Psychol. 77:941–51
Brinkerhoff RO, Montesino MU. 1995. Partnership for training transfer: lessons from a
corporate study. Hum. Res. Dev. Q. 6:263–74
Button SB, Mathieu JE, Zajac DM. 1996.
Goal orientation in organizational research:
a conceptual and empirical foundation.
Organ. Behav. Hum. Decis. Process. 67:26–
48
Campbell JP. 1971. Personnel training and
development. Annu. Rev. Psychol. 22:565–
602
Cannon-Bowers J, Burns J, Salas E, Pruitt
J. 1998. Advanced technology in decision-making training. See Cannon-Bowers &
Salas 1998b, pp. 365–74
Cannon-Bowers JA, Salas E. 1997. Teamwork
competencies: the interaction of team member knowledge, skills, and attitudes. In
Workforce Readiness: Competencies and
Assessment, ed. HF O’Neil, pp. 151–74.
Mahwah, NJ: Erlbaum
Cannon-Bowers JA, Salas E. 1998a. Individual
and team decision making under stress: theoretical underpinnings. See Cannon-Bowers
& Salas 1998b, pp. 17–38
Cannon-Bowers JA, Salas E, eds. 1998b. Making Decisions Under Stress: Implications for
Individual and Team Training. Washington,
DC: Am. Psychol. Assoc.
Cannon-Bowers J, Salas E, Converse S. 1993.
Shared mental models in expert team decision making. In Individual and Group
Decision Making: Current Issues, ed. NJ
Castellan Jr, pp. 221–46. Hillsdale, NJ: Erlbaum
Cannon-Bowers JA, Salas E, Tannenbaum SI,
Mathieu JE. 1995. Toward theoretically based principles of trainee effectiveness: a
model and initial empirical investigation.
Mil. Psychol. 7:141–64
Christoph RT, Schoenfeld GA Jr, Tansky JW.
1998. Overcoming barriers to training utilizing technology: the influence of self-efficacy
factors on multimedia-based training receptiveness. Hum. Res. Dev. Q. 9:25–38
Cole ND, Latham GP. 1997. Effects of training
in procedural justice on perceptions of disciplinary fairness by unionized employees and
disciplinary subject matter experts. J. Appl.
Psychol. 82:699–705
Colquitt JA, LePine JA, Noe RA. 2000. Toward
an integrative theory of training motivation:
a meta-analytic path analysis of 20 years of
research. J. Appl. Psychol. In press
Colquitt JA, Simmering MS. 1998. Conscientiousness, goal orientation, and motivation to learn
during the learning process: a longitudinal
study. J. Appl. Psychol. 83:654–65
Dipboye RL. 1997. Organizational barriers to
implementing a rational model of training.
See Quinones & Ehrenstein 1997, pp. 119–
48
Driskell JE, Johnston JH. 1998. Stress exposure training. See Cannon-Bowers & Salas
1998b, pp. 191–217
Driskell JE, Willis RP, Copper C. 1992. Effect
of overlearning on retention. J. Appl. Psychol. 77:615–22
Dubois DA, Shalin VL, Levi KR, Borman WC.
1997/1998. A cognitively-oriented approach
to task analysis. Train. Res. J. 3:103–41
Dweck CS. 1986. Motivational processes affecting learning. Am. Psychol. 41:1040–48
Dweck CS, Leggett EL. 1988. A socialcognitive approach to motivation and personality. Psychol. Rev. 95:256–73
Dwyer DJ, Oser RL, Salas E, Fowlkes JE.
1999. Performance measurement in distributed environments: initial results and implications for training. Mil. Psychol. 11:189–
215
Eden D, Aviram A. 1993. Self-efficacy training to speed reemployment: helping people
to help themselves. J. Appl. Psychol. 78:352–
60
Ehrenstein A, Walker B, Czerwinski M, Feldman E. 1997. Some fundamentals of training
and transfer: Practice benefits are not automatic. See Quinones & Ehrenstein 1997, pp.
31–60
Elliot AJ, Church MA. 1997. A hierarchical
model of approach and avoidance achievement motivation. J. Pers. Soc. Psychol.
72:218–32
Facteau JD, Dobbins GH, Russell JEA, Ladd
RT, Kudisch JD. 1995. The influence of
general perceptions of the training environment on pretraining motivation and perceived training transfer. J. Manage. 21:1–
25
Fisher SL, Ford JK. 1998. Differential effects
of learner efforts and goal orientation on two
learning outcomes. Pers. Psychol. 51:397–
420
Ford JK, Kozlowski S, Kraiger K, Salas E,
Teachout M, eds. 1997. Improving Training Effectiveness in Work Organizations.
Mahwah, NJ: Erlbaum. 393 pp.
Ford JK, Quinones MA, Sego DJ, Sorra JS.
1992. Factors affecting the opportunity to
perform trained tasks on the job. Pers. Psychol. 45:511–27
Ford JK, Smith EM, Sego DJ, Quinones
MA. 1993. Impact of task experience and
individual factors on training-emphasis ratings. J. Appl. Psychol. 78:218–33
Ford JK, Smith EM, Weissbein DA, Gully
SM, Salas E. 1998. Relationships of goal orientation, metacognitive activity, and practice strategies with learning outcomes and
transfer. J. Appl. Psychol. 83:218–33
Ford JK, Weissbein DA. 1997. Transfer of
training: an updated review and analysis.
Perform. Improv. Q. 10:22–41
Fowlkes J, Dwyer D, Oser R, Salas E.
1998. Event-based approach to training. Int.
J. Aviat. Psychol. 8:209–22
Friedland N, Keinan G. 1992. Training effective performance in stressful situations:
three approaches and implications for combat training. Mil. Psychol. 4:157–74
Ghodsian D, Bjork R, Benjamin A. 1997. Evaluating training during training: obstacles and
opportunities. See Quinones & Ehrenstein
1997, pp. 63–88
Goettl BP, Yadrick RM, Connolly-Gomez
C, Regian WJ, Shebilske WL. 1996. Alternating task modules in isochronal distributed training of complex tasks. Hum. Factors 38:330–46
Goldsmith T, Kraiger K. 1997. Structural
knowledge assessment and training evaluation. See Ford et al 1997, pp. 19–46
Goldstein IL. 1980. Training in work organizations. Annu. Rev. Psychol. 31:229–72
Goldstein IL. 1993. Training in Organizations:
Needs Assessment, Development and Evaluation. Monterey, CA: Brooks/Cole. 3rd ed.
Gopher D, Weil M, Bareket T. 1994. Transfer of skill from a computer game trainer to
flight. Hum. Factors 36:387–405
Guzzo RA, Dickson MW. 1996. Teams in
organizations: recent research on performance and effectiveness. Annu. Rev. Psychol.
47:307–38
Guzzo RA, Yost PR, Campbell RJ, Shea GP.
1993. Potency in groups: articulating a construct. Br. J. Soc. Psychol. 32:87–106
Haccoun RR, Hamtiaux T. 1994. Optimizing
knowledge tests for inferring learning acquisition levels in single group training evaluation designs: the internal referencing strategy. Pers. Psychol. 47:593–604
Hanover JMB, Cellar DF. 1998. Environmental factors and the effectiveness of workforce
diversity training. Hum. Res. Dev. Q. 9:105–
24
Harrison JK. 1992. Individual and combined
effects of behavior modeling and the cultural assimilator in cross-cultural management training. J. Appl. Psychol. 77:952–62
Helmreich RL, Foushee HS. 1993. Why crew
resource management? Empirical and theoretical bases of human factors training in
aviation. See Wiener et al 1993, pp. 3–45
Helmreich RL, Merritt AC, Wilhelm JA. 1999.
The evolution of crew resource management
training in commercial aviation. Int. J. Aviat.
Psychol. 9:19–32
Helmreich RL, Wiener EL, Kanki BG. 1993.
The future of crew resource management in
the cockpit and elsewhere. See Wiener et al
1993, pp. 479–501
Howard A, ed. 1995. The Changing Nature of
Work. San Francisco: Jossey-Bass
Hunter JE. 1986. Cognitive ability, cognitive
aptitudes, job knowledge, and job performance. J. Vocat. Behav. 29:340–62
Huselid MA. 1995. The impact of human resource management practices on turnover,
productivity, and corporate financial performance. Acad. Manage. J. 38:635–72
Jacobs JW, Dempsey JV. 1993. Simulation and
gaming: fidelity, feedback, and motivation.
In Interactive Instruction and Feedback, ed.
JV Dempsey, GC Sales, pp. 197–229. Englewood Cliffs, NJ: Educ. Technol.
Jentsch F, Bowers C. 1998. Evidence for the
validity of PC-based simulations in studying
aircrew coordination. Int. J. Aviat. Psychol.
8:243–60
Johnston JH, Cannon-Bowers JA. 1996. Training for stress exposure. In Stress and Human
Performance, ed. JE Driskell, E Salas, pp.
223–56. Mahwah, NJ: Erlbaum
Kacmar KM, Wright PM, McMahan GC.
1997. The effects of individual differences
on technological training. J. Manage. Issues.
9:104–20
Kanfer R. 1991. Motivational theory and industrial and organizational psychology. In Handbook of Industrial and Organizational Psychology, ed. MD Dunnette, LM Hough,
2:75–170. Palo Alto, CA: Consult.
Psychol. 2nd ed.
Kirkpatrick DL. 1976. Evaluation of training.
In Training and Development Handbook, ed.
RL Craig, Ch. 18. New York: McGraw-Hill.
2nd ed.
Kirlik A, Fisk AD, Walker N, Rothrock
L. 1998. Feedback augmentation and part-task practice in training dynamic decision-making skills. See Cannon-Bowers & Salas
1998b, pp. 247–70
Kossek EE, Roberts K, Fisher S, Demarr
B. 1998. Career self-management: a quasi-experimental assessment of the effects of a
training intervention. Pers. Psychol. 51:935–
62
Kozlowski SW. 1998. Training and developing
adaptive teams: theory, principles, and research. See Cannon-Bowers & Salas 1998b,
pp. 247–70
Kozlowski SWJ, Brown K, Weissbein D,
Cannon-Bowers J, Salas E. 2000. A multilevel approach to training effectiveness: enhancing horizontal and vertical transfer. In
Multilevel Theory, Research and Methods in
Organizations, ed. K Klein, SWJ Kozlowski.
San Francisco: Jossey-Bass
Kozlowski SWJ, Salas E. 1997. A multilevel
organizational systems approach for the implementation and transfer of training. See
Ford et al 1997, pp. 247–87
Kraiger K, Ford JK, Salas E. 1993. Application of cognitive, skill-based, and affective
theories of learning outcomes to new methods of training evaluation. J. Appl. Psychol.
78:311–28
Kraiger K, Jung K. 1997. Linking train-
ing objectives to evaluation criteria. See
Quinones & Ehrenstein 1997, pp. 151–76
Kraiger K, Salas E, Cannon-Bowers JA.
1995. Measuring knowledge organization as
a method for assessing learning during training. Hum. Factors 37:804–16
Latham GP. 1988. Human resource training and
development. Annu. Rev. Psychol. 39:545–82
Leedom DK, Simon R. 1995. Improving team
coordination: a case for behavior-based training. Mil. Psychol. 7:109–22
London M, Moore EM. 1999. Continuous
learning. In The Changing Nature of Performance, ed. DR Ilgen, ED Pulakos, pp. 119–
53. San Francisco: Jossey-Bass
Martocchio JJ. 1992. Microcomputer usage as
an opportunity: the influence of context in
employee training. Pers. Psychol. 45:529–51
Martocchio JJ. 1994. Effects of conceptions of
ability on anxiety, self-efficacy and learning
in training. J. Appl. Psychol. 79:819–25
Martocchio JJ, Baldwin TT. 1997. The evolution of strategic organizational training.
In Research in Personnel and Human Resource Management, ed. GR Ferris, 15:1–46.
Greenwich, CT: JAI
Martocchio JJ, Judge TA. 1997. Relationship
between conscientiousness and learning in
employee training: mediating influences of
self-deception and self-efficacy. J. Appl. Psychol. 82:764–73
Martocchio JJ, Webster J. 1992. Effects of
feedback and cognitive playfulness on performance in microcomputer software training. Pers. Psychol. 45:553–78
Mathieu JE, Martineau JW. 1997. Individual
and situational influences in training motivation. See Ford et al 1997, pp. 193–222
Mathieu JE, Martineau JW, Tannenbaum SI.
1993. Individual and situational influences
on the development of self-efficacy: implication for training effectiveness. Pers. Psychol.
46:125–47
Mathieu JE, Tannenbaum SI, Salas E. 1992.
Influences of individual and situational characteristics on measures of training effectiveness. Acad. Manage. J. 35:828–47
McKenna JA. 1990. Take the “A” training:
Facing world-class challenges, leading-edge
companies use progressive training techniques to stay competitive. Ind. Week
239:22–26
Mitchell TR, Hopper H, Daniels D, George-Falvy J, James LR. 1994. Predicting self-efficacy and performance during skill acquisition. J. Appl. Psychol. 79:506–17
Morrow CC, Jarrett MQ, Rupinski MT. 1997.
An investigation of the effect and economic
utility of corporate-wide training. Pers. Psychol. 50:91–119
Neerincx MA, Griffoen E. 1996. Cognitive
task analysis: harmonizing tasks to human
capacities. Ergonomics 39:543–61
Noe RA, ed. 1999. Employee Training and Development. Boston: Irwin/McGraw-Hill
Noe RA, Wilk SL. 1993. Investigation of the
factors that influence employees’ participation in development activities. J. Appl. Psychol. 78:291–302
Ortiz GA. 1994. Effectiveness of PC-based
flight simulation. Int. J. Aviat. Psychol.
4:285–91
Oser RL, Cannon-Bowers JA, Salas E, Dwyer
DJ. 1999. Enhancing human performance
in technology-rich environments: guidelines for scenario-based training. In Human/Technology Interaction in Complex Systems, ed. E Salas, 9:175–202. Greenwich,
CT: JAI
Phillips JM, Gully SM. 1997. Role of goal orientation, ability, need for achievement, and
locus of control in the self-efficacy and goal-setting process. J. Appl. Psychol. 82:792–802
Prince C, Salas E. 1993. Training and research for teamwork in the military aircrew.
See Wiener et al 1993, pp. 337–66
Quick JC, Joplin JR, Nelson DL, Mangelsdorff
AD, Fiedler E. 1996. Self-reliance and military service training outcomes. Mil. Psychol.
8:279–93
Quinones MA. 1995. Pretraining context effects: training assignment as feedback.
J. Appl. Psychol. 80:226–38
Quinones MA. 1997. Contextual influencing
on training effectiveness. See Quinones &
Ehrenstein 1997, pp. 177–200
Quinones MA, Ehrenstein A. 1997. Training for a Rapidly Changing Workplace:
Applications of Psychological Research.
Washington, DC: Am. Psychol. Assoc.
Quinones MA, Ford JK, Sego DJ, Smith EM.
1995. The effects of individual and transfer
environment characteristics on the opportunity to perform trained tasks. Train. Res. J.
1:29–48
Randel JM, Main RE, Seymour GE, Morris BA. 1992. Relation of study factors to
performance in Navy technical schools. Mil.
Psychol. 4:75–86
Ree MJ, Carretta TR, Teachout MS. 1995.
Role of ability and prior job knowledge in
complex training performance. J. Appl. Psychol. 80:721–30
Ree MJ, Earles JA. 1991. Predicting training
success: not much more than g. Pers. Psychol. 44:321–32
Ricci KE, Salas E, Cannon-Bowers JA. 1995.
Do computer based games facilitate knowledge acquisition and retention? Mil. Psychol.
8:295–307
Rogers W, Maurer T, Salas E, Fisk A. 1997.
Task analysis and cognitive theory: controlled and automatic processing task analytic methodology. See Ford et al 1997, pp.
19–46
Rouillier JZ, Goldstein IL. 1993. The relationship between organizational transfer climate
and positive transfer of training. Hum. Res.
Dev. Q. 4:377–90
Sackett PR, Mullen EJ. 1993. Beyond formal
experimental design: towards an expanded
view of the training evaluation process. Pers.
Psychol. 46:613–27
Saks AM. 1995. Longitudinal field investigation of the moderating and mediating effects
of self-efficacy on the relationship between
training and newcomer adjustment. J. Appl.
Psychol. 80:221–25
Salas E, Bowers CA, Blickensderfer E.
1997. Enhancing reciprocity between training theory and training practice: principles,
guidelines, and specifications. See Ford et al
1997, pp. 19–46
Salas E, Bowers CA, Rhodenizer L. 1998. It
is not how much you have but how you use
it: toward a rational use of simulation to support aviation training. Int. J. Aviat. Psychol.
8:197–208
Salas E, Cannon-Bowers JA. 1997. Methods, tools, and strategies for team training.
See Quinones & Ehrenstein 1997, pp. 249–
80
Salas E, Cannon-Bowers JA. 2000. The
anatomy of team training. In Training and
Retraining: A Handbook for Business, Industry, Government and the Military, ed. S
Tobias, D Fletcher. Farmington Hills, MI:
Macmillan. In press
Salas E, Fowlkes J, Stout RJ, Milanovich DM,
Prince C. 1999. Does CRM training improve
teamwork skills in the cockpit?: two evaluation studies. Hum. Factors 41:326–43
Salas E, Klein G, eds. 2000. Linking Expertise
and Naturalistic Decision Making. Mahwah,
NJ: Erlbaum. In press
Salas E, Rhodenizer L, Bowers CA. 2000. The
design and delivery of CRM training: exploring the available resources. Hum. Factors. In
press
Schmidt RA, Bjork RA. 1992. New conceptualizations of practice: Common principles
in three paradigms suggest new concepts for
training. Psychol. Sci. 3:207–17
Schraagen JM, Chipman SF, Shalin VL, eds.
2000. Cognitive Task Analysis. Mahwah, NJ:
Erlbaum. 392 pp.
Schreiber D, Berge Z, eds. 1998. Distance
Training: How Innovative Organizations Are
Using Technology to Maximize Learning and
Meet Business Objectives. San Francisco:
Jossey-Bass
Shebilske WL, Jordan JA, Goettl BP, Paulus
LE. 1998. Observation versus hands-on practice of complex skills in dyadic, triadic,
and tetradic training-teams. Hum. Factors
40:525–40
Shebilske WL, Regian JW, Arthur W Jr, Jordan JA. 1992. A dyadic protocol for train-
ing complex skills. Hum. Factors 34:369–
74
Shute VJ, Gawlick LA. 1995. Practice effects
on skill acquisition, learning outcome, retention, and sensitivity to relearning. Hum. Factors 37:781–803
Shute VJ, Gawlick LA, Gluck KA. 1998. Effects of practice and learner control on short- and long-term gain. Hum. Factors 40:296–
310
Simon SJ, Werner JM. 1996. Computer training through behavior modeling, self-paced,
and instructional approaches: a field experiment. J. Appl. Psychol. 81:648–59
Skarlicki DP, Latham GP. 1997. Leadership training in organizational justice to increase citizenship behavior within a labor
union: a replication. Pers. Psychol. 50:617–
33
Smith E, Ford K, Kozlowski S. 1997. Building
adaptive expertise: implications for training
design strategies. See Quinones & Ehrenstein
1997, pp. 89–118
Smith-Jentsch KA, Jentsch FG, Payne SC,
Salas E. 1996a. Can pretraining experiences
explain individual differences in learning?
J. Appl. Psychol. 81:110–16
Smith-Jentsch KA, Salas E, Baker DP. 1996b.
Training team performance-related assertiveness. Pers. Psychol. 49:909–36
Smith-Jentsch KA, Salas E, Brannick M. 2000.
To transfer or not to transfer? An investigation of the combined effects of trainee characteristics and team transfer environments. J.
Appl. Psychol. In press
Smith-Jentsch KA, Zeisig RL, Acton B,
McPherson JA. 1998. Team dimensional
training: a strategy for guided team selfcorrection. See Cannon-Bowers & Salas
1998b, pp. 247–70
Stajkovic AD, Luthans F. 1998. Self-efficacy and work-related performance: a meta-analysis. Psychol. Bull. 124:240–61
Steele-Johnson D, Hyde BG. 1997. Advanced
technologies in training: intelligent tutoring
systems and virtual reality. See Quinones &
Ehrenstein 1997, pp. 225–48
Sternberg RJ. 1997. Managerial intelligence:
Why IQ isn’t enough. J. Manage. 23:475–
93
Stevens CK, Gist ME. 1997. Effects of self-efficacy and goal-orientation training on negotiation skill maintenance: what are the
mechanisms? Pers. Psychol. 50:955–78
Stout RJ, Salas E, Fowlkes J. 1997. Enhancing teamwork in complex environments
through team training. Group Dyn.: Theory
Res. Pract. 1:169–82
Swezey RW, Salas E, eds. 1992. Teams: Their
Training and Performance. Norwood, NJ:
Ablex
Tannenbaum SI. 1997. Enhancing continuous
learning: diagnostic findings from multiple
companies. Hum. Res. Manage. 36:437–52
Tannenbaum SI, Cannon-Bowers JA, Mathieu JE. 1993. Factors That Influence Training Effectiveness: A Conceptual Model and
Longitudinal Analysis. Rep. 93-011, Naval
Train. Syst. Cent., Orlando, FL
Tannenbaum SI, Dupuree-Bruno LM. 1994.
The relationship between organizational and
environmental factors and the use of innovative human resource practices. Group Organ.
Manage. 19:171–202
Tannenbaum SI, Smith-Jentsch KA, Behson
SJ. 1998. Training team leaders to facilitate
team learning and performance. See Cannon-Bowers & Salas 1998b, pp. 247–70
Tannenbaum SI, Yukl G. 1992. Training and
development in work organizations. Annu.
Rev. Psychol. 43:399–441
Tesluk PE, Farr JL, Mathieu JE, Vance
RJ. 1995. Generalization of employee involvement training to the job setting: individual and situational effects. Pers. Psychol.
48:607–32
Thayer PW. 1997. A rapidly changing world:
some implications for training systems in
the year 2001 and beyond. See Quinones &
Ehrenstein 1997, pp. 15–30
Thayer PW, Teachout MS. 1995. A Climate
for Transfer Model. Rep. AL/HR-TP-1995-0035, Air Force Mat. Command, Brooks Air
Force Base, Tex.
Tracey JB, Tannenbaum SI, Kavanagh MJ.
1995. Applying trained skills on the job: the
importance of the work environment. J. Appl.
Psychol. 80:239–52
VandeValle D. 1997. Development and validation of a work domain goal orientation instrument. Educ. Psychol. Meas. 57:995–1015
Volpe CE, Cannon-Bowers JA, Salas E, Spector PE. 1996. The impact of cross-training
on team functioning: an empirical investigation. Hum. Factors 38:87–100
Warr P, Bunce D. 1995. Trainee characteristics and the outcomes of open learning. Pers.
Psychol. 48:347–75
Wexley KN. 1984. Personnel training. Annu.
Rev. Psychol. 35:519–51
Wexley KN, Latham GP. 2000. Developing
and Training Human Resources in Organizations, Vol. 3. Englewood Cliffs, NJ: Prentice-Hall. In press
Wiener EL, Kanki BG, Helmreich RL, eds.
1993. Cockpit Resource Management. San
Diego, CA: Academic
Wilson MA, Zalewski MA. 1994. An expert
system for abilities-oriented job analysis.
Comp. Hum. Behav. 10:199–207
Yang H, Sackett PR, Arvey RD. 1996. Statistical power and cost in training evaluation: some new considerations. Pers. Psychol. 49:651–68
Yelon SL, Ford JK. 1999. Pursuing a multidimensional model of transfer. Perform. Improv. Q. 12:58–78
Zsambok C, Klein G, eds. 1997. Naturalistic
Decision Making. Mahwah, NJ: Erlbaum