Roop et al. Effect of Clinical Teaching
Teaching . . . is intended to lead toward learning (1). Educational research has explored
the relation between the process of educating
(what the teacher does) and the desired product of education (what the student learns). This process-product
paradigm has also begun to influence the study and development of medical education.
One well-known initiative for the evaluation and development of clinical teachers is the Stanford Faculty Development Program (2), which has a framework for clinical teaching composed of seven educational categories:
setting a learning climate, leadership in control of session,
communicating goals, fostering understanding and retention, evaluation, feedback, and encouraging self-directed learning.

From the Pulmonary/Critical Care Medicine Service (SAR), Walter
Reed Army Medical Center, Washington, DC, and the F. Edward Hebert School of Medicine (LP), Uniformed Services University of the
Health Sciences, Bethesda, Maryland.
The opinions contained in this article solely represent the views of the
authors and are not to be construed as representing the views of the
Department of Defense or the Department of the Army.
Requests for reprints should be addressed to Stuart A. Roop, MD, Pulmonary/Critical Care Medicine Service (SAR), Walter Reed Army Medical
Center, 6900 Georgia Avenue NW, Washington, DC 20307-5001.
Manuscript submitted August 20, 1999, and accepted in revised form
September 26, 2000.
© 2001 by Excerpta Medica, Inc. All rights reserved.
METHODS
From August 1992 to June 1994, 314 third-year students
at the Uniformed Services University of the Health Sciences School of Medicine (all students in 2 consecutive
years) were asked to fill out critiques evaluating the skills
of each clinical teacher. Two consecutive 6-week clerkship rotations were completed at 2 of 5 core teaching
hospitals. The critiques were completed before the students knew their final grade or test scores, although regular feedback was provided to the students throughout
the rotation. Students on the inpatient wards evaluated
interns, residents, and attending physicians; students on
ambulatory medicine rotations evaluated each clinic attending physician with whom they spent at least 5 half-days. Both groups evaluated their small-group preceptors. Individual teaching behaviors were rated on a Likert scale (1 = strongly disagree, 5 = strongly agree). The
critiques consisted of 10 to 15 statements that were designed to assess teaching behaviors expected of each level
of teacher within the 7 educational categories of the Stanford Faculty Development Program.
Table 2. Effects of Selected Factors on Clerkship Final Grade and Student Growth*

[Variable row labels not recoverable from the extracted table; recovered column values are listed by outcome.]

Clerkship total score: multiple r² (stepwise) = 0.28, 0.34, 0.35, 0.36, 0.36, 0.36; increase in r² = 0.06, 0.01, 0.01; P values = 0.0001, 0.0001, 0.004, 0.07, 0.6, 0.7.
Student growth: multiple r² = NA**, 0.10, 0.14, 0.15, 0.15, 0.15; increase in r² = 0.04, 0.01; P values = 0.003, 0.001, 0.04, 0.9, 0.9.

* Based on the clinical performance score (72%), the National Board of Medical Examiners Medicine subject examination (18%), a 3-hour open-book essay examination of analytic ability (6%), and a multiple-choice test on the interpretation of laboratory values (4%).
Growth was calculated as the difference between the clerkship final grade and a preclerkship performance score (see Methods).
Grade point average at the completion of the second year of medical school.
Cumulative measure (mean) of all clinical teaching scores from the student ratings of teaching behaviors.
Undergraduate degrees were divided into traditional biologic sciences (or premedical) degrees and nonbiologic sciences degrees.
** NA = not applicable: the preclerkship grade point average was one of the factors used in the growth outcome determination.
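As an illustration of the grading arithmetic described in the footnotes to Table 2, the sketch below computes the weighted clerkship total score and the growth outcome. The component names, the sample scores, and the 0–100 scale are hypothetical assumptions for the example, not values from the article.

```python
# Illustrative sketch (hypothetical scores): the clerkship total score is a
# weighted composite of four graded components, per the Table 2 footnote;
# "growth" is that composite minus a preclerkship performance score.

WEIGHTS = {
    "clinical_performance": 0.72,  # teachers' clinical evaluations
    "nbme_subject_exam":    0.18,  # NBME Medicine subject examination
    "essay_exam":           0.06,  # 3-hour open-book essay examination
    "lab_interpretation":   0.04,  # multiple-choice laboratory-values test
}

def clerkship_total(scores: dict) -> float:
    """Weighted composite of the four components (assumed 0-100 scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def growth(total: float, preclerkship: float) -> float:
    """Growth outcome: clerkship total minus preclerkship performance score."""
    return total - preclerkship

# Hypothetical student:
scores = {"clinical_performance": 85.0, "nbme_subject_exam": 78.0,
          "essay_exam": 90.0, "lab_interpretation": 70.0}
total = clerkship_total(scores)        # 0.72*85 + 0.18*78 + 0.06*90 + 0.04*70
print(round(total, 2))                 # 83.44
print(round(growth(total, 80.0), 2))   # 3.44
```

Note that the clinical performance score dominates the composite (72%), which is why the Discussion dwells on the reliability of teacher ratings.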
RESULTS
Complete preclerkship and postclerkship performance
data as well as student critiques were available for 293
(93%) of the 314 students who completed medicine
clerkships during the 2 academic years. A total of 2,817
critiques were completed (a mean of 9.6 per student).
DISCUSSION
During the relatively short period of a 12-week medicine
clerkship, reported teaching skills affected student learning, although the measured effect was modest. Skeff et al
(2) examined the effect of clinical teaching seminars on
teaching behaviors using faculty self-assessments and
student evaluations but did not look at student performance as an outcome. Lucas et al (11) demonstrated a
relation between student performance and teaching behaviors affecting learning climate (involving students). For students in the lower half of the class, the important behaviors involved management, leadership style, and involvement of learners, whereas for the upper half of the class, fostering active
learning was more important.
We used an evaluation program that has been rigorously developed and well studied (12,13). Student performance was assessed using a combination of quantified
scores on examinations and descriptive clinical grades
from teachers' observations as part of a highly structured
program using formal evaluation sessions. We have previously demonstrated the predictive validity of this evaluation system, relating clerkship performance to internship ratings (12). The reliability, or intraclass correlation,
of the clinical evaluations in this study was 0.83, and was
based on input from an average of 10 teachers for each
student. Previous studies have demonstrated that clinical
evaluations based on input from 7 or more teachers have a reliability of greater than 0.8, which is comparable to that of standardized multiple-choice examinations and considered suitable for high-stakes decision making (16). Our
study is further strengthened in that it spanned 2 full academic years, was conducted at several teaching hospitals, included 314 students, and had a response rate of
more than 93%.
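The dependence of composite reliability on the number of raters noted above is conventionally described by the Spearman-Brown prophecy formula. The article does not show this calculation, so the sketch below is a standard illustration under that assumption, not the authors' method:

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel
# ratings, given a single-rater reliability r. Used here only to illustrate
# the reported relation between rater count and composite reliability.

def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of an average over k raters."""
    return k * r_single / (1 + (k - 1) * r_single)

def single_rater_needed(target: float, k: int) -> float:
    """Single-rater reliability needed so that k raters reach the target."""
    return target / (k - (k - 1) * target)

# Implied single-rater reliability if 10 raters yield a composite ICC of 0.83:
r1 = single_rater_needed(0.83, 10)
print(round(r1, 3))                            # 0.328

# With 7 raters, a composite reliability of 0.8 requires roughly:
print(round(single_rater_needed(0.8, 7), 3))   # 0.364

# Sanity check: plugging the implied r1 back in reproduces the reported 0.83.
print(round(spearman_brown(r1, 10), 2))        # 0.83
```

The formula makes the study's point concrete: a single teacher's rating is only modestly reliable, but averaging roughly 7 to 10 raters pushes the composite above 0.8.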
One limitation of this study is that although the validity and reliability of student ratings of clinical teachers
using the Stanford framework has been demonstrated
(3,4,17), this method does rely on students' perspectives
of their teachers' behaviors rather than ratings by an outside observer. During clinical clerkships, the majority of a
student's grade (72% in our program) is based on clinical
evaluations from teachers. A reciprocal relation between
students' perceptions of teachers' behaviors and teachers'
evaluations of students' performance cannot be ignored: students and teachers who liked (or disliked)
each other may be more likely to provide favorable (or
unfavorable) evaluations. In fact, our study showed that,
on average, students who received higher clinical evaluation scores did rate their teachers' behaviors more favorably. Teaching during clinical clerkships is characterized
by small group interaction, and it may not be possible to
isolate the reciprocal reward phenomenon from accurate evaluations. For example, students who enjoy their
teachers may learn more, and teachers whose students are
engaged and enthusiastic may spend more time and use
more effective teaching behaviors. To minimize this effect, we used student reports of specific teaching behaviors rather than their general assessments of overall
teacher skills. More importantly, because the clinical
evaluation for each student consisted of structured input
from an average of 10 clinical teachers, the effect of any
favoritism should be lessened. Another limitation of this
study is that it involved only students at a single medical
school during the medicine clerkship. The findings may
not apply to other clerkships or to postgraduate training
in medicine.
In summary, we measured the effect of perceived clinical teaching behaviors on performance during a medicine clerkship and linked overall teaching behaviors to
medical student performance. This study provides an additional validation of the Stanford Faculty Development
Program framework using student learning as the measured outcome. Among the educational categories, teaching behaviors that reflect leadership style and foster understanding and retention were the most effective.
ACKNOWLEDGMENTS
The authors thank Andrew Shorr, MD, MPH, for review of the
manuscript, and David Cruess, PhD, for statistical review.
REFERENCES
1. Gage NL. Hard Gains in the Soft Sciences: The Case of Pedagogy. Bloomington, Ind: Phi Delta Kappa; 1985:1.
2. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching: evaluation of a national dissemination program. Arch Intern Med. 1992;152:1156–1161.
3. Litzelman DK, Stratos GA, Marriot DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73:688–695.
4. Marriot DJ, Litzelman DK. Students' global assessments of clinical teachers: a reliable and valid measure of teaching effectiveness. Acad Med. 1998;73(suppl 10):S72–S74.
5. Dewitt TG, Goldberg RL, Roberts KB. Developing community faculty: principles, practice and evaluation. Am J Dis Child. 1993;147:49–53.
6. Albright CL, Farquhar JW, Fortmann SP, et al. Impact of a clinical preventive medicine curriculum for primary care faculty: results of a dissemination model. Prev Med. 1992;21:419–435.
7. Patridge MI, Harris IB, Petzel RA. Implementation and evaluation of a faculty development program to improve clinical teaching. J Med Educ. 1980;55:711–713.
8. Ramsbottom-Lucier MT, Gillmore GM, Irby DM, Ramsey PG. Evaluation of clinical teaching by general internal medicine faculty in outpatient and inpatient settings. Acad Med. 1994;69:152–154.
9. Williams BC, Stern DT, Pillsbury MS. Validating a global measure of faculty teaching performance. Acad Med. 1998;73:614–615.
10. Skeff KM, Stratos GA, Mygdal WK, et al. Clinical teaching improvement: past and future for faculty development. Fam Med. 1997;29:252–257.
11. Lucas CA, Benedek D, Pangaro L. Learning climate and students' achievement in a medicine clerkship. Acad Med. 1993;68:811–812.
12. Lavin B, Pangaro L. Internship ratings as a validity outcome measure for an evaluation system to identify inadequate clerkship performance. Acad Med. 1998;73:998–1002.
13. Pangaro L, Gibson K, Russel W, Lucas C, Marple R. A prospective randomized trial of a six-week ambulatory medicine rotation. Acad Med. 1995;70:537–541.
14. Elnicki DM, Ainsworth MA, Magarian GJ, Pangaro LN. Evaluating the internal medicine clerkship: a CDIM commentary. Am J Med. 1994;97:1–6.
15. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74:1203–1207.
16. Carline JD, Paauw DS, Thiede KW, Ramsey PG. Factors affecting the reliability of ratings of students' clinical skills in a medicine clerkship. J Gen Intern Med. 1992;7:506–510.
17. Donnelly MB, Woolliscroft JO. Evaluation of clinical instructors by third-year medical students. Acad Med. 1989;64:159–164.