
Surg Clin N Am 86 (2006) 1–16

Evidence-Based Surgery
Jonathan L. Meakins, OC, MD, DSc, FRCS(Hon),
FRCS(C,Glas)
Nuffield Department of Surgery, John Radcliffe Hospital, University of Oxford,
Headington, Oxford OX3 9DU, England, UK

Evidence-based medicine (EBM) has entered the lexicon of clinical practice. We are concerned here with the set of knowledge that relates
to surgery, clearly a subset of medicine overall, and to surgical practice. The
author will therefore substitute evidence-based surgery (EBS) for EBM and
refer to its application clinically as evidence-based surgical practice (EBSP).
EBSP should incorporate the entire patient journey from first contact to
completion of the care episode whether short or long term.
The EBM proponents and gurus have had much scorn showered on them
since the introduction of the term and the development of coherent pro-
cesses for clinical application and a structure into which to place the ap-
proach. Established clinicians were ruffled at the very idea that their
clinical practices were not evidence-based. Most considered themselves to
be up-to-date, and some were opinion leaders in their field. The operative
word is, of course, opinion. Our approach to any clinical problem is a reflec-
tion of our training, and our trainers, and often what we last read or listened
to or even ignored.

What is evidence-based surgery?


Evidence-based surgery could be defined as the integration of best research evidence (clinically relevant research, including basic science, relating to diagnosis, treatment, and prognosis) with clinical expertise (skills and experience adapted to a particular patient) and patient values (patient preferences and attitudes toward the clinical entity and its overall management) [1].
The definition and the values it expresses are hard to dispute. The devil is
in the details, however, and the most contentious and difficult of these are in
clarifying, defining, or establishing ‘‘best research evidence.’’ Typically, a single key paper in a recently published journal of high repute may carry the day. The classic debating trick on rounds: ‘‘In this week’s journal, X, not Y,
is the way to manage this clinical scenario.’’ Yet in reality, the best evidence is usually a summary of all the evidence that will assist in solving the clinical problem while eliminating bias. The detailed process for a simple question in a particular patient might differ from that for the same clinical problem in a population. The steps, however, are essentially the same:
1. Define the question.
2. Search for the evidence.
3. Critically appraise the literature.
4. Apply the results: on a patient or a population.
5. Evaluate the outcome.
Closing the circle (evaluation of outcome) is as important as any of the other steps.
There are specific tools that must be mastered to implement this ap-
proach to any clinical issue. Just as dissecting with a scalpel is a learned skill
and somewhat different from using Metzenbaum scissors, search methodol-
ogy has multiple techniques that one can use to identify the same collection
of articles. Multiple techniques can be learned, but as with most tools, a fa-
vorite approach or technique will emerge. Searching is an important tool to
recognize and use well.
The next key tool is critical appraisal. This does not mean evaluation of
the center (reputable), the authors (well-known to me or by reputation) or
the level of agreement of the conclusions with our preformed and often
well-established thoughts. It means using a structured framework to evalu-
ate the literature (evidence). When asking a question relating to a specific
patient and clinical problem [2], the article must address three questions:
1. Is the evidence from this randomized, controlled trial (RCT) valid?
2. If valid, is it important?
3. When valid and important, can it be applied to the patient or problem at
hand?
We must always separate the statistically significant from the clinically important. Box 1 provides a template for evaluating an article and facilitates answering the above three questions.
A useful number to calculate from the critical appraisal is the number needed to treat (NNT): the number of patients who must be treated for one additional patient to achieve the primary outcome. For surgeons deciding upon an operation, the number needed to harm (NNH) is just as important and ought to be integrated into any clinical or operative decision [1].
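As an illustration only, the arithmetic behind the NNT and NNH can be sketched in a few lines. The event rates below are invented for the example and are not drawn from any trial cited in this article:

```python
# Illustrative calculation of NNT and NNH from trial event rates.
# The rates below are hypothetical, chosen only to show the arithmetic.

def number_needed(control_rate: float, experimental_rate: float) -> float:
    """Patients to treat for one additional outcome (benefit or harm).

    NNT (or NNH) is the reciprocal of the absolute difference in event
    rates between the control and experimental groups.
    """
    arr = abs(control_rate - experimental_rate)  # absolute risk reduction (or increase)
    return 1.0 / arr

# Hypothetical operation: the SSI rate falls from 12% to 8%, but a
# procedure-specific complication rises from 1% to 3%.
nnt = number_needed(0.12, 0.08)   # about 25 patients per infection prevented
nnh = number_needed(0.01, 0.03)   # about 50 patients per extra complication

print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
```

An absolute risk reduction of 4% thus means roughly 25 patients must undergo the operation for one to benefit; weighing that figure against the NNH is exactly the comparison the text describes.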
In their article on finding and appraising evidence elsewhere in this issue,
McCulloch and Badenoch outline the creation of a critically appraised topic
or CAT. This is another tool of great use, because once done it can be
updated as new evidence surfaces. Most approaches to CAT making and
application of results do depend on a number of statistical calculations.

Box 1. Guidelines for appraising a therapeutic article


1. Did the authors answer the question?
2. Were groups of patients randomized?
3. Are the comparison groups similar?
4. Were patients blinded?
5. Was it placebo-controlled?
6. Was evaluation of effect blinded?
7. Was the length of study appropriate?
8. What was the follow-up rate?
9. Are there clear measures of outcome?
10. Was it an ‘‘intention-to-treat’’ trial?
11. Is the context of the study similar to your own?
12. Did the treatment work?

Adapted from Dawes M. Randomised controlled trials. In: Dawes M, Davies P, Gray A, et al, editors. Evidence-based practice: a primer for health care professionals. London: Churchill Livingstone; 1999. p. 52.

Table 1 lists some of the terms used to define outcomes and to calculate measures of the importance of findings. To be clear, if the NNT to benefit
from an operation were 30 and NNH were also 30, serious consideration
to nonoperative therapy would be appropriate. The chance of benefit is
low and equal to that of harm. If the NNT were 5, thinking might be
very different.
The terms listed in Table 1 seem daunting and a trifle irritating, but they
do assist in understanding the final assessment of the value of an interven-
tion [3]. Systematic reviews and meta-analyses as well as results of clinical
trials will use these terms to clarify the magnitude of a treatment effect or
its absence.

Table 1
Measures of outcome and useful terms
ARR Absolute risk reduction
CER Control event rate
CI Confidence interval
EER Experimental event rate
OR Odds ratio
QALY Quality-adjusted life-year
ROC Receiver-operating characteristic
RR Relative risk
RRR Relative reduction in risk
Adapted from Dawes M. Randomised controlled trials. In: Dawes M, Davies P, Gray A,
et al. Evidence-based practice: a primer for health care professionals. London: Churchill Living-
stone; 1999. p. XI.
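For illustration, several of the measures in Table 1 can be computed from a two-by-two trial table. The counts below are invented solely to show the definitions, not taken from any study cited here:

```python
# Hypothetical 2x2 trial table illustrating the measures in Table 1.
# Counts are invented for illustration only.

events_control, n_control = 30, 100            # control group: 30 events in 100
events_experimental, n_experimental = 20, 100  # treated group: 20 events in 100

cer = events_control / n_control               # CER: control event rate
eer = events_experimental / n_experimental     # EER: experimental event rate

arr = cer - eer                                # ARR: absolute risk reduction
rrr = arr / cer                                # RRR: relative reduction in risk
rr = eer / cer                                 # RR: relative risk
odds_ratio = (events_experimental / (n_experimental - events_experimental)) / (
    events_control / (n_control - events_control)
)                                              # OR: odds ratio

print(f"CER={cer:.2f} EER={eer:.2f} ARR={arr:.2f} "
      f"RRR={rrr:.2f} RR={rr:.2f} OR={odds_ratio:.2f}")
```

With these invented counts the ARR is 0.10, so the NNT would be 1/0.10 = 10: ten patients treated for one additional good outcome.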

An integral component of EBM/EBS is the hierarchy of evidence. The


table in the article by Burger, Sugarmen, and Goodman on ethical issues else-
where in this issue outlining the levels of evidence and grades of recommen-
dation is from the Centre for Evidence-Based Medicine at the University of
Oxford [1,4]. Although in general surgeons are concerned with therapy and
its effects, the table also outlines the various types of evidence associated
with prognosis, diagnosis, and economic analysis. The studies are stratified
into levels of evidence by their quality, lack of bias, homogeneity, and so
forth. There are five levels. The strongest evidence, 1a, is a systematic review
of RCTs with homogeneity. The weakest is expert opinion, however quali-
fied. The level identifies the quality of the evidence and leads to a grade of
recommendation A through D. The development of the levels of evidence
and grades of recommendation has been iterative [4–7].
The levels of evidence most available to surgeons evaluating operative therapy are listed in Table 2.

Table 2
Levels of evidence
Grade Level
A 1c All or none
B 2a Systematic review of cohort studies
B 2b A cohort study
B 2c ‘‘Outcomes’’ research
C 4 Case series
D 5 Expert opinion

Most of the surgical literature consists of case series and has been heavily criticized for the absence of RCTs and well-structured prospective studies [8,9]. There is nonetheless a significant body of information in the literature that has directed therapy of common and uncommon clinical problems. Management of that knowledge and identification of significant lacunae via critical appraisal is a way forward.

Is surgical practice evidence-based?

The surgical community’s answer will inevitably be ‘‘of course.’’ Two studies have specifically addressed the question. Howes and colleagues [10] evaluated the surgical treatment of sequential emergency admissions and found that their practice was supported by Level I evidence in 24% and Level II in 71%. These were common problems, and the evaluation was of overall treatment plans. The second study [11] retrospectively assessed all admissions to a general surgical service, identified diagnosis and treatment, and then found supporting evidence in the literature for management. The studies were not classified by level, and the quality of RCTs was not defined. The distribution of study quality is similar to that of the above study, but at least 30% are Level III, grade C, or lower-quality studies. Although it is possible to find some support for most therapies, disciplined critical appraisal of the studies may demonstrate significant flaws, structural biases, or failure of adequate follow-up. The surgical literature has been criticized [8,9,12]; the
medical literature in general is often seen to be flawed through poor studies,
editorial bias, poor peer review, the power of industry, lack of generalizabil-
ity, and many more weaknesses. Indeed, it seems hard to believe that we de-
pend so much on a body of knowledge that is so heavily criticized. Critical
appraisal, systematic reviews, and meta-analysis will help to identify the
value of knowledge that we have and that which we require.
Despite some support of the notion that surgical practice is evidence-
based, there are significant data that suggest it is not. Examples follow of
demonstrated failure to apply grade A recommendations. The use and tim-
ing of prophylactic antibiotics for prevention of wound infection (surgical
site infection [SSI]) following an operation has been clearly defined for dec-
ades. Yet an important factor in SSIs is the failure to use perioperative anti-
biotics correctly. Classen and colleagues [13] identified inappropriate
antibiotic prophylactic in the mid 1980s.
The Latter Day Saints (LDS) Hospital in Salt Lake City, Utah has
tracked appropriate use of prophylactic antibiotics from 1985 through
1998. The use of education, and specifically a computer reminder system,
improved the appropriate administration of antibiotics from 40% to 96%
over a period of 6 years. When the reminders were discontinued, compliance
dropped to 80%. Appropriate use was 97% to 99% upon reintroduction of
the system [14].
In what must have been part of the same system, a computer-assisted
management program for antibiotic use improved quality of patient care
and reduced costs [14].
There are areas of surgical practice in colorectal surgery in which the data
are solid regarding use of drains, antibiotic prophylaxis, and use of subcu-
taneous heparin to prevent thromboembolic complications of an operation.
Indeed, no one could pass the ‘‘Principles of Surgery’’ examination before
board certification without knowledge of these three data sets. Despite everyone knowing how best to use the evidence on these three issues, Wasey and coworkers [15] have clearly demonstrated, on a colorectal service:

• Overuse of drains
• Underuse of heparin
• Misuse of antibiotics (timing, duration)
We know what to do, yet somehow fail to manage and implement the
knowledge we have [16].
Recent systematic reviews and meta-analyses of the use of drains [17] in abdominal surgery and of nasogastric tubes [18] in general surgery have confirmed individual studies showing absence of patient benefit and the
possibility of harm. Yet a tour on any general surgical floor and some others
will confirm that both abdominal drains and nasogastric tubes are in com-
mon use. Why?

Further evidence that we know what to do but often fail to implement that knowledge is presented by Dexter and colleagues [19]. They evaluated
the use of computer reminders for aspirin following a myocardial infarction, prophylactic heparin, and influenza or pneumococcal vaccination in defined clinical settings. The indications for these interventions are well-known. Yet for the vaccines, use was about 1% without reminders and only 36% or 51% with them. For heparin and aspirin after discharge, use increased to about one third, from 19% and 28% respectively. Some physicians routinely ignored the reminders; others were responsive. Yet if asked,
most would acknowledge that there was a valid evidence base for these
simple interventions.

Evidence-based surgical practice: why is it so difficult?


There are a number of more or less defensible reasons, associated both
with surgical practice and the nature of surgeons, that explain the difficul-
ties, and a number of less defensible reasons for why there is so much var-
iation in the management of specific clinical problems. More difficult is
understanding geographical variations in the performance of specific proce-
dures. In this instance, the author will not approach the regional variations,
but the concepts are likely to be much the same as they are when details of
operative and postoperative management are examined and found to vary
significantly within an institution as well as across institutions. A major
driver for variations in clinical practice relates to the surgical training struc-
ture, which is essentially an apprentice-based approach through which sur-
gical trainees are expected to practice in the model of their teachers. If
therefore there are four colorectal surgeons on a service and all do a partic-
ular component of patient preparation, operative practice, or postoperative
management differently, trainees will do what each surgeon instructs and
eventually develop some synthesis to manage their own patients. Thus the
surgical training structure dictates a major variation in clinical practice.
There is also a question of what is acceptable evidence. The process in
Australia for the evaluation of new techniques or new technology is outlined
in the article by Maddern elsewhere in this issue. Many of the Australian
evaluative processes end up indicating that more research is required. In many instances this remains the case in sorting aspects of surgical evidence, as can be seen not only in the Australian Safety and Efficacy Register of New Interventional Procedures–Surgical (ASERNIP–S) process but also in that of the Cochrane Collaboration. In many instances in which we take
very dogmatic approaches, a systematic review will indicate that the data
are not available to support any of them. This may mean that it does not
matter how a particular problem is approached, or that there is a best
method but we may not like it.
Frequently, there are areas in which best practice has been identified (for example, the use of drains, postoperative pain management, or nasogastric tubes) yet variation is common. A major problem is that implementation of a single approach supported by the evidence will require significant change
in professional behavior, which is a particularly irksome administrative task
for leader and follower. It impinges on professionals’ belief in the autonomy of their clinical decisions and their right to define their own approaches to the management of their own patients. Change is hard;
changing a professional’s practice is really hard. The translation of knowl-
edge to practice has foundered on a profession’s culture [20,21].
A final consideration regarding defensibility is that the rules of evidence are sometimes not useful for procedure-based problems. There are many aspects of surgery that are quite suitable for an RCT; there are others much less so, a subject addressed subsequently.
Much less defensible are the attitudes that it is ‘‘my way or the highway,’’
and that personal experience indicates best practice. One hears remarks that
the author or the journal is not credible. The reluctance to change is enor-
mous, and the resistance such that a negotiation over relatively simple clin-
ical protocols becomes lost in dogma and is often compounded by the
absence of critical evidence. The worst arguments are those where the evi-
dence is sparse. The dogmatism has a religious fervor. One must, however,
ask the question: If it doesn’t matter, why can the same problem not be
managed by everyone in the same manner? The benefits to nursing, house
staff, costs, patient safety, and so on would be great. The most obvious ex-
ample around which there are considerable differences of opinion, despite
the presence of good evidence, is the use of drains or nasogastric tubes in
association with gastrointestinal surgery. If drains are associated with in-
creased complications without particular benefit to the patient, it would
seem a simple matter to eliminate their use [17], yet this has proved to be
an irksome problem, a good example of which is the use of abdominal
drains in open cholecystectomy. Studies were done as to whether the drain should exit through a separate stab wound or through the wound itself, but in reality, every study looking at the value of those drains in cholecystectomy indicated that they were of no benefit to the patient, carried a clinically significant complication rate, and occupied nursing and physician time. On ward rounds, the question is posed: Should
the drain come out? ‘‘Well, it is draining a little bit, perhaps we will leave it
for another day,’’ and so on. The story with nasogastric tubes is the same
[19]. It seems obvious that a group should arrive at a standard policy where
neither of these instruments was used except in defined circumstances, yet
this has proven to be exceptionally difficult to achieve. The reasons have
to do with tradition (it is traditional to do this or, in my training program,
that is how we did it), and a conflict in belief (we do not believe the study because we can remember a situation in which X or Y was useful). To submit all patients to something that benefits only a small group of patients, yet has measurable complications in the larger group, seems madness. It is here that use of the NNT and NNH can be very useful.
8 MEAKINS

Finally, it is important to understand who the patient is. Frequently, interventions are used because the surgeon is worried. Putting
a drain in the patient to relieve surgeon anxiety does not seem to be an ap-
propriate therapy for either the surgeon’s anxiety or the patient’s operation.
It is important to recognize that in every doctor–patient relationship there
are indeed two parties or patients:
(1) the patient who is receiving medical advice or therapy
(2) the physician, who is concerned that what he does for the patient is in
the patient’s best interest, and who sometimes makes decisions on the
basis of personal angst.
It is important to suppress the need of physicians to look after their con-
cerns and apply best evidence to patient care. A common example of surgeon
worry dictating a decision is prolonging the use of prophylactic antibiotics.
The reasoning being: ‘‘That was a difficult operation, we better give three
days of antibiotics,’’ despite the dangers of resistance and Clostridium difficile,
and the large body of evidence that there is no patient benefit.
In the long run, a major concern must be that if the profession does not
use best evidence, external pressures will force the issue [16,20,21].

What would be the face of evidence-based surgical practice?


The most obvious change, perhaps the unacceptable one, would be stan-
dardization of pre- and postoperative care. Evidence-based protocols and
care maps would be standard operating practice.
In some instances, it would be the recognition that many details of care
do not really matter that much, and when evidence is not available it should
be possible to arrive at some reasonable compromise that is suitable for sim-
plifying modern surgical practice, the patient’s journey, and outcomes. Very
good examples would be ‘‘When do you remove clips following an open op-
eration?’’ and ‘‘Should that be done in hospital or at home?’’ With the short-
ening of patient stay in hospital, sutures or clips are frequently left in place
to be removed subsequently. Perhaps it would be more suitable to remove
them in hospital and apply adhesive strips, or to close the wound in another
manner, such as a subcuticular stitch with absorbable sutures, adhesive strips applied in the operating room, or the tissue glues recently promoted by industry. It should surely be possible to arrive at an agreement in a service as
to how to manage such a simple issue, yet it seems to be very difficult.
There are nonetheless many processes of care that can be protocolized on
the basis of very good evidence. Henrik Kehlet has developed such a pro-
gram in Copenhagen relating to what is termed ‘‘fast-track surgery’’
[22,23]. This is in essence a variation of a care map or protocolized surgery.
At the heart of this approach is a common pathway for therapy that is
developed by all relevant health care professionals and is composed of evidence-based elements. Where data are not available, team agreements are established. Checklists and care maps are integral to these patient management approaches. Monitoring of outcomes is important. Indeed, the process is identical to the five steps of EBS identified at the beginning of this
article. The evaluation step allows, perhaps guarantees, continuous quality
improvement. Patient education is a key component and makes the patient
an integrated participant. The system can plan patient discharge following
a colon resection on the third postoperative day [23].
The fast-track approach has been applied to a number of procedures, in-
cluding aortocoronary bypass and joint replacements of the hip and knee.
Although the development of these programs requires considerable energy
and flattening of the clinical hierarchy, they can be achieved. The evidence
suggests that the clinical outcomes are more than satisfactory [23]. The failure of wide adoption of these approaches speaks in part to the difficulty of the undertaking, and to the fact that a traditional clinical approach that appears to work is very hard to modify.
Cardiac surgeons have been exposed to publication of their results in the pop-
ular press. It is uncertain if this has led to significant improvement in regional
results or transfer of high-risk patients elsewhere. An approach to registry
data, evidence, and continuous quality improvement has been demonstrated
by the Northern New England Cardiovascular Disease Study Group
(NNECVDSG). They have demonstrated shortened length of stay, reduced
costs, and improved outcome with a judicious mix of their own risk-adjusted
data and application of best evidence. All six units in the NNECVDSG have
demonstrated benefits, and all have contributed to improved outcomes [24–26].

Are there external drivers?


External drivers to push clinical practice toward a more standardized
EBSP (perhaps the term ‘‘best practice’’ ought to be used) are principally
three. First, the cost of health care throughout the Western world is increasing almost exponentially and needs to be managed. Second, issues surrounding patient safety have highlighted the reality that there are vast differences in the way similar problems are handled; some of these differences undoubtedly will not matter, but in many areas there is likely to be a best-practice approach, and it is up to the medical profession to sort these out. Third, a threatening medico-legal environment, increasing in all Organization for Economic Cooperation and Development (OECD) countries, is
using concepts of evidence-based practice and standardized approaches to
the management of common problems to support claims. In circumstances
in which there is an untoward event, failure to have used best evidence leaves
the clinician open to legal liability.
Patient safety has been highlighted in recent years following the publica-
tion of the Institute of Medicine’s report To Err is Human [27]. This was fol-
lowed by a second report entitled Crossing the Quality Chasm [28]. Following
publication of the first, newspaper headlines across North America reported that 44,000 to 98,000 deaths a year were the result of medical error. Although
these are estimates, they nonetheless caught the attention of policy makers,
government, and patient advocacy groups, as well as the medico-
legal fraternity. Patient safety issues are therefore very high on the agenda
of a knowledge-management conscious profession. In the United Kingdom,
the Times of London recently headlined ‘‘Blundering hospitals kill 40,000 ev-
ery year’’ [29]. That over 50% of medical errors are pharmacological does
not leave procedure-based medicine in the clear. Errors in hospital are either
those of omission or commission if one excludes technical complications or
complications associated with patient disease or comorbidity. If there is an
error of omission or commission, there is a knowledge base against which de-
cisions can be tested either preoperatively, operatively, or postoperatively. It
is coordination of those evidence bases that is required, and their more routine application to common problems. Although this interferes with physician autonomy, the demands of patient safety will not long tolerate professional independence being placed ahead of continuous-quality-improvement, evidence-based surgical care.
It is likely that some of the solutions will be modeled after the airline indus-
try, which has defined the importance of checklists as well as persistent and
ongoing cross-checking. This demands a flattening of the traditional hierar-
chy present in surgical practice, in which junior members of the teams or non-
physicians are unwelcome in the identification of problems. There is indeed
evidence that the flattening of the hierarchy is beneficial, as demonstrated in
pediatric cardiac surgery through the work of Professor de Leval and colleagues [30]. Some of the solutions will incorporate the use of checklists in the operating room, where the surgical team, the anesthetist, and the nursing team all agree on the name of the patient; the operation to be done; the site and side of the operation; whether antibiotics have been given or are to be given; the use of heparin for thromboembolic prophylaxis; the presence of a catheter; and the presence or absence of drains, tubes, and techniques of pain control in the postoperative period. The checklist would be adapted to procedure and discipline, incorporating other relevant details.
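A minimal sketch of how such a checklist might be represented and verified in software follows. The item names and the case details are hypothetical, invented only to illustrate the cross-checking idea; a real system would be adapted to procedure and discipline:

```python
# Hypothetical operating-room checklist: every item must be explicitly
# confirmed (True/False) before the case proceeds; None means "not yet
# discussed aloud by the team".

REQUIRED_ITEMS = (
    "patient_name",
    "operation",
    "site_and_side",
    "antibiotics_given_or_planned",
    "thromboembolic_prophylaxis",
    "catheter_present",
    "drains_and_tubes",
    "postoperative_pain_control",
)

def unconfirmed_items(checklist: dict) -> list:
    """Return the items the team has not yet explicitly confirmed."""
    return [item for item in REQUIRED_ITEMS if checklist.get(item) is None]

# Invented case: two items have not yet been confirmed aloud.
case = {
    "patient_name": "anonymized",
    "operation": "open cholecystectomy",
    "site_and_side": "abdomen, midline",
    "antibiotics_given_or_planned": True,
    "thromboembolic_prophylaxis": True,
    "catheter_present": False,
    "drains_and_tubes": None,           # not yet discussed
    "postoperative_pain_control": None, # not yet discussed
}

print(unconfirmed_items(case))  # → ['drains_and_tubes', 'postoperative_pain_control']
```

The point of the sketch is the airline-style discipline: the case is not "complete" until every item has an explicit answer, flattening the hierarchy by letting any team member flag the unconfirmed items.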
The medical profession in all developed countries is feeling increasingly
threatened by malpractice suits. The EBM movement has touched the legal
profession as it uses the evidence to question medical practice when there
has been an adverse event. The two Institute of Medicine (IOM) reports fur-
ther identifying medical errors and issues of patient safety have used data to
support their contentions [27,28]. If evidence-based practice is to become the
standard of care rather than local practice, not an unreasonable scenario,
the surgical profession must learn EBS and EBSP [29].

How should surgical innovation be assessed?


Surgical innovation falls roughly into two categories. The first relates to
techniques and technology, all of which are tied into the development of new procedures or modification of old procedures. One must always ask the question ‘‘How should a new operation or a new technique, or the use of
a new technology, be introduced into clinical practice?’’ The other major
area of surgical innovation relates to processes of care, including the previ-
ously mentioned fast-track approaches, care maps, and the development
and use of clinical guidelines. All of these require assessment and evaluation
and, in many instances, the question needs to be posed ‘‘Are the rules of ev-
idence different for surgical procedures or aspects of surgical innovation
that involve clinical care?’’ The standard approach for assessment of any
clinical entity has been the RCT, preferably double-blind. One needs to ask what the role of the RCT is in the evaluation of new procedures and new technology, as well as of innovative approaches to processes of care [32].
If we look at the clinical advances in techniques and systems of care asso-
ciated with aortocoronary bypass, hepatic resection, pancreatic resection, or
aortic surgery for aneurysm disease, incremental changes in clinical care have
been too small to measure, yet despite operating in increasingly complex clinical circumstances, morbidity and mortality in association with all of these procedures have declined over time. This is undoubtedly associated in some
part with improved surgical technique; however, the sum of tiny improve-
ments in staging, surgical approaches, aspects of operating room technology,
and advances in anesthetic care, pain management, and all aspects of post-
operative management are responsible. There is indeed some evidence that
results improve over time as an individual surgeon develops an operative
team and the team approach to management of a specific operation. This
is often ascribed to the surgeon and the surgeon’s technique, but is almost
certainly related to a comprehensive and coordinated approach understood
by the team [30,33]. We see knowledge management, EBSP, in action.
One of the critical issues in evaluating surgical innovation, either of sys-
tems or related to procedures, has been the nature of the study required to
demonstrate progress. The majority of surgical studies are observational in
one manner or another and the number of RCTs is seen to be too low [8].
Box 2 outlines some of the progress areas in surgery in which RCTs have either not been required, not been helpful, or, indeed, have been the keys to understanding progress. There is no question that observational studies can provide very important information in the evaluation of new techniques; in some instances, circumstances will not allow the performance of an RCT, for a variety of reasons frequently outlined [32,34]. Arguments have recently been made
to indicate that the circumstances in which RCTs are not suitable in surgical
situations are plentiful, and pleas have been made for other forms of evaluative techniques to be applied to develop the evidence base defining technical approaches to a variety of issues [32,34,35]. Box 3 outlines some of the
difficulties in surgical RCTs, but also highlights that there are educational
problems that surgeons can easily address to rectify some of these issues.
The establishment of an educational approach to clinical epidemiology, with the acquisition of the discipline’s tools, many of which are incorporated in the principles of EBS, should allow a satisfactory evidence-based approach to be developed.

Box 2. Progress areas in surgery

Procedures in which RCTs are helpful
• Extracranial-intracranial (EC-IC) bypass
• Carotid endarterectomy
• Lung reduction surgery
• Stenting of vessels
• Breast cancer surgical trials
• Colon cancer adjuvant therapy

Procedures in which RCTs are not helpful
• Percutaneous drainage
• Liver transplantation
• Laparoscopic adrenalectomy or splenectomy
• Inflammatory bowel disease

RCTs not required
• ‘‘The penicillin effect’’
• Hip replacement
• Liver resection for colorectal metastases
• Nonoperative management of splenic injury
• Drainage of subphrenic abscess
The evolution of interventional radiology, and the inevitable competition
surfacing between surgical practice and radiologists' ability to perform
similar procedures in a minimally invasive manner, will drive comparative
studies. A recently published example is the International Subarachnoid
Aneurysm Trial, which demonstrated that coiling by interventional
neuroradiologists provided a better outcome for patients who had cerebral
aneurysms than did an open surgical approach [36]. There will be increasing
studies of this sort, and these are of course suitable for RCTs to define
which is the best approach.

Box 3. Difficulties in surgical RCTs

• Equipoise: patients/surgeons
• Bias: selection and observer
• Blinding
• Learning curve: when to study?
• Effectiveness versus efficacy
• Standardization of technique
• Lack of education in clinical epidemiology
Ensuring that comparable groups and comparable techniques are being studied
will continue to be difficult. Nevertheless, it is obvious that procedures
can be compared. The majority of studies evaluating surgery have been
comparisons with best medical management, the surgical procedure having been
established as safe and effective through observational or cohort studies.
Although the difficulties of surgical RCTs have been identified in the past
[32,34,35], it is useful to point out that some are quite significant (see
Box 3). Substantial equipoise, on the part of both patient and surgeon, is
required to conduct a procedure-based RCT. In an RCT of laparoscopic
cholecystectomy, it quickly became apparent that the patients did not have
equipoise with respect to the procedure: they had been sufficiently biased
that their wish was to have the "minimally invasive approach." Not long
after the patients lost their equipoise, the surgeons did as well, and the
study had to be terminated [37]. In addition, bias in patient selection and
observer evaluation is a significant variable in this setting, and blinding
both patient and evaluator to the procedure performed can be quite
difficult. When during the learning curve should the RCT be done? The
learning curve can be protracted in some procedures, whether because of
technical complexity or because the entire team requires training to produce
standardized results [30,33]. There is also the question of effectiveness
versus efficacy: how would the results in the hands of a master surgeon
compare with those of surgeons of average ability in the community?
Standardization of technique is enormously difficult, as was seen in the
recent Veterans Administration study of laparoscopic versus open hernia
repair [38], in which continuous monitoring was required to ensure that both
operations were done in a completely standardized manner. It is almost
inevitable that surgeons will constantly modify their techniques, creating
difficulties in relating results to the end point. Finally, part of the
problem within surgery is a lack of education in the tools of clinical
epidemiology.
All of this having been said, it is the surgeon's responsibility to define
the evidence base upon which clinical practice is founded, and some of these
issues will need to be faced squarely.

What are some of the solutions?


Driving forces behind the need for EBSP have already been mentioned. These
drivers will demand that all surgeons understand the principles of EBS and
how to implement them in practice. The objective is not to standardize every
aspect of surgical clinical activity, but to ensure that patients at all
times receive optimal surgical care. Therefore, from a surgical career point
of view, understanding the issues associated with surgical
epidemiology, knowledge management, and EBSP has implications for clinicians
in the community, surgeons in large metropolitan hospitals, surgeon
scholars, and the academic surgeon. All need some understanding not only of
how to find and evaluate the evidence but also of how to apply those
concepts to continuous quality improvement, closing the circle of surgical
audit. These issues are well outlined by Jones [31] in his American Surgical
Association Presidential Address. If the surgical
profession has an obligation to redefine clinical modus operandi and educa-
tional processes, the arguments for formal training in aspects of clinical ep-
idemiology during the surgical residency program are obvious, because all
surgeons will benefit from those educational exercises.
Implementing EBSP and the concepts of continuous quality improvement, which
are innately linked with the knowledge management required, will demand
leadership within the discipline to achieve the necessary cultural changes.
In addition, surgical training needs to become less apprentice-bound and
less individualistic in the solution of problems, and leadership is required
to identify what matters and what does not.
Surgical societies can contribute effectively through the way their annual
programs and continuing professional development activities are designed, by
insisting on specific standards for the studies presented. Journals, in
addition, have a specific responsibility to ensure through peer review and
editorial assessment that published articles meet
certain standards. In addition, a number of journals, including the Canadian
Journal of Surgery, the Journal of the American College of Surgeons, and the
British Journal of Surgery have either classified their articles or have specific
segments of the journal that address evidence-based principles.
Further work can be done through large surgical organizations such as the
American College of Surgeons, which has developed a clinical trials
operation, most visibly through the American College of Surgeons Oncology
Group, but also through the recent studies of hernia repair: the comparison
of techniques [38], as well as the evaluation of watchful waiting. The
College, in addition, has a research and optimal patient care division as
one of its principal enterprises, as outlined in the article by Jones
mentioned earlier [31]. Finally, the leaders of residency programs around
the world need to establish the importance of these principles within their
education systems. Programs are increasingly becoming educationally driven
rather than apprentice-oriented, and part of the curriculum needs to be the
principles associated with EBSP and clinical epidemiology.

References
[1] Sackett DL, Strauss SE, Richardson WS, et al. Evidence-based medicine: how to practice
and teach EBM. 2nd edition. London: Churchill Livingstone; 2000.
[2] Martinez E, Pronovost P. Evidence-based anaesthesiology. In: Dawes M,
Davies P, Gray A, et al, editors. Evidence-based surgery. 2000. p. 646.
[3] Dawes M. Randomized controlled trials. In: Dawes M, et al, editors. Evidence-based
practice: a primer for health care professionals. London: Churchill Livingstone; 1999.
p. 52.
[4] Ball CM, Phillips RS. Appendix 1: levels of evidence. In: Ball CM, Phillips RS, editors. Acute
medicine. London: Churchill Livingstone; 2001. p. 641.
[5] Canadian Task Force on the Periodic Health Examination. The periodic health examination.
CMAJ 1979;121:1193–254.
[6] Sackett DL. Rules of evidence and clinical recommendations on use of antithrombotic
agents. Chest 1986;89(Suppl 2):2S–3S.
[7] Cook DJ, Guyatt GH, Laupacis A, et al. Clinical recommendations using levels of evidence
for antithrombotic agents. Chest 1995;108(S4):227S–30S.
[8] Horton R. Surgical research or comic opera: questions, but few answers. Lancet 1996;347:
984–5.
[9] Hall JC, Hall JL. Randomisation in surgical trials. Surgery 2002;132:513–8.
[10] Howes N, Chagla L, Thorpe M, et al. Surgical practice is evidence-based. Br J Surg 1997;
84(9):1220–3.
[11] Kingston R, Barry M, Tierney S, et al. Treatment of surgical patients is evidence-based. Eur J
Surg 2001;167:324–30.
[12] Wells SA Jr. Surgeons and surgical trialsdwhy we must assume a leadership role. Surgery
2002;132:519–20.
[13] Classen DL, et al. The timing of prophylactic administration of antibiotics and the risk of
surgical-wound infection. N Engl J Med 1992;326:281–6.
[14] Burke JP. Maximising appropriate antibiotic prophylaxis for surgical patients: an update
from LDS Hospital, Salt Lake City. Clin Infect Dis 2001;33(Suppl 2):S78–83.
[15] Wasey N, Baughan J, de Gara CJ. Prophylaxis in elective colorectal surgery: the cost of ig-
noring the evidence. Can J Surg 2003;46:279–84.
[16] Bohnen JMA. Why do surgeons not comply with "best practice"? Can J Surg 2003;46:251–2.
[17] Petrowsky H, Demartines N, Rousson V, et al. Evidence-based value of prophylactic drain-
age in gastrointestinal surgery: a systemic review and meta-analyses. Ann Surg 2004;240:
1074–85.
[18] Nelson R, Tse B, Edwards S. Systematic review of prophylactic nasogastric decompression
after abdominal operations. Br J Surg 2005;92:673–80.
[19] Dexter PR, Perkins S, Overhage JM, et al. A computerized reminder system to increase the
use of preventive care for hospitalized patients. N Engl J Med 2001;345:965–70.
[20] Lenfant C. Clinical research to clinical practicedlost in translation? N Engl J Med 2003;349:
868–74.
[21] Leape L, Berwick DM. Five years after "To err is human": what have we
learned? JAMA 2005;293:2384–90.
[22] Kehlet H, Wilmore DW. Fast track surgery. In: Souba WW, Fink MP, Jurkovich OJ, et al,
editors. ACS surgery. Available at: www.acssurgery.com. Accessed August 19, 2005.
[23] Basse L, Hjort Jakobse D, Billesbolle P, et al. A clinical pathway to accelerate recovery after
colonic resection. Ann Surg 2000;232:51–7.
[24] Greene PS, Baumgartner WA. Cardiac surgery. In: Gordon T, Cameron JL,
editors. Evidence-based surgery. Hamilton (Canada): BC Decker; 2000.
[25] Nugent WC, Schults WC. Playing by the numbers: how collecting outcomes data changed
my life. Ann Thorac Surg 1994;58(6):1866–70.
[26] O’Connor GT, Plume SK, Olmstead EM, et al. A regional intervention to improve the hos-
pital mortality associated with coronary artery bypass graft surgery. The Northern New
England Cardiovascular Disease Study Group. JAMA 1996;275(11):841–6.
[27] Institute of Medicine. To err is human: building a safer health system. Washington (DC): In-
stitute of Medicine; 1999.
[28] Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century.
Washington (DC): Institute of Medicine; 2001.
[29] Headline, The Times of London. August 13, 2004. p. 1.
[30] Carthey J, de Leval MR, Reason JT. The human factor in cardiac surgery errors and near
misses in a high technology medical domain. Ann Thorac Surg 2001;72:300–5.
[31] Jones RS. Requiem and renewal. Ann Surg 2004;240:395–404.
[32] Meakins JL. Innovation in surgery: the rules of evidence. Am J Surg 2002;183(4):399–405.
[33] Sutton DN, Wayman J, Griffen SM. Learning curve for oesophageal cancer surgery. Br J
Surg 1998;85:1399–402.
[34] McCulloch P, Taylor I, Sasako M, et al. Randomized trials in surgery: problems and possible
solution. BMJ 2002;324:1448–51.
[35] Lilford R, Braunholz D, Harris J, et al. Trials in surgery. Br J Surg 2004;91:6–16.
[36] Molyneux A, Kerr R, Stratton I, et al. International Subarachnoid Aneurysm Trial (ISAT)
of neurosurgical clipping versus endovascular coiling in 2143 patients with ruptured intra-
cranial aneurysms: a randomised trial. Lancet 2002;360:1267–74.
[37] Barkun JS, Barkun AN, Sampalis JS, et al. Randomized controlled trials of laparoscopic ver-
sus mini-cholecystectomy. Lancet 1992;2:1116–9.
[38] Neumayer L, Giobbie-Hurder A, Jonasson O, et al. Open mesh versus laparoscopic mesh
repair of inguinal hernia. N Engl J Med 2004;350:1819–27.
