Evidence-Based Surgery
Jonathan L. Meakins, OC, MD, DSc, FRCS(Hon),
FRCS(C,Glas)
Nuffield Department of Surgery, John Radcliffe Hospital, University of Oxford,
Headington, Oxford OX3 9DU, England, UK
day. The classic debating trick on rounds: "In this week's journal, X, not Y,
is the way to manage this clinical scenario." Yet in reality, the best evidence
is usually a summary of all the evidence that will assist solving the clinical
problem with the elimination of bias. The detailed process for a simple ques-
tion in a particular patient might be different from the same clinical problem
in a population. The steps, however, are essentially the same:
1. Define the question.
2. Search for the evidence.
3. Critically appraise the literature.
4. Apply the results: on a patient or a population.
5. Evaluate the outcome.
Closing the circle, evaluation of outcome, is as important as any of the
other steps.
There are specific tools that must be mastered to implement this ap-
proach to any clinical issue. Just as dissecting with a scalpel is a learned skill
and somewhat different from using Metzenbaum scissors, search methodol-
ogy has multiple techniques that one can use to identify the same collection
of articles. Multiple techniques can be learned, but as with most tools, a fa-
vorite approach or technique will emerge. Searching is an important tool to
recognize and use well.
The next key tool is critical appraisal. This does not mean evaluation of
the center (reputable), the authors (well-known to me or by reputation) or
the level of agreement of the conclusions with our preformed and often
well-established thoughts. It means using a structured framework to evalu-
ate the literature (evidence). When asking a question relating to a specific
patient and clinical problem [2], the article must address three questions:
1. Is the evidence from this randomized, controlled trial (RCT) valid?
2. If valid, is it important?
3. When valid and important, can it be applied to the patient or problem at
hand?
We must always separate the statistically significant from the clinically
important. Box 1 provides the template for evaluating an article and facili-
tates its use in answering the above three questions:
A useful concept to calculate from the critical appraisal is the number
needed to treat (NNT). The NNT is the number of patients who must be
treated for one patient to achieve the primary outcome. For surgeons decid-
ing upon an operation, the number needed to harm (NNH) is just as impor-
tant and ought to be integrated into any clinical or operative decision [1].
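The arithmetic behind both figures is straightforward. The short sketch below
(written in Python, with purely hypothetical event rates chosen for illustra-
tion rather than taken from any trial cited here) shows how the NNT and NNH
fall out of the event rates in the control and experimental groups.

def number_needed_to_treat(cer, eer):
    """NNT: patients who must be treated for one to benefit. CER and EER are the
    control and experimental event rates of the outcome being prevented."""
    arr = cer - eer            # absolute risk reduction
    return round(1 / arr)      # rounded to the nearest whole patient

def number_needed_to_harm(cer_harm, eer_harm):
    """NNH: patients treated for one additional harmful event (eg, a complication)."""
    ari = eer_harm - cer_harm  # absolute risk increase
    return round(1 / ari)

# Hypothetical figures: the operation cuts the adverse outcome from 20% to 10%,
# but causes a complication in 3% of patients versus 1% with nonoperative care.
print(number_needed_to_treat(0.20, 0.10))  # -> 10: treat ten patients to prevent one event
print(number_needed_to_harm(0.01, 0.03))   # -> 50: one extra complication per fifty treated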
In their article on finding and appraising evidence elsewhere in this issue,
McCulloch and Badenoch outline the creation of a critically appraised topic
or CAT. This is another tool of great use, because once done it can be
updated as new evidence surfaces. Most approaches to CAT making and
application of results do depend on a number of statistical calculations.
Table 1 lists some of the terms used to define outcomes and to calculate
measures of the importance of findings. To be clear, if the NNT to benefit
from an operation were 30 and NNH were also 30, serious consideration
to nonoperative therapy would be appropriate. The chance of benefit is
low and equal to that of harm. If the NNT were 5, thinking might be
very different.
The terms listed in Table 1 seem daunting and a trifle irritating, but they
do assist in understanding the final assessment of the value of an interven-
tion [3]. Systematic reviews and meta-analyses as well as results of clinical
trials will use these terms to clarify the magnitude of a treatment effect or
its absence.
Table 1
Measures of outcome and useful terms
ARR Absolute risk reduction
CER Control event rate
CI Confidence interval
EER Experimental event rate
OR Odds ratio
QALY Quality-adjusted life-year
ROC Receiver-operating characteristic
RR Relative risk
RRR Relative reduction in risk
Adapted from Dawes M. Randomised controlled trials. In: Dawes M, Davies P, Gray A,
et al. Evidence-based practice: a primer for health care professionals. London: Churchill Living-
stone; 1999. p. XI.
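The quantities in Table 1 are linked by a handful of standard formulas. The
brief sketch below (again Python, again with hypothetical event rates; the
definitions are the standard ones found in texts such as reference [1], not
calculations reported in this article) shows how they are derived from the
control and experimental event rates.

def effect_measures(cer, eer):
    """Standard measures of treatment effect, computed from the control event rate
    (CER) and experimental event rate (EER) of an undesirable outcome."""
    arr = cer - eer                                    # absolute risk reduction
    return {
        "ARR": arr,
        "RR": eer / cer,                               # relative risk
        "RRR": arr / cer,                              # relative risk reduction
        "OR": (eer / (1 - eer)) / (cer / (1 - cer)),   # odds ratio
        "NNT": 1 / arr,                                # number needed to treat
    }

# Hypothetical example: the event rate falls from 20% (control) to 10% (treated).
print(effect_measures(0.20, 0.10))
# -> ARR 0.1, RR 0.5, RRR 0.5, OR ~0.44, NNT 10.0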
are established. Checklists and care maps are integral to these patient man-
agement approaches. Monitoring of outcomes is important. Indeed, the pro-
cess is identical to the five steps of EBS identified at the beginning of this
article. The evaluation step allows, perhaps guarantees, continuous quality
improvement. Patient education is a key component and makes the patient
an integrated participant. The system can plan patient discharge following
a colon resection on the third postoperative day [23].
The fast-track approach has been applied to a number of procedures, in-
cluding aortocoronary bypass and joint replacements of the hip and knee.
Although the development of these programs requires considerable energy
and flattening of the clinical hierarchy, they can be achieved. The evidence
suggests that the clinical outcomes are more than satisfactory [23]. The fail-
ure of these approaches to gain wide adoption speaks in part to the fact that
implementing them is difficult, and that a traditional clinical approach that
appears to work is very hard to change.
Cardiac surgeons have been exposed to publication of their results in the pop-
ular press. It is uncertain if this has led to significant improvement in regional
results or transfer of high-risk patients elsewhere. An approach to registry
data, evidence, and continuous quality improvement has been demonstrated
by the Northern New England Cardiovascular Disease Study Group
(NNECVDSG). They have demonstrated shortened length of stay, reduced
costs, and improved outcome with a judicious mix of their own risk-adjusted
data and application of best evidence. All six units in the NNECVDSG have
demonstrated benefits, and all have contributed to improved outcomes [24–26].
The Institute of Medicine report To Err Is Human estimated that 44,000 to
98,000 deaths a year were the result of medical error [27]. Although
these are estimates, they nonetheless caught the attention of policy makers,
government, and patient advocacy groups, as well as the medico-
legal fraternity. Patient safety issues are therefore very high on the agenda
of a knowledge-management conscious profession. In the United Kingdom,
the Times of London recently headlined "Blundering hospitals kill 40,000 ev-
ery year" [29]. That over 50% of medical errors are pharmacological does
not leave procedure-based medicine in the clear. Errors in hospital are either
those of omission or commission if one excludes technical complications or
complications associated with patient disease or comorbidity. If there is an
error of omission or commission, there is a knowledge base against which de-
cisions can be tested either preoperatively, operatively, or postoperatively.
What is required is coordination of those evidence bases and their more rou-
tine application to common problems. Although this interferes with physi-
cian autonomy, the demands of patient safety will not long tolerate profes-
sional independence at the expense of continuous quality improvement and
evidence-based surgical care.
It is likely that some of the solutions will be modeled after the airline indus-
try, which has defined the importance of checklists as well as persistent and
ongoing cross-checking. This demands a flattening of the traditional hierar-
chy present in surgical practice, in which junior members of the teams or non-
physicians are unwelcome in the identification of problems. There is indeed
evidence that the flattening of the hierarchy is beneficial, as demonstrated in
pediatric cardiac surgery through the work of Professor de Leval and col-
leagues [30]. Some of the solutions will incorporate the use of checklists in
the operating room, where the surgical team, the anesthetist, and the nursing
team all agree on the name of the patient, the operation to be done, the site and
side of the operation, whether or not antibiotics have been given or are to be
given, the use of heparin for thromboembolic prophylaxis, the presence of
a catheter, and the presence or absence of drains, tubes, and techniques of
pain control in the postoperative period. The checklist would be adapted to
procedure and discipline, incorporating other relevant details.
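Purely as an illustration of how such a checklist might be structured (the
article prescribes no particular format, and every field name below is hypo-
thetical), the items could be captured in a simple record that the team
completes aloud before incision.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PreIncisionChecklist:
    # Field names are hypothetical; they mirror the items listed in the text above.
    patient_name: str
    operation: str
    site_and_side: str
    antibiotics_given_or_planned: bool
    heparin_thromboprophylaxis: bool
    urinary_catheter_present: bool
    drains_and_tubes: List[str] = field(default_factory=list)
    postoperative_pain_control: str = "not yet agreed"

    def ready_for_incision(self) -> bool:
        """Proceed only when patient, operation, and site/side have been confirmed aloud."""
        return bool(self.patient_name and self.operation and self.site_and_side)

# Example: the surgical, anesthetic, and nursing teams confirm each item together.
checklist = PreIncisionChecklist(
    patient_name="confirmed against the wristband",
    operation="right inguinal hernia repair",
    site_and_side="right groin",
    antibiotics_given_or_planned=True,
    heparin_thromboprophylaxis=True,
    urinary_catheter_present=False,
    postoperative_pain_control="local anaesthetic infiltration plus oral analgesia",
)
assert checklist.ready_for_incision()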
The medical profession in all developed countries is feeling increasingly
threatened by malpractice suits. The EBM movement has touched the legal
profession as it uses the evidence to question medical practice when there
has been an adverse event. The two Institute of Medicine (IOM) reports fur-
ther identifying medical errors and issues of patient safety have used data to
support their contentions [27,28]. If evidence-based practice is to become the
standard of care rather than local practice, not an unreasonable scenario,
the surgical profession must learn EBS and EBSP [29].
open surgical approach [36]. There will be an increasing number of studies of
this sort, and such comparisons are of course suitable for RCTs to define which
is the best approach. The standards required to ensure that comparable groups
and techniques are being compared will remain difficult to meet. Nevertheless,
it is obvious that procedures can be compared. The majority of studies evaluating
surgery have been comparisons to best medical management, the surgical
procedure having been established as safe and effective via observational
studies or cohort studies.
Although the difficulties in RCTs have been identified in the past [32,34,35],
it is useful to point out that some of these are quite significant (see Box 3). The
equipoise required to do a procedure-based RCT is significant. It is required
on the part of both the patient and the surgeon. When conducting an RCT of
laparoscopic cholecystectomy, it became quickly apparent that the patients
did not have equipoise with respect to the procedure. Patients had been suffi-
ciently biased that their wish was to have the ‘‘minimally invasive approach.’’
Not long after patients lost their equipoise, surgeons did as well. The study
had to be terminated [37]. In addition, bias with respect to patient selection
and observer evaluation is a significant variable in this setting, and blinding
both patient and evaluator to procedure done can be quite difficult. When dur-
ing the learning curve should the RCT be done? The learning curve can
be quite protracted in some procedures, those of a complex technical or skill
level, whereas others are complex as a result of the entire team requiring train-
ing in the production of standardized results [30,33]. There is also the ques-
tion of effectiveness versus efficacy: how would the results in the hands of
a master surgeon compare with those of surgeons of average ability in the
community? Standardization of technique is enormously difficult, as was seen
in the recent Veterans Administration study of laparoscopic versus open
hernia repair [38]. Continuous monitoring was required to ensure that both
operations were done in a completely standardized manner. It is almost inev-
itable that surgeons will constantly modify their techniques and thereby
create difficulties in relating results to the end point. Finally, part of the
problem within surgery relates to a lack of education in the tools of clinical
epidemiology.
All of this having been said, it is the surgeon's responsibility to define the
evidence base upon which our clinical practice is founded, and some of these
issues will need to be faced squarely.
References
[1] Sackett DL, Straus SE, Richardson WS, et al. Evidence-based medicine: how to practice
and teach EBM. 2nd edition. London: Churchill Livingstone; 2000.
[28] Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century.
Washington (DC): Institute of Medicine; 2001.
[29] Headline, The Times of London. August 13, 2004. p. 1.
[30] Carthey J, de Leval MR, Reason JT. The human factor in cardiac surgery: errors and near
misses in a high technology medical domain. Ann Thorac Surg 2001;72:300–5.
[31] Jones RS. Requiem and renewal. Ann Surg 2004;240:395–404.
[32] Meakins JL. Innovation in surgery: the rules of evidence. Am J Surg 2002;183(4):399–405.
[33] Sutton DN, Wayman J, Griffin SM. Learning curve for oesophageal cancer surgery. Br J
Surg 1998;85:1399–402.
[34] McCulloch P, Taylor I, Sasako M, et al. Randomized trials in surgery: problems and possible
solutions. BMJ 2002;324:1448–51.
[35] Lilford R, Braunholtz D, Harris J, et al. Trials in surgery. Br J Surg 2004;91:6–16.
[36] Molyneux A, Kerr R, Stratton I, et al. International Subarachnoid Aneurysm Trial (ISAT)
of neurosurgical clipping versus endovascular coiling in 2143 patients with ruptured intra-
cranial aneurysms: a randomised trial. Lancet 2002;360:1267–74.
[37] Barkun JS, Barkun AN, Sampalis JS, et al. Randomized controlled trials of laparoscopic ver-
sus mini-cholecystectomy. Lancet 1992;340:1116–9.
[38] Neumayer L, Giobbie-Hurder A, Jonasson O, et al. Open mesh versus laparoscopic mesh
repair of inguinal hernia. N Engl J Med 2004;350:1819–27.