C
C2/C3 Traumatic
Spondylolisthesis
▶ Hangman’s Fracture
Calcium Channel Antagonist
Poisoning
▶ Calcium Channel Blocker Toxicity
Calcium Channel Blocker Toxicity
ADEEL ABBASI, FRANCIS DEROOS
Emergency Medicine, Hospital of the University of
Pennsylvania, Philadelphia, PA, USA
Synonyms
Calcium channel antagonist poisoning
Definition
Calcium channel blockers (CCBs) are widely used
throughout the world and are commonly prescribed for
the treatment of hypertension as well as dysrhythmias,
migraine headaches, Raynaud phenomenon, esophageal
spasm, and post-subarachnoid hemorrhage vasospasm.
While CCBs are relatively well tolerated therapeutically,
in overdose, these agents can lead to significant hemodynamic instability including hypotension and bradycardia.
For the most severely poisoned patients, there is no consistently reliable treatment available. Therefore, management decisions must be individualized on a case-by-case
basis and the physiologic response to each intervention
should be carefully monitored and considered as treatment continues.
All calcium channel blockers act by antagonizing
▶ voltage-sensitive calcium channels (L-type) which are
involved in excitation-contraction coupling in both
the myocardial and vascular smooth muscle as well as the
spontaneous depolarization and conduction within the SA
node, the AV node, and the conduction tissue in the myocardium. While these calcium channels are also present on
skeletal muscle cells, CCBs have little effect on the function of these tissues because they rely almost exclusively
on intracellular calcium stores rather than calcium influx
for their contractility needs [1].
In general, CCBs are well absorbed orally and are
hepatically metabolized, predominantly by the CYP3A
subgroup of the cytochrome P450 enzyme system. This
metabolism can be saturated in overdose, potentially
prolonging the half-life and duration of activity of these
drugs. CCBs are highly protein bound and have relatively
large volumes of distribution making it unlikely that
hemodialysis or even hemoperfusion would be of any
value in treating an overdosed patient [2].
In therapeutic doses, CCBs reduce calcium influx
into vascular smooth muscle cells resulting in a decrease
in the baseline contractility or tone of the peripheral vascular smooth muscle and, ultimately, a reduction in
peripheral vascular resistance and blood pressure. In
a typical myocardial cell, this reduction in calcium influx
also results in decreased contractility. However, in the
specialized conduction cells within the myocardium, this
reduction in the influx of positively charged calcium both
decreases the rate of spontaneous depolarization (phase 0)
of the SA and AV nodes and reduces the electrically induced depolarization essential for cardiac conduction
(Purkinje tissue). In therapeutic dosing, this reduces the
resting heart rate as well as the conduction through the AV
node, and may suppress spontaneous depolarizations initiated by abnormal or diseased myocardium as well as their
propagation, thereby suppressing dysrhythmias [1].
Effects in Poisoned Patients
In poisoned patients, the physiologic effects that have
just been described for therapeutic dosing become exaggerated resulting in hypotension (most common) and
bradydysrhythmias. The clinical symptoms and presentation of CCB toxicity depend predominantly on the degree
of cardiovascular compromise with symptoms ranging
from fatigue, dizziness, and postural light-headedness,
seen early and in milder cases, to confusion, syncope,
and shock, seen later and in more severe cases. Myocardial
chronotropy, dromotropy, and inotropy may become
impaired, initially causing sinus bradycardia and progressing to AV conduction abnormalities, idioventricular
rhythms, or complete heart block [2].
Although in overdose all CCBs are capable of causing
severe cardiovascular compromise and death, there are
some subtle differences in physiologic manifestations
depending on the particular agent. The CCBs with the
most significant myocardial effects, verapamil and diltiazem, have the most profound inhibitory effects on the SA
and AV node. Because of this, these two agents, especially
verapamil, are responsible for the majority of CCB overdose deaths. In contrast, nifedipine and the other
dihydropyridines have little myocardial binding. They
may initially produce hypotension with a relatively normal or even increased heart rate before progressing to a bradycardic rhythm if the poisoning is severe
enough.
Consequences of severe cardiogenic shock such as
seizures, cerebral and bowel ischemia, and renal failure
are all associated with severe CCB poisoning. Notably,
severe CNS depression without cardiogenic shock is
uncommon. In fact, in any CCB overdose in which altered mental status is a predominant feature in the setting of relatively normal vital signs, a coingestant should be strongly considered. In addition, hyperglycemia is often seen in severely poisoned patients; this is due, in part, to impaired calcium influx into the β-islet cells and the resulting reduction in insulin secretion.
The degree of toxicity present ultimately depends on
multiple factors including which CCB is ingested, the total
dose of the ingestion, the product formulation, and the
patient’s underlying cardiovascular health. The timing of
presentation of CCB toxicity can be as early as 2–3 h postingestion but can be significantly delayed for 8–12 h when
sustained release products are involved. Sustained release
formulations are particularly difficult to manage because
of this potential delay in onset of hemodynamic changes
combined with the continued and prolonged duration of
absorption, and, often, the large amount of the CCB
ingested [1].
Treatment
Given the high mortality associated with CCB toxicity,
patients presenting with CCB overdose should be started
on treatment immediately, starting if applicable, with gastrointestinal decontamination. If the patient is able to
cooperate, activated charcoal should be given orally at
a recommended dose of 1 g/kg to help reduce systemic
absorption from the gastrointestinal tract.
Multiple doses of activated charcoal (MDAC), in
a reduced dose of 0.5 g/kg and without a cathartic, should
be repeated every 1–2 h in ingestions involving sustained
release CCBs. The aim is to fill the gastrointestinal tract with charcoal so that it can rapidly adsorb the
CCB as it is slowly, but continuously, released from its
formulation. Orogastric lavage should be considered if the
ingestion involves a large dose of CCB, if the patient presents within 1–2 h postingestion, or if they are critically ill.
Note, however, that orogastric lavage may increase vagal tone and potentiate
any bradydysrhythmias.
▶ Whole bowel irrigation (WBI) with polyethylene
glycol in poisonings involving sustained-release formulations should also be strongly considered, even in asymptomatic patients. The dosing is 1–2 L/h via a nasogastric
tube in adults and 300–500 mL/h in children and may be
the most effective method of removing the large gastrointestinal reservoir of the CCB from the patient before it is
systemically absorbed. This should be continued until the
rectal effluent is clear. Both MDAC and WBI are important gastrointestinal decontamination methods and
should be initiated as early as possible in cases involving
sustained-release CCBs, even in well-appearing patients
and particularly children, in an attempt to avoid progressive toxicity [3].
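As a purely illustrative sketch of the weight-based decontamination arithmetic above, the doses work out as follows for hypothetical 70-kg adult and 20-kg child weights (the weights are assumptions, not from the text); this is not clinical guidance.

```python
# Illustrative arithmetic only; assumes hypothetical body weights (70-kg adult,
# 20-kg child) that are not part of the text. Not clinical guidance.

def charcoal_doses(weight_kg: float) -> dict:
    """Initial activated charcoal (1 g/kg) and reduced MDAC repeat dose (0.5 g/kg)."""
    return {
        "initial_charcoal_g": 1.0 * weight_kg,
        "mdac_repeat_g_every_1_2_h": 0.5 * weight_kg,
    }

# Whole bowel irrigation rates quoted above: 1-2 L/h (adults), 300-500 mL/h (children),
# continued until the rectal effluent is clear rather than to a fixed volume.
WBI_RATE_ML_PER_H = {"adult": (1000, 2000), "child": (300, 500)}

print(charcoal_doses(70))  # {'initial_charcoal_g': 70.0, 'mdac_repeat_g_every_1_2_h': 35.0}
print(charcoal_doses(20))  # {'initial_charcoal_g': 20.0, 'mdac_repeat_g_every_1_2_h': 10.0}
print(WBI_RATE_ML_PER_H["adult"])  # (1000, 2000)
```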
Pharmacotherapy should focus on improving and
supporting both cardiac output as well as peripheral
vascular tone. There is no single drug or regimen that
has been consistently effective. A crystalloid bolus of
10–20 mL/kg of normal saline for hypotension and atropine (0.5–1 mg IV in adults; 0.02 mg/kg in children) for bradycardia are reasonable starting points. These measures may initially stabilize mildly poisoned patients but are often inadequate in moderate to severe poisonings, where
multiple modalities are often needed simultaneously.
A reasonable approach to a CCB-poisoned patient,
after a fluid bolus and dose of atropine, may be to treat
with a calcium bolus as well as a catecholamine infusion
while preparing to administer ▶ hyperinsulinemic
euglycemia therapy (HIET). Recently, lipid emulsion
therapy has been used successfully and should be
strongly considered in significantly poisoned patients
[1]. Drugs including glucagon and phosphodiesterase
inhibitors may have limited efficacy and should be considered secondary adjuncts in the most critically ill
patients.
Calcium
Calcium administration transiently improves myocardial
inotropy and chronotropy and reverses the hypotension
seen in CCB toxicity, and should be given early in
bradycardic or hypotensive patients. It also improves the
action of atropine if given concurrently. The exact dosing
is unclear but a reasonable initial bolus is approximately
13–25 mEq of Ca2+ IV (10–20 mL of 10% calcium chloride
or 30–60 mL of 10% calcium gluconate), followed either
by repeat boluses every 15–20 min up to 3–4 doses
or a continuous infusion of 0.5 mEq/kg/h of Ca2+
(0.2–0.4 mL/kg/h of 10% calcium chloride or 0.6–1.2 mL/kg/h
of 10% calcium gluconate). Although there is no difference in the efficacy of calcium chloride or calcium gluconate, the calcium salt administered should be chosen
carefully as 1 g of calcium chloride contains 13.4 mEq of
calcium, which is more than three times the 4.3 mEq
found in 1 g of calcium gluconate. If repeat dosing or
continuous infusions of calcium are used, the serum
concentrations of calcium and phosphate should be
monitored for hypercalcemia or hypophosphatemia. In
addition, intravenous calcium may also cause nausea,
vomiting, flushing, constipation, confusion, and angina.
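The conversion between calcium salts described above is easy to get wrong; the following is a rough arithmetic sketch for a hypothetical 70-kg adult (the weight is an assumption, not from the text) and is not clinical guidance.

```python
# Illustrative arithmetic only, assuming a hypothetical 70-kg adult; not clinical guidance.

weight_kg = 70  # assumed patient weight (not from the text)

# 10% solutions contain 1 g of salt per 10 mL:
# calcium chloride ~13.4 mEq Ca2+ per gram, calcium gluconate ~4.3 mEq per gram
MEQ_PER_ML_CACL2 = 13.4 / 10   # ~1.34 mEq/mL
MEQ_PER_ML_CAGLU = 4.3 / 10    # ~0.43 mEq/mL

# Initial bolus of roughly 13-25 mEq Ca2+ (here 20 mEq)
bolus_meq = 20
print(f"Bolus {bolus_meq} mEq ~= {bolus_meq / MEQ_PER_ML_CACL2:.0f} mL of 10% CaCl2 "
      f"or {bolus_meq / MEQ_PER_ML_CAGLU:.0f} mL of 10% Ca gluconate")
# -> roughly 15 mL CaCl2 or 47 mL gluconate, consistent with the 10-20 mL / 30-60 mL ranges

# Continuous infusion of 0.5 mEq/kg/h
infusion_meq_h = 0.5 * weight_kg
print(f"Infusion {infusion_meq_h:.0f} mEq/h ~= {infusion_meq_h / MEQ_PER_ML_CACL2:.0f} mL/h CaCl2 "
      f"or {infusion_meq_h / MEQ_PER_ML_CAGLU:.0f} mL/h Ca gluconate")
# -> roughly 26 mL/h CaCl2 or 81 mL/h gluconate (about 0.37 and 1.16 mL/kg/h)
```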
Catecholamines
Catecholamines are indicated in any hypotensive CCB-poisoned patient. Mechanistically, it is logical to select an agent, such as norepinephrine, that has both β1-adrenergic and α1-adrenergic effects as a first-line drug.
Assessing the patient’s cardiac output and systemic vascular resistance will allow more refined catecholamine
choices. Dopamine is not recommended as a first-line
agent because it is predominantly an indirect acting pressor that acts by stimulating the release of norepinephrine
from the distal nerve terminal rather than by direct α- and β-adrenergic receptor stimulation, and these presynaptic
catecholamines are often depleted in severely stressed
patients.
Insulin and Glucose
The use of insulin and glucose, often termed hyperinsulinemic euglycemia therapy (HIET), has become the treatment of choice for severe CCB poisonings. The rationale for this use is multifactorial: CCB poisoning inhibits insulin release and forces the normally free fatty acid–dependent myocardial tissue to become predominantly carbohydrate dependent as well as resistant to insulin [4]. In addition, insulin itself has positive inotropic effects. Dosing recommendations, based on published clinical experience, include an initial bolus of 25–50 g of dextrose (0.5–1 g/kg) and 0.1 U/kg of insulin, followed by infusions of dextrose at 0.25–0.5 g/kg/h and insulin at 0.5 U/kg/h. The insulin infusion rate should be increased every 30–60 min if there is no hemodynamic improvement. Serum glucose should be monitored hourly throughout HIET [4].

Lipid-Emulsion Therapy
Calcium channel blockers are highly lipophilic agents, and lipid emulsion therapy is emerging as a promising adjunctive therapy for the management of CCB toxicity. Its efficacy is likely pharmacokinetic: the highly lipophilic drugs are tightly bound by the fat emulsion, which lowers free serum drug levels. A recommended initial bolus of a 20% lipid emulsion is 1 mL/kg over 1 min, repeated every 3–5 min to a maximum of 3 mL/kg, followed by an infusion of 0.25 mL/kg/min [5]. This treatment has gained great favor in the anesthesiology literature for the treatment of iatrogenic bupivacaine poisoning, and many institutions have protocols that can be extrapolated to CCB-poisoned patients.

Adjunctive Hemodynamic Support
In a few cases of severe CCB poisoning, the bradydysrhythmias and severe hypotension can be refractory to all pharmacologic therapy, and successful treatment may require more invasive measures such as cardiopulmonary bypass or extracorporeal membrane oxygenation. These modalities are technically demanding and available only at tertiary care centers; however, if implemented appropriately, they have been shown to provide the hemodynamic support needed until the CCB is metabolized and eliminated and baseline myocardial function is restored. Cardiac pacing may be attempted but is often unhelpful because patients with severe bradydysrhythmias from CCB poisoning are also likely to have dramatically impaired cardiac contractility; even if the heart rate is successfully improved, the cardiac output remains poor.
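The HIET and lipid-emulsion regimens described above are weight based; a brief illustrative calculation for a hypothetical 70-kg adult (an assumed weight, not from the text) is sketched below. It is not clinical guidance; dosing is titrated to hemodynamic response.

```python
# Illustrative arithmetic only for the HIET and lipid-emulsion regimens above,
# assuming a hypothetical 70-kg adult. Not clinical guidance.

weight_kg = 70

# HIET: bolus of 0.5-1 g/kg dextrose with 0.1 U/kg regular insulin, then
# dextrose 0.25-0.5 g/kg/h and insulin starting at 0.5 U/kg/h
hiet = {
    "dextrose_bolus_g": (0.5 * weight_kg, 1.0 * weight_kg),              # 35-70 g
    "insulin_bolus_units": 0.1 * weight_kg,                              # 7 U
    "dextrose_infusion_g_per_h": (0.25 * weight_kg, 0.5 * weight_kg),    # 17.5-35 g/h
    "insulin_infusion_units_per_h": 0.5 * weight_kg,                     # 35 U/h to start
}

# 20% lipid emulsion: 1 mL/kg bolus over ~1 min, repeatable to 3 mL/kg total,
# followed by an infusion of 0.25 mL/kg/min
lipid = {
    "bolus_mL": 1.0 * weight_kg,                  # 70 mL
    "max_cumulative_bolus_mL": 3.0 * weight_kg,   # 210 mL
    "infusion_mL_per_min": 0.25 * weight_kg,      # 17.5 mL/min
}

print(hiet)
print(lipid)
```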
Evaluation and Assessment
Any patient suspected of a CCB overdose should be immediately evaluated, even if there are no symptoms or signs of
toxicity at the time of initial presentation. This is paramount given the seriousness and potentially fatal nature of
CCB poisoning. Even with early and aggressive management, patients who are asymptomatic at presentation can deteriorate rapidly, developing cardiogenic shock. This is especially
true for pediatric patients who can be poisoned with
very small doses [3]. Furthermore, with higher-dose
and extended-release preparations available, the clinical
presentation of toxicity can be significantly delayed for up
to 12–15 h post ingestion.
Intravenous access and continuous electrocardiographic monitoring should be initiated immediately
upon arrival of the patient. In patients exhibiting any
evidence of cardiovascular compromise, early central
venous access and arterial catheterization is strongly
recommended to allow for more accurate hemodynamic
monitoring and guide therapy. Initial treatment should
begin with adequate oxygenation and airway protection as
clinically indicated. Given the potential for rapid deterioration of a severely poisoned patient, and the need for
aggressive critical therapies, early control of the airway
should be obtained.
A 12-lead ECG should be obtained promptly to assess
for dysrhythmias and conductional abnormalities, and
repeated at least every 1–2 h for the first several hours. If
the patient’s condition improves over time, ECGs can be
repeated at longer intervals.
Careful assessment of the degree of hypoperfusion and
its sequelae, if any, may include a chest radiograph, pulse
oximetry, serum chemistry analysis for metabolic acidosis
and renal function, and monitoring urine output. Assays
for various CCB serum concentrations are not routinely
available and are not used to manage patients after overdose. If a patient presents with bradycardia of unclear
origin, assessing electrolytes, particularly potassium and
magnesium, renal function, and a digoxin concentration,
is warranted.
All patients who have overdosed with CCBs who manifest any consistent signs or symptoms should be admitted
to an intensive care setting. In addition, because of the
possibility of significant delayed toxicity, patients who have ingested sustained-release formulations should be admitted for 24 h to a monitored setting, even if they are asymptomatic.
This is particularly important for toddlers and small
children in whom even one or a few tablets may produce
significant toxicity [3]. Only patients with a reliable history involving an immediate release formulation of the
CCB, who received appropriate gastrointestinal decontamination, who had a consistently normal ECG over
several hours of monitoring and who are asymptomatic,
can be safely referred directly for further psychiatric
assessment as needed.
After-care
The disposition of patients following treatment of calcium channel blocker toxicity will depend on the extent of their recovery. Patients who sustain any permanent neurologic injury will need appropriate care, including rehabilitation. Otherwise, for those who make a complete recovery and regain their baseline neurologic function, after-care will be limited to treating any complications from their hospital stay. Myocardial and peripheral vascular function should return to baseline. Patients with intentional ingestions, regardless of the degree of toxic manifestations, typically will require formal psychiatric evaluation.

Prognosis
While severe calcium channel blocker toxicity can cause profound cardiogenic collapse and death, if treatment is not delayed and invasive hemodynamic support is available, even severely poisoned patients can be supported for days with subsequent full recoveries. Multiple case reports document severely poisoned patients who were treated with seemingly extraordinary measures, such as several days of extracorporeal membrane oxygenation or cardiopulmonary bypass despite minimal neurologic function, who made a complete recovery and regained their baseline cardiac and neurologic function. Many hypothesize that CCBs’ unique neuroprotective effects may explain these remarkable results.

References
1. DeRoos F (2010) Calcium-channel blockers. In: Goldfrank’s toxicologic emergencies, 9th edn. McGraw-Hill, New York, pp 911–921
2. Kerns W (2007) Management of β-adrenergic blocker and calcium channel antagonist toxicity. Emerg Med Clin North Am 25:309–331
3. Arroyo AM, Kao LW (2009) Calcium channel blocker toxicity. Pediatr Emerg Care 25:532–538
4. Patel NP, Pugh ME, Goldberg S, Eiger G (2007) Hyperinsulinemic euglycemia therapy for verapamil poisoning: a review. Am J Crit Care 16:498–503
5. Jamaty C, Bailey B, Larocque A et al (2010) Lipid emulsions in the treatment of acute poisoning: a systematic review of human and animal studies. Clin Toxicol 48:1–27

Calcium Heparin
▶ Heparin

California Valley Fever
▶ Coccidioidomycosis
Candida Infection
▶ Candidiasis
Candidemia
▶ Candidiasis
Candidiasis
JOSÉ ARTUR PAIVA, J. M. PEREIRA
UAG da Urgência e Cuidados Intensivos, Hospital Sao
Joao and Medical School, University of Porto,
Porto, Portugal
Synonyms
Candida infection; Candidemia; Invasive candidiasis
Definition
Although there are no strict definitions for nonimmunocompromised, critically ill patients, Invasive
Candidiasis (▶ IC) encompasses a wide variety of severe
or invasive diseases that excludes superficial or less
severe diseases, like oropharyngeal and esophageal
candidiasis, and includes four overlapping forms:
candidemia, acute disseminated candidiasis, chronic
disseminated candidiasis, and deep organ candidiasis.
Essentially, all forms of IC probably begin as an episode
of candidemia, but the clinical presentations of these
four forms are different enough to make this classification useful. Therefore: (a) candidemia means the
isolation of Candida from one or more blood specimens;
given its high mortality and morbidity, essentially all
patients with candidemia, even those with a single culture, should receive therapy; (b) acute disseminated candidiasis usually presents as candidemia, but the special
feature of this form is that spread to several organs,
namely liver, kidney, spleen, eyes, brain, and heart, is
apparent; (c) chronic disseminated candidiasis (previously
hepatosplenic candidiasis) occurs almost exclusively following prolonged episodes of bone marrow dysfunction
and neutropenia; the liver, spleen, and sometimes kidney
are prominently infected with Candida, and blood cultures are rarely positive at this point, although presumably they were positive at the time infection was initiated;
(d) deep organ candidiasis in which, at the time of presentation, the blood is sterile and focal infection of the
specific organ is the only manifestation, although an
episode of candidemia must have led to seeding of the
affected area.
Treatment
Epidemiology
Candida spp. infections can no longer be considered as
rare infections restricted to neutropenic or immunocompromised patients. All types of patients are now at risk,
particularly those with severe underlying disease or critical
illnesses that need aggressive diagnostic or treatment procedures. Increased survival in patients with severe diseases,
more aggressive use of surgery, invasive procedures and
immunosuppression, and also increased use of broad spectrum antibacterial agents led to an increasing incidence of
candidemia in Europe and in the USA.
Candida species are the most common cause of invasive fungal infections (70–90%) and are generally reported
to be the fourth most prevalent pathogen isolated in blood
cultures or deep-site infections, although this prevalence
varies depending on the population surveyed [1].
▶ ICU candidiasis represents one third of all IC. The
incidence of candidemia, although rather variable from
unit to unit, ranging from 0.5 to 2.22 per 10,000 patient
days, is tenfold higher in the ICU than in the wards, and
Candida species are responsible for around 10% of all
ICU-acquired infections worldwide. In the recent SOAP
study, Candida spp. accounted for 17% of all sepsis in the
ICU and for 20% of all ICU-acquired sepsis.
A marked increase in the proportion of non-albicans
Candida isolates has been reported in several countries,
usually accounting for 40–60% of cases. This observation
correlated with the increasing use of azoles for prophylaxis
or empirical treatment. However, the association of previous fluconazole use with the isolation of non-albicans
strains has been shown in some studies but not confirmed in many others. The increasing incidence of non-albicans
Candida species is important, as some studies show
that candidemia due to non-albicans species, especially
C. glabrata, C. tropicalis, and C. krusei, is associated
with higher mortality. The fact that C. glabrata has
reduced susceptibility, and C. krusei intrinsic resistance
to fluconazole, may contribute to this higher mortality
and must be taken into account for the empiric therapeutic choice (Table 1). In fact, Kovacicova et al. found a significantly higher attributable mortality in patients infected
with fluconazole-resistant strains [1].
However, there are important geographic and demographic variations in terms of the prevalence of species of
Candida. For instance, C. glabrata is the second most
prevalent species, following albicans, in North America
and Northern Europe, but not in Southern Europe, Asia,
and Latin America, where C. parapsilosis occupies that
position. This fact may also have therapeutic implications
as C. parapsilosis may have reduced susceptibility to echinocandins and, therefore, azoles are the preferred agents (Table 1).

Candidiasis. Table 1 Epidemiological distribution and common susceptibility patterns of Candida species

Species | Frequency (%) | Amphotericin B | 5-FC | Fluconazole and itraconazole | Voriconazole and posaconazole (a) | Echinocandins (b)
C. albicans | 40–60 | S | S | S | S | S
C. glabrata | 20–30 | S to I | S | S-DD to R | S to S-DD? | S
C. krusei | 5–10 | S to I | I to R | R | S to S-DD? | S
C. lusitaniae | 0–5 | R | S | S | S | S
C. parapsilosis | 10–20 | S | S | S | S | S to I?
C. tropicalis | 20–30 | S | S | S | S | S

5-FC 5-fluorocytosine, S susceptible, I intermediate, S-DD susceptible dose-dependent (dose needs to be increased to achieve therapeutic efficacy), R resistant
(a) Although voriconazole and posaconazole are active in vitro, in vivo, and in early clinical experience against C. glabrata and C. krusei, their efficacy against these classically azole-resistant organisms has not been clearly established
(b) Minimum inhibitory concentrations of the echinocandins are higher for C. parapsilosis than for other Candida species
Timing
Kumar et al. showed that median time to initiation of
effective antimicrobial therapy in septic shock is significantly longer for Candida (35.1 h) than for bacteria (5.5 h). They also demonstrated that survival decreased
12% per hour of delay of initiation of adequate antifungal
therapy in patients with fungal sepsis and shock [2].
Morrell et al. evaluated the impact of delayed antifungal
therapy on mortality. Time to initiation of empiric antifungal therapy was measured in 12-h increments, and
a significant mortality benefit was observed when therapy
was started within 12 h of the drawing of the first positive
blood culture [2, 4]. Garey et al. showed that early – within
24 h – antifungal initiation was associated with a significantly lower mortality rate and that there was a progressive
mortality increase with increasing delays in initiation of
therapy [2, 4]. More recently, Parkins et al. found that
early adequate empiric antifungal therapy was associated
with a significant reduction in mortality [4]. Taur et al.
subdivided time from collection of blood cultures to initiation of antifungal therapy into three periods: incubation
period (time from collection to positivity), provider notification period (time from blood culture positivity to
provider notification), and antifungal initiation period
(from provider notification to the administration of the
first dose of antifungal). In this study, in cancer patients
with candidemia, the incubation period (median 32.1 h)
accounted for a significant amount of time compared with
the provider notification (median 0.3 h) and antifungal
initiation times (median 7.5 h), and its duration was
associated with in-hospital mortality. Therefore, as modern
blood culture systems still require around 24–48 h of
incubation to positivity, new strategies are needed to
shorten the incubation time.
Which Antifungal Drug?
“Old” (fluconazole and polyenes) and “new” (second-generation azoles and echinocandins) antifungals for the
management of candidemia and other forms of IC differ
from each other in terms of spectrum, pharmacokinetics
and pharmacodynamics, efficacy, interactions, and side
effects. Two main factors should be taken into account in
the choice of the antifungal: the species of Candida and the
host (focus, hemodynamic stability, organ dysfunction,
previous use of azoles, concomitant drugs).
Triazoles
Triazoles exert their effects within the fungal cell membrane. The inhibition of cytochrome P450 (CYP)-dependent 14-α-demethylase prevents the conversion of
lanosterol to ergosterol. This mechanism results in the
accumulation of toxic methylsterols and resultant inhibition of fungal cell growth and replication.
Fluconazole remains one of the most prescribed
triazoles because of its excellent bioavailability, tolerability, and side-effect profile. More than 80% of ingested
drug is found in the circulation, and its absorption is not
affected by food consumption, gastric pH, or disease state.
Almost 60–70% is excreted unchanged in the urine; therefore, the dose should be adjusted in patients with
a reduced clearance of creatinine. Only 10% is protein
bound, and it also exhibits excellent tissue penetration,
namely, in the central nervous system, where CSF levels are
80% of matched serum levels [4]. Fluconazole is active
(fungistatic) against most Candida spp. with the exception
of Candida krusei (intrinsic resistance because of an
altered cytochrome P-450 isoenzyme). Candida glabrata
can be resistant or dose-dependent susceptible (12 mg/kg/
day). For IC, a loading dose of 12 mg/kg followed by
a daily dose of 6 mg/kg should be administered since higher
doses seem to be associated with a better outcome [3].
Although fluconazole has substantially fewer drug–drug
interactions than other triazole compounds, it may
increase serum levels of phenytoin, warfarin, rifabutin,
benzodiazepines, cyclosporine, glipizide, and glyburide.
On the other hand, fluconazole levels are reduced with
concomitant use of rifampin [4].
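As a rough illustration of the fluconazole dosing just described, the sketch below computes loading and maintenance doses for a hypothetical 70-kg patient. The 50% maintenance-dose reduction for a creatinine clearance of 50 ml/min or less is a commonly cited label adjustment and is an assumption here; the text above only states that the dose should be adjusted. Not clinical guidance.

```python
# Illustrative sketch of the fluconazole regimen for invasive candidiasis described
# above, for a hypothetical 70-kg patient. The 50% renal reduction is an assumed,
# commonly cited adjustment, not taken from the text. Not clinical guidance.

def fluconazole_mg(weight_kg: float, crcl_ml_min: float) -> dict:
    loading = 12 * weight_kg          # 12 mg/kg loading dose
    maintenance = 6 * weight_kg       # 6 mg/kg/day thereafter
    if crcl_ml_min <= 50:             # assumed renal adjustment (see note above)
        maintenance *= 0.5
    return {"loading_mg": loading, "maintenance_mg_per_day": maintenance}

print(fluconazole_mg(70, crcl_ml_min=90))   # {'loading_mg': 840, 'maintenance_mg_per_day': 420}
print(fluconazole_mg(70, crcl_ml_min=30))   # {'loading_mg': 840, 'maintenance_mg_per_day': 210.0}
```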
Voriconazole is a low molecular weight water-soluble
second-generation triazole with a chemical structure similar to fluconazole. It has a potent fungistatic activity
against Candida spp. usually with lower MICs compared
to fluconazole. Voriconazole is available in intravenous
(▶ IV) and oral formulations. The oral formulation has excellent bioavailability, which is reduced by 80% when taken with fatty foods. Like fluconazole, CSF and vitreous
penetration is excellent. In adults weighing more than
40 kg, the recommended oral dosing regimen includes
a loading dose of 400 mg twice daily on day 1, followed
by 200 mg twice daily. Intravenously, after a loading dose
of 6 mg/kg twice daily, a maintenance dose of 3–4 mg/kg
IV every 12 h is recommended [3]. In patients with a creatinine clearance lower than 50 ml/min, IV voriconazole
should not be used because the cyclodextrin vehicle, to which the drug is complexed, can accumulate.
Oral voriconazole does not require dosage adjustment
for renal failure, but it is the only triazole that requires
dosage reduction for patients with moderate-to-severe
liver failure [3]. In adults, voriconazole exhibits nonlinear hepatic metabolism. Polymorphisms within CYP2C19 are responsible for interpatient differences in serum concentrations. The unpredictability of patient enzymatic activity has generated an interest in the routine use of voriconazole serum-level determination. During the first week of treatment, serum levels should be kept between
1 and 5.5 mg/l, not only to prevent treatment failures,
but also to reduce toxicity, mainly neurotoxicity. In IC,
its clinical use has been primarily for step-down oral
therapy in patients with C. krusei and fluconazole-resistant but voriconazole-susceptible C. glabrata infections. Voriconazole is typically well tolerated, but some
patients experience abnormal vision (up to 23%; usually
transient and infusion related, without sequelae), skin
rash, and transaminase elevation [5].
Despite having in vitro activity against Candida spp.
that is similar to voriconazole, posaconazole is not
recommended for primary IC therapy. It is currently
available only as an oral suspension with high oral bioavailability, especially when given with fatty foods [3].
Polyenes
Amphotericin B and nystatin are the currently available
polyenes, but nystatin is limited to topical use.
Amphotericin B binds to ergosterol within the fungal cell
membrane. This process disrupts membrane permeability by forming oligomeric pores, with subsequent efflux of potassium and intracellular molecules, causing fungal death. Amphotericin B deoxycholate
(▶ Amb-d) demonstrates a rapid fungicidal in vitro activity against almost all Candida spp. with the exception of
Candida lusitaniae, but is associated with high toxicity. To
avoid amphotericin B deoxycholate–induced nephrotoxicity, several lipid formulations were developed: liposomal
amphotericin B (▶ L-Amb), amphotericin B lipid complex, and amphotericin B colloidal dispersion. These lipid
formulations are generally less toxic but equally effective
as Amb-d [5]. The peak serum level to minimum inhibitory
concentration ratio is the best predictor of outcome. All
formulations are highly protein bound, have long half-lives, and are widely distributed into tissues, but exhibit
poor CSF penetration. The exact route of elimination of
amphotericin B is not known and, despite its nephrotoxicity, no dose adjustment is necessary in patients with
renal failure. Renal toxic effects of Amb-d are associated
with a sixfold increase in mortality and a significant
increase in hospital costs. Infusion-related reactions
(fever, chills, hypotension, and hypoxemia) are also
frequently observed [3, 5].
For most IC, the usual dosage of Amb-d is 0.5–0.7 mg/
kg/day, but dosages as high as 1 mg/kg/day should be
considered for infections caused by less susceptible species
such as C. glabrata or C. krusei. The typical dosage for lipid
formulations is 3–5 mg/kg/day [3].
Echinocandins
Echinocandins (caspofungin, anidulafungin, micafungin)
are the most recently introduced class of antifungals. They
inhibit the synthesis of β-1,3 glucan by inhibiting the activity of glucan synthase. This mechanism impairs cell-wall integrity and leads to osmotic lysis. They are fungicidal drugs, active against albicans and non-albicans
species, and susceptibility differences between the different
agents in this class are minimal. C. parapsilosis and
C. guilliermondii demonstrate less in vitro susceptibility
to echinocandins than do most other Candida spp. related
to amino acid polymorphism in the main subunit of
glucan synthase (Fks1). However, association between
▶ MIC and treatment outcome is inconsistent [4]. Considering that echinocandin efficacy is predicted by peak to
MIC ratios (five- to tenfold), they are administered once
daily. Although echinocandin resistance is uncommon, it
may occur during therapy. Several studies reported
a decrease in microbial kill at higher doses and supra-MIC concentrations: the paradoxical effect. However, its
mechanism and clinical implications are unknown. This
class of antifungals is only available in IV formulations due
to its poor oral absorption. They are highly protein bound,
have long half-lives, and their vitreal and CSF penetration
is negligible [4]. Caspofungin is metabolized by both
hepatic hydrolysis and N-acetylation, and inactive metabolites are then eliminated in the urine. Micafungin is
metabolized by nonoxidative metabolism within the liver,
and anidulafungin undergoes unique nonenzymatic degradation. All echinocandins have few side effects (phlebitis,
headache, abdominal pain, diarrhea, elevated liver transaminases) and do not need dosage adjustment in patients
with renal failure or dialysis. It is recommended to reduce
caspofungin dosage in patients with moderate-to-severe
hepatic impairment [3, 5]. No significant drug interactions
were described for anidulafungin. Caspofungin has several
drug interactions with agents metabolized through the
cytochrome P450 system. As serum levels are reduced in
the presence of rifampin, phenytoin, carbamazepine, and
phenobarbital, caspofungin dosage should be increased
to 70 mg/day in patients taking these medications.
Tacrolimus serum levels may decrease with concomitant
administration of this echinocandin [4]. Micafungin may
increase levels of sirolimus, nifedipine, and cyclosporine.
For IC, a loading dosage for caspofungin (70 mg/day) and
anidulafungin (200 mg/day) is necessary. The maintenance dosage for caspofungin, micafungin, and
anidulafungin is 50, 100, and 100 mg/day, respectively [3].
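A compact way to summarize the echinocandin dosing just listed is a simple lookup; the sketch below mirrors the loading and maintenance doses and the caspofungin interaction adjustment described above. The drug names and doses come from the text; the code structure itself is illustrative only.

```python
# Illustrative lookup of the echinocandin loading/maintenance doses listed above
# (fixed, not weight-based). Not clinical guidance.

ECHINOCANDIN_DOSING_MG = {
    # drug: (loading dose, usual maintenance dose per day)
    "caspofungin": (70, 50),
    "micafungin": (None, 100),      # no loading dose required
    "anidulafungin": (200, 100),
}

def caspofungin_maintenance(enzyme_inducer_present: bool) -> int:
    """Daily caspofungin maintenance dose in mg, increased to 70 mg when rifampin,
    phenytoin, carbamazepine, or phenobarbital is co-administered (per the text)."""
    return 70 if enzyme_inducer_present else 50

print(ECHINOCANDIN_DOSING_MG)
print(caspofungin_maintenance(enzyme_inducer_present=True))   # 70
```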
Antifungal Therapy: A Patient-Based
Approach
All current antifungals have been shown to be either equivalent or non-inferior to each other in several studies that
included critically ill patients [3, 4]. In these clinical trials,
success of therapy ranged from 60% to 83%. The high
incidence of adverse events with polyenes led to a higher
incidence of therapy discontinuation. Due to this potential
for toxicity, several international recommendations considered fluconazole and echinocandins as first-line therapy for
IC, leaving polyenes as a valid alternative [2, 3].
The hemodynamic status of the patient is an important criterion for selection of empiric antifungal therapy.
In hemodynamically stable patients without organ dysfunction, fluconazole is a safe choice. Alternative drugs
are echinocandins or amphotericin B. In contrast, hemodynamically unstable patients with severe sepsis or septic
shock should be treated with a fungicidal, broad spectrum
agent with a good safety profile and, therefore, an
echinocandin is the first choice. Alternatively, a lipid
formulation of amphotericin B may be used [2, 3].
The likelihood of a patient being infected with an azole-resistant Candida spp. is very difficult to predict but must be taken into account. Colonization by an azole-resistant species, previous exposure to an azole, or admission to an ICU with a high prevalence (>15–20%) of these species should lead the physician to prescribe an echinocandin (or, as an alternative, amphotericin B) and avoid azoles [2].
The presence of organ dysfunctions is an important
issue. Fluconazole dosage should be reduced in patients
with renal dysfunction, and IV voriconazole should not be
used in patients with creatinine clearance lower than
50 ml/min. Caspofungin and voriconazole dosages should
be adjusted in patients with liver impairment [3].
As azoles and echinocandins, except anidulafungin,
have important drug–drug interactions, concomitant
therapy should also influence antifungal choice. Adequate penetration of the antifungal to the source of infection
is crucial. For instance, azoles penetrate well in the CNS
and in the eye while echinocandins do not. Higher dosages
may be necessary for the treatment of fungal endocarditis
if an echinocandin is used [3].
The ability of Candida spp. to adhere to inert and biological
surfaces is associated with virulence. Echinocandins and
polyenes are the only classes of antifungals with high
capacity to act in Candida biofilms. Intravenous catheter
removal is strongly recommended for non-neutropenic
patients with candidemia. This strategy is associated not
only with shorter duration of candidemia but also with
reduced mortality [2, 3].
The concept of transition or step-down is also
recommended. If the patient is clinically stable and the
isolate is azole-susceptible, a switch from an echinocandin
or an amphotericin B formulation to fluconazole is indicated. Voriconazole is recommended as step-down oral
therapy for selected cases of IC due to Candida krusei or
voriconazole-susceptible Candida glabrata [2, 3].
In the management of documented IC, an echinocandin
is the preferred agent for the treatment of C. glabrata
infections. For infection due to C. parapsilosis, fluconazole
is recommended. Yet, if the patient initially received an
echinocandin, is clinically improving, and follow-up cultures are negative, continuing the use of an echinocandin
is reasonable. IC by C. albicans or C. tropicalis may be
treated with fluconazole as long as the patient is not in
severe sepsis or septic shock [2, 3].
Regarding deep organ candidiasis, namely endocarditis, meningitis, osteomyelitis, and endophthalmitis,
amphotericin B with or without 5-flucytosine is the preferred treatment in unstable patients. Fluconazole may be
used in stable patients or for step-down therapy in these
situations [3, 5].
Combination Therapy
The rationale for the use of combination therapy is based
on the hypothesis that efficacy can be improved when
drugs with different mechanisms of action are used. The
combination of antifungals may be used in forms of deep
organ candidiasis as stated above. In a study recently
conducted by Rex et al. comparing fluconazole plus amphotericin B with fluconazole alone for patients with
candidemia, combination therapy resulted in a better
response rate (69% vs. 56%), especially in patients with
APACHE II score between 10 and 22, and more rapid
clearance of Candida from blood, but amphotericin B
was associated with significant toxicity [2]. Another
study has shown that combination therapy of the antibody
to Heat Shock Protein (HSP) 90 with L-Amb is superior to
L-Amb monotherapy [5]. In contrast, the usefulness of
adding echinocandins to fluconazole may be limited due
to a possible antagonism demonstrated in an in vitro
Candida biofilm model [5]. To date, the use of combination antifungal therapy in patients with IC is not
recommended, and further studies are required [2].
Duration
In candidemia without obvious metastatic complications,
treatment should be continued for 2 weeks after the last
positive blood culture and resolution of symptoms. However, the duration of antifungal therapy must be
prolonged in endophthalmitis and in CNS, osteoarticular, and cardiovascular Candida infections [3].
Evaluation and Assessment
The diagnosis of IC is still a major challenge in the ICU, and
it is often made late in the course of the infection. Clinical
manifestations are often nonspecific, and, frequently, it is
hard to differentiate colonization from infection. The current “gold standard” for the diagnosis of IC is either
a positive culture specimen from a sterile site or characteristic histopathology. These two methods have limited
sensitivity. Blood cultures are known to be negative for
around 50% of patients with IC, and improvements in
blood culture technique have increased the sensitivity to
70%, at best.
The difficulties of clinically recognizing Candida infections together with the paramount importance of early
initiation of treatment favored the search for predictive
factors of fungal infection on which early empiric antifungal treatment should be based. Recognized risk factors for
IC are: severity of illness (APACHE II score), neutropenia,
colonization with Candida spp., presence of central
venous catheter, parenteral nutrition, ICU length of stay ≥7 days, prior abdominal surgery, previous broad-spectrum antibiotic therapy, hemodialysis or renal failure,
and cancer chemotherapy [1].
In order to improve the risk factor–driven approach,
several authors have focused on combining risk factors
to develop predictive algorithms and scoring systems
that may help physicians to identify patients who will
benefit from early antifungal therapy. Pittet et al. in
a prospective cohort study identified two independent
risk factors that predicted subsequent invasive Candida
infection: the severity of illness assessed by the APACHE II
score and the intensity of Candida spp. colonization
defined as the colonization index (threshold for intervention set at 0.5). The corrected index (product of
the colonization index times the ratio of the number of
distinct body sites showing heavy growth to the total number of distinct body sites growing Candida spp.) with a threshold of 0.4 was associated with 100% sensitivity and
specificity [1, 4].
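A minimal sketch of the colonization index and corrected colonization index described by Pittet et al., as summarized above; the site counts in the example are invented for illustration and this is not a validated implementation.

```python
# Illustrative sketch of the Pittet colonization index and corrected index
# described above; example site counts are invented. Not a validated tool.

def colonization_index(colonized_sites: int, sampled_sites: int) -> float:
    """Fraction of distinct body sites growing Candida spp."""
    return colonized_sites / sampled_sites

def corrected_index(colonized_sites: int, sampled_sites: int, heavy_growth_sites: int) -> float:
    """Colonization index multiplied by the ratio of sites showing heavy growth
    to all sites growing Candida (threshold 0.4 in the original study)."""
    ci = colonization_index(colonized_sites, sampled_sites)
    return ci * (heavy_growth_sites / colonized_sites)

# Example: 5 sites sampled, 3 growing Candida, 2 of them with heavy growth
print(colonization_index(3, 5))   # 0.6  (at or above the 0.5 intervention threshold)
print(corrected_index(3, 5, 2))   # ~0.4 (meets the 0.4 corrected-index threshold)
```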
In a retrospective cohort analysis with prospective
validation, Dupont et al. developed a predictive score for
the isolation of yeast from peritoneal fluid in critically ill
patients with peritonitis. In patients with three of four
independent risk factors (female gender, upper GI tract
origin, intraoperative cardiovascular failure, and antimicrobial therapy at least 48 h before onset of peritonitis),
the positive and negative predictive values for isolation of
yeast were 67% and 72%, respectively. Leon et al., based on a large prospective, observational, multicentre cohort study, developed the bedside “Candida score”: total parenteral nutrition (1 point), surgery (1 point), multifocal colonization (1 point), and severe sepsis/septic shock (2 points). A Candida score ≥3 points was associated with a 7.75-fold increased likelihood of proven IC and accurately predicted (sensitivity 81%, specificity 74%) patients who could benefit from early antifungal therapy; IC is highly improbable in a Candida-colonized, non-neutropenic, critically ill patient with a Candida score <3 [1, 4, 5].
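A minimal sketch of the bedside “Candida score” just described (total parenteral nutrition, surgery, and multifocal colonization scoring 1 point each, severe sepsis/septic shock 2 points, with a cutoff of 3); the example patient is hypothetical and this is not a validated implementation.

```python
# Illustrative sketch of the Leon et al. "Candida score" as summarized above.
# Example patient is hypothetical; not a validated implementation.

def candida_score(tpn: bool, surgery: bool, multifocal_colonization: bool,
                  severe_sepsis_or_shock: bool) -> int:
    return (1 * tpn) + (1 * surgery) + (1 * multifocal_colonization) + (2 * severe_sepsis_or_shock)

# Example: surgical patient on TPN with multifocal colonization but no severe sepsis
score = candida_score(tpn=True, surgery=True, multifocal_colonization=True,
                      severe_sepsis_or_shock=False)
print(score, "-> consider early antifungal therapy" if score >= 3 else "-> IC unlikely")
```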
Ostrosky-Zeichner et al. developed a prediction rule
that can be applied to the 10% of patients who stay in the ICU ≥4 days. The presence of at least one major risk factor (previous antibiotic therapy or presence of a central
venous catheter) and at least two minor risk factors (total
parenteral nutrition, dialysis, any major surgery, pancreatitis, steroids, use of other immunosuppressive agents)
was associated with a low sensitivity (34%) but with
a high negative predictive value (97%) [1, 4, 5].
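A minimal sketch of the Ostrosky-Zeichner rule as summarized above (at least one major plus at least two minor criteria in patients staying in the ICU at least 4 days); the criterion labels used here are shorthand for illustration, not validated terminology.

```python
# Illustrative sketch of the Ostrosky-Zeichner prediction rule summarized above:
# >= 1 major criterion (prior antibiotics or central venous catheter) plus
# >= 2 minor criteria, in ICU stays of >= 4 days. High negative predictive value
# (~97%), low sensitivity (~34%). Labels are illustrative shorthand.

from typing import Iterable

MINOR_CRITERIA = {"tpn", "dialysis", "major_surgery", "pancreatitis",
                  "steroids", "other_immunosuppression"}

def rule_positive(major_criteria_met: int, minor_criteria: Iterable[str]) -> bool:
    minors = set(minor_criteria) & MINOR_CRITERIA
    return major_criteria_met >= 1 and len(minors) >= 2

print(rule_positive(1, ["tpn", "dialysis"]))   # True
print(rule_positive(1, ["pancreatitis"]))      # False
```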
At present, no single predictive rule provides a gold
standard algorithm for IC, and further prospective validation in a clinical setting is necessary.
New methods to avoid delays in appropriate antifungal therapy are therefore needed. (1,3)-β-D-glucan (▶ BG) is a cell-wall component of most fungi, except Zygomycetes and Cryptococcus, which is released during tissue invasion. The BG test seems to be a promising tool for
early diagnosis of IC given its high sensitivity (from 55%
to 100%) and specificity (78–100%). Positive results occur
not only in patients who have candidiasis, but also in
aspergillosis, gastrointestinal colonization with Candida
spp., endemic mycoses, and Pneumocystis jiroveci pneumonia. However, its use in the critically ill patient has
two main limitations: it has not yet been validated in non-neutropenic patients, and there is a significant rate of false-positive results (bacteremia, surgical gauze, albumin, hemodialysis, and antibiotics such as piperacillin) [1, 4]. In addition, the cutoff for a positive result is not well defined, ranging from 20 to 75 pg/ml.
Leon et al. showed that procalcitonin increased the
predictive value of “Candida score,” as patients with
multifocal colonization by Candida spp., staying more
than 7 days in the ICU, who developed IC, showed significantly higher values of this biomarker.
The detection of Candida DNA by ▶ PCR holds great
promise as a sensitive and potentially rapid diagnostic test,
but, unfortunately, methodologies have not been standardized and only limited evaluations have been performed in
clinical specimens. McMullan et al. conducted a prospective
study of 145 consecutive non-neutropenic patients admitted to a single adult ICU. Serum was drawn twice weekly
and fungal DNA amplified using a real-time PCR capable
of detecting Candida spp. This assay showed a high sensitivity (71–99%) and specificity (99–100%) and an excellent positive (83–100%) and negative (99–100%)
predictive value. These data suggest that this assay may
perform well for the rapid diagnosis of candidemia in
non-neutropenic adults, providing results on the same
day [1, 4].
Since both speed and the distinction between albicans and non-albicans species are important, new techniques are necessary. Indeed, rapid identification and differentiation of Candida albicans from Candida glabrata can be achieved within 3 h using a commercial peptide nucleic acid fluorescent in situ hybridization (PNA FISH) technique [4].
After-care
Once a patient has been started on antifungal treatment,
it is advisable to repeat blood cultures after 4–5 days
to monitor response and breakthrough infections. All
patients with candidemia should undergo funduscopic
examination within the first week after initiation of therapy to rule out endophthalmitis, which occurs in about 10% of patients with candidemia and affects antifungal selection and duration of therapy.
Patients showing suboptimal responses in spite of
adequate antifungal therapy should be evaluated for several common causes of therapeutic failure, namely: lack of
removal of an intravascular catheter, presence of other
vascular niduses (e.g., an infected heart valve or
endovascular graft), seeding of a protected site (e.g.,
endophthalmitis, osteomyelitis, and hepatosplenic disease)
or other prosthetic devices (e.g., artificial joints and peritoneal dialysis catheters).
Prognosis
Invasive candidiasis is associated with a crude mortality rate
of around 60%. As underlying diseases contribute to mortality, the estimated “attributable” mortality is usually
reported as 40–49%. However, attributable mortality varies
depending on study design: 20–50% in retrospective case-control studies and 5–7% in prospective clinical trials [4].
Tumbarello et al., in a retrospective analysis, identified
three risk factors for mortality: inadequate antifungal
therapy, infection with biofilm-forming Candida species,
and APACHE III score [1]. In the study performed by
Morrell et al., APACHE II score, prior use of antibiotics,
and initiation of antifungal therapy more than 12 h after
the first positive blood culture were independent determinants of hospital mortality [4].
IC and candidemia are also associated with prolonged ICU (12.7 days) and hospital (15.5 days) stays and with
increased costs [1]. The extra cost of an episode of
candidemia in adults has been estimated as 44,000 USD
and 16,000€. These data underscore the need for improved
means of prevention and treatment of candidemia.
References
1. Guery BP, Arendrup MC, Auzinger G, Azoulay E, Sá MB, Johnson EM, Müller E, Putensen C, Rotstein C, Sganga G, Venditti M, Crespo RZ, Kullberg BJ (2009) Management of invasive candidiasis and candidemia in adult non-neutropenic intensive care unit patients: part I. Epidemiology and diagnosis. Intensive Care Med 35:55–62
2. Guery BP, Arendrup MC, Auzinger G, Azoulay E, Sá MB, Johnson EM, Müller E, Putensen C, Rotstein C, Sganga G, Venditti M, Crespo RZ, Kullberg BJ (2009) Management of invasive candidiasis and candidemia in adult non-neutropenic intensive care unit patients: part II. Treatment. Intensive Care Med 35:206–214
3. Pappas PG, Kauffman CA, Andes D, Benjamin DK Jr, Calandra TF, Edwards JE Jr, Filler SG, Fisher JF, Kullberg BJ, Ostrosky-Zeichner L, Reboli AC, Rex JH, Walsh TJ, Sobel JD (2009) Clinical practice guidelines for the management of candidiasis: 2009 update by the Infectious Diseases Society of America. Clin Infect Dis 48:503–535
4. Playford EG, Eggimann P, Calandra T (2008) Antifungals in the ICU. Curr Opin Infect Dis 21:610–619
5. Hollenbach E (2008) Invasive candidiasis in the ICU: evidence based and on the edge of evidence. Mycoses 51(2):25–45
CAP
Community-acquired pneumonia: Pneumonia occurring
in any patient who does not meet the criteria for HCAP,
HAP, or VAP.
Capillaries
▶ Microcirculation
Capillary Refill
BRIAN G. HARBRECHT
Department of Surgery, University of Louisville,
Louisville, KY, USA
Synonyms
Circulation; Microvascular perfusion; Perfusion
Definition
Capillary refill is a subjective, noninvasive assessment of
peripheral cutaneous perfusion used to evaluate the adequacy of the regional or systemic circulation. A test of
capillary refill involves manual compression, typically on
the nailbed or distal skin of an extremity, to blanch the
skin followed by rapid release of the pressure. If it takes
<2 s for the skin to return to normal pink coloration,
capillary refill is adequate. Delayed return of normal coloration to the area of compression (>2 s) suggests an
abnormality in either the regional or systemic circulation.
By definition, capillary refill measures or assesses
the status of the perfusion of the skin of an extremity.
In clinical practice, it is often used to evaluate the presence
or absence of shock. The ability of capillary refill to reflect
the adequacy of the systemic circulation is based on the
body’s compensatory responses to shock [1]. When global
tissue hypoperfusion is present due to hypovolemia, cardiac dysfunction, or other causes, increased sympathetic
activation leads to α1-adrenergic-mediated peripheral
vasoconstriction. This peripheral vasoconstriction increases
peripheral arteriolar resistance and shunts blood from the
less essential peripheral vascular beds (skin, splanchnic
organs) to more essential visceral organs such as the heart
and the brain [1]. Increased sympathetic tone also constricts capacitance vessels in selected vascular beds to
increase venous return. These responses contribute to
the pale appearance of the skin and its cool, clammy
consistency to the touch.
Several environmental and patient factors can introduce variability into assessments of capillary refill [2, 3].
These factors include patient age, gender, and the ambient
temperature the patient is exposed to. Much of the variability in capillary refill between individuals, however,
appears to be due to factors that are difficult to define.
Despite the subjective element in its interpretation and the
relatively nonspecific nature of the test, assessment of capillary refill remains a commonly performed component of
the physical examination of patients. It has been reported
to correlate well with hypovolemia in selected populations
of patients such as infants. Capillary refill remains
a component of the physical assessment of injured patients
in the Advanced Trauma Life Support® course from the
American College of Surgeons and is included in guidelines for the assessment of perfusion in critically ill
patients [4, 5].
Technical issues can interfere with the accuracy of
using capillary refill as an index of systemic perfusion.
Severe hypothermia can induce intense peripheral vasoconstriction that can interfere with capillary perfusion of
peripheral tissues even though the intravascular volume
may be adequate. Adequate illumination is essential to
determine when normal coloration returns to the skin
after compression. While generally not a problem in the
Emergency Department or Intensive Care Unit, this limitation hinders applicability of capillary refill in the
prehospital setting, at night, or in austere environments
such as military field triage. This limitation can be significant since a technically simple, readily available test to
assess perfusion may be most useful in these environments
where physical examination is the only tool available to
assess the patient. As mentioned above, variability
between individuals can also exist independent of the
status of the systemic circulation due to patient-specific
factors that remain difficult to define. The assessment of
capillary refill is particularly useful in the field of orthopedics where casts, splints, and braces may limit access to
peripheral pulses to assess perfusion. The examiner should
keep in mind, however, that peripheral vascular disease,
peripheral vascular injuries, or other disorders of regional
blood flow can interfere with the ability of capillary refill
to reflect the status of the systemic circulation.
As technology has improved, a variety of modalities
have been developed to assess peripheral perfusion. These
technologies measure different endpoints in peripheral
tissues that reflect distal tissue circulation. Their ability
to measure systemic perfusion is based, in part, on the
same compensatory physiologic responses that govern
capillary refill. These modalities include near-infrared
spectroscopy (NIRS) to measure peripheral muscle tissue
oxygen saturation (StO2); microprobes or transcutaneous sensors to measure arterial pH, arterial oxygen pressure, and arterial carbon dioxide pressure; and laser Doppler
flowmetry to measure cutaneous capillary blood flow.
Clinical trials on the use of NIRS to monitor StO2 as an
index of the adequacy of shock resuscitation in trauma
patients have been performed and the devices are commercially available for clinical use. Laser Doppler
flowmetry has also been utilized clinically in selected centers, primarily to monitor microvascular perfusion of flaps
in free tissue transfer operations (free flaps). One could
even consider sublingual capnometry, gastric tonometry,
and oxygen consumption/oxygen delivery-based goal-oriented therapy as extremely sophisticated technologies
designed to measure tissue or peripheral perfusion analogous to the capillary refill test [3]. Unfortunately, none of
these modalities are universally accepted for assessing the
adequacy of the peripheral circulation or as an index of the
adequacy of resuscitation from shock in all cases. Several
of these technologies continue to undergo active investigation in both the clinical and the basic science environment. Whether these tools will prove to be more useful
than simple clinical assessment of the patient remains
undetermined.
Differential Diagnosis
When used to assess the adequacy of peripheral perfusion as an index of the systemic circulation, a normal capillary refill test reassures the clinician that systemic perfusion is sufficient to perfuse even the least essential parts of the body. Abnormalities of capillary refill should heighten one’s suspicion for inadequate perfusion, but they are fairly nonspecific. Additional parameters of diminished perfusion should be sought, such as altered mental status from decreased cerebral perfusion, the location and quality of peripheral pulses, the character of the skin to palpation (cool and clammy versus warm and dry), and the appropriate clinical setting for a patient in shock. Once shock is suspected, resuscitative maneuvers should be implemented while an etiology is sought. The clinician should keep in mind that the body’s compensatory responses to circulatory disturbances will act to restore intravascular volume and maintain perfusion to key visceral systems through increased heart rate, increased contractility, and activation of neuroendocrine responses. Hypotension is a relatively late development, appearing only when these compensatory mechanisms have been overwhelmed. The presence of shock should not be equated with hypotension, since significant hypoperfusion can occur before systemic blood pressure falls.
As previously discussed, the clinician needs to be aware of potential confounders that can result in abnormal capillary refill in the face of adequate intravascular volume. Hypothermia, peripheral vascular disease, age, and poor ambient light can all interfere with the ability of capillary refill to reflect the status of the systemic circulation. One should also keep in mind that constricting casts or bandages, proximal peripheral vascular injuries, or proximal vascular thromboses may produce regional abnormalities of perfusion in the face of a normal systemic circulation. Assessment of the opposite extremity or a different peripheral vascular bed will prove useful in these cases.
As with many other tests used to evaluate perfusion and shock resuscitation, a single measurement may provide useful information, but serial assessments over time are frequently optimal to gauge the response to therapy. Other parameters to assess perfusion and the systemic circulation are discussed in greater detail in other sections of this work. Repetitive assessment of a number of clinical endpoints (capillary refill, heart rate, urine output, base deficit, etc.) will often help the clinician to determine whether shock persists or homeostasis is being restored.

References
1. Harbrecht BG, Forsythe RM, Peitzman AB (2008) Management of shock. In: Feliciano DV, Mattox KL, Moore EE (eds) Trauma, 6th edn. McGraw-Hill, New York
2. Anderson B, Kelly AM, Kerr D, Clooney M, Astat DJ (2008) Impact of patient and environmental factors on capillary refill time in adults. Am J Emerg Med 26:62–65
3. Lima A, Bakker J (2005) Noninvasive monitoring of peripheral perfusion. Intensive Care Med 31:1316–1326
4. Lima A, Jansen TC, van Bommel J, Ince C, Bakker J (2009) The prognostic value of the subjective assessment of peripheral perfusion in critically ill patients. Crit Care Med 37:934–938
5. Brierley J, Carcillo JA, Choong K et al (2009) Clinical practice parameters for hemodynamic support of pediatric and neonatal septic shock: 2007 update from the American College of Critical Care Medicine. Crit Care Med 37:666–688
Capnograph
▶ End-Tidal CO2

Capnography
▶ End-Tidal CO2
▶ Pulse Oxymetry and CO2 Monitoring

Capnometry
▶ End-Tidal CO2

Capsaicin – 8-Methyl-N-vanillyl-6-nonenamide
Capsaicin, 8-methyl-N-vanillyl-6-nonenamide, is the active component of chili peppers, plants which belong to the genus Capsicum. It is an irritant for animals and produces a sensation of burning in tissues that it contacts. Capsaicin selectively binds to a protein known as TRPV1 that is located on the membrane of heat- and pain-sensing neurons. Prolonged activation of these neurons depletes presynaptic substance P, one of the body's neurotransmitters for pain and heat, and the sensation of pain is reduced.

Carbonic Anhydrase Inhibitors
▶ Diuretics for Management of AKI

Cardiac and Endovascular Infections
DONALD P. LEVINE, PATRICIA D. BROWN
Department of Medicine, Wayne State University, Detroit, MI, USA

Synonyms
Bacterial endocarditis; Endocarditis; Fungal endocarditis

Definition
Infective endocarditis (IE) is defined as infection involving the endocardium. Although any part of the endocardial surface may be involved, the heart valves are affected most frequently. Endocarditis may also occur at the site of a septal defect or a site where the endocardium has been disrupted by abnormal flow or intracardiac devices. The term infective endocarditis is now preferred to the older terminology, bacterial endocarditis, as it is recognized that a wide variety of pathogens may cause endocarditis. The pathologic lesion at the site of infection is the vegetation, which consists of fibrin, platelets, and the offending microorganism; a paucity of inflammatory cells is present. Injury to the endothelium results either in direct infection by organisms present, even transiently, in the bloodstream, or in the formation of a platelet-fibrin thrombus that may then become secondarily infected.

Treatment
Comprehensive, evidence-based guidelines for the
diagnosis and management of IE are published by the
American Heart Association (AHA) which were last
updated in 2005 [1]. Probably the most important
development impacting the initial empiric therapy of
suspected IE is the emergence of Staphylococcus aureus as
the most common etiology of native valve IE in most
centers, reflecting the fact that a significant proportion
of IE cases are now health care associated infections [2].
Methicillin-resistant S. aureus (MRSA), both community-acquired and health care associated strains, must be considered a potential etiology of IE, particularly in patients
whose severity of illness is sufficient to warrant admission
to the intensive care unit. Because receipt of initial empiric
therapy that covers the causative organism is an important
predictor of favorable outcome in critically ill patients
with sepsis, it is anticipated that even patients with
suspected IE will receive broad-spectrum antimicrobial
therapy initially. Once the diagnosis is confirmed and the
causative organism identified, antibiotic therapy should
be revised to a regimen known to be effective for the
treatment of IE due to the isolated pathogen. The recommendations discussed below are targeted toward patients
with native valve IE (NVE); the treatment of prosthetic
valve infective endocarditis (PVE) is discussed separately.
Viridans Group Streptococci and
Streptococcus bovis
The appropriate regimen for IE caused by viridans
streptococci and S. bovis depends on the minimum inhibitory concentration (MIC) to penicillin for the isolate.
Increasing penicillin MICs among these streptococci are well described; therefore, it is imperative that MIC values
be available and reviewed before antibiotic therapy is
adjusted. Highly susceptible isolates (MIC ≤0.12 μg/ml)
can be treated with aqueous crystalline penicillin
G sodium (12–18 million units per day, given by continuous infusion or divided into 4 or 6 equal doses) or
ceftriaxone (2 g every 24 h) for 28 days. The duration
of therapy may be shortened to 14 days if gentamicin
(3 mg/kg every 24 h) is used; however, this “short course”
regimen should not be used in patients with cardiac or
extra-cardiac complications of IE or in patients at increased
risk of aminoglycoside related nephrotoxicity. Vancomycin
for 28 days is an alternative in patients with severe penicillin
allergy. Clinicians are reminded that the strong association
between S. bovis IE and colonic lesions (including malignancy) mandates an evaluation of the gastrointestinal
tract once the patient’s clinical condition has stabilized.
Viridans streptococci and S. bovis isolates with penicillin MIC >0.12 to 0.5 μg/ml should be treated with
penicillin or ceftriaxone for 28 days with single daily dose
gentamicin for the first 14 days of therapy.
Viridans streptococci with penicillin MIC >0.5 μg/ml
along with Abiotrophia, Granulicatella, and Gemella species should be managed as for enterococcal IE (discussed
below). If vancomycin is used, combination with gentamicin is not necessary.
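The MIC-driven branching just described can be summarized schematically. The sketch below is illustrative only: the function name, the regimen strings, and the unit assumption (penicillin MIC in μg/ml) are ours, and the cutoffs are simply transcribed from the text above; it is not a clinical decision tool.

```python
def viridans_ie_regimen(penicillin_mic: float) -> str:
    """Map a viridans streptococcal/S. bovis penicillin MIC (assumed μg/ml)
    to the native-valve regimen category described in the text (sketch only)."""
    if penicillin_mic <= 0.12:
        # Highly susceptible: penicillin G or ceftriaxone for 28 days,
        # or 14 days when gentamicin is added ("short course").
        return "penicillin G or ceftriaxone x 28 d (14 d if combined with gentamicin)"
    elif penicillin_mic <= 0.5:
        # Relatively resistant: 28 days of penicillin or ceftriaxone,
        # plus single daily dose gentamicin for the first 14 days.
        return "penicillin or ceftriaxone x 28 d + gentamicin for first 14 d"
    else:
        # MIC >0.5 (and Abiotrophia, Granulicatella, Gemella):
        # manage as enterococcal IE; no gentamicin needed if vancomycin is used.
        return "manage as enterococcal IE"


if __name__ == "__main__":
    for mic in (0.06, 0.25, 1.0):
        print(f"MIC {mic} μg/ml -> {viridans_ie_regimen(mic)}")
```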
IE due to S. pyogenes can be treated with 28 days of
penicillin, as outlined above. Cefazolin or ceftriaxone are
alternatives; vancomycin should only be utilized in cases
of severe β-lactam allergy. IE due to groups B, C, or
G streptococci is managed similarly; some experts do
recommend the addition of gentamicin to the regimen
for the first 14 days of therapy and consideration of
a more prolonged (42 day) total course of treatment for
these three pathogens.
Although uncommon, S. pneumoniae remains an
important pathogen in IE. Isolates with penicillin MICs up to 4 μg/ml can be successfully treated with high-dose penicillin (up to 24 million units/day); if the patient has concomitant meningitis, cefotaxime or ceftriaxone must be used for isolates with penicillin MICs ≥0.1 μg/ml (provided
the isolate is susceptible to these agents); penicillin and
cephalosporin resistant isolates are generally managed with
vancomycin in combination with cefotaxime or ceftriaxone.
Enterococci
Enterococcal isolates suspected of causing IE must undergo testing for penicillin (or ampicillin) and vancomycin MICs as well as testing for the presence of high-level resistance to gentamicin and streptomycin. Relative resistance to penicillin (ampicillin) and vancomycin is an intrinsic property of enterococci; therefore, serious infections such as IE due to enterococci are optimally managed with the addition of an aminoglycoside for synergy. IE due to strains susceptible to penicillin and gentamicin should be treated with ampicillin (12 g daily, divided into six equal doses) or aqueous crystalline penicillin G sodium (18–30 million units daily, continuously or divided into six equal doses) plus gentamicin 3 mg/kg daily in two or three divided doses (adjusted for peaks of 3–5 μg/ml with a trough of <1 μg/ml). Four weeks of therapy is sufficient for patients whose symptoms have been present less than 3 months; 6 weeks of therapy is recommended for those with symptoms for more than 3 months. If the organism is sensitive, vancomycin can be substituted in patients with penicillin allergy; however, these patients should receive 6 weeks of therapy, regardless of the duration of symptoms. Streptomycin should be used for isolates that have high-level resistance to gentamicin, but not to streptomycin; 15 mg/kg every 24 h divided into two doses is recommended in patients with normal renal function.
Optimal therapy of isolates that demonstrate susceptibility to penicillin but high-level resistance to gentamicin and streptomycin is not well established. Several studies support the use of high-dose ampicillin (12 g/day) in combination with ceftriaxone (2 g every 12 h) in these cases; therapy should be given for 6 weeks. This regimen may also be a reasonable alternative for patients with aminoglycoside-susceptible isolates who develop progressive nephrotoxicity during therapy.
Optimal therapy for enterococcal isolates that are resistant to penicillins, vancomycin, and aminoglycosides is also unknown. For infections due to Enterococcus faecium, the AHA guidelines recommend either linezolid (1,200 mg/day in two divided doses) or quinupristin-dalfopristin (22.5 mg/kg per day divided into three equal doses) for a minimum of 8 weeks. Resistant E. faecalis infections may be treated with ceftriaxone plus ampicillin or imipenem-cilastatin plus ampicillin for a minimum of 8 weeks. Surgery should be a strong consideration for the management of these infections for which synergistic antimicrobial therapy is not possible. An increasing number of case reports have documented successful treatment with daptomycin in such cases, although therapeutic failures have also been reported and additional data are clearly needed.

Staphylococci
As discussed above, S. aureus is now the most common cause of IE in the developed world, and critically ill patients with suspected IE should receive initial empiric therapy that includes coverage for this pathogen, including the possibility of MRSA. Although typically considered important pathogens mainly in early PVE, coagulase-negative
staphylococci (CoNS) have emerged as important pathogens
in NVE, causing almost 8% of such infections in non-injection drug users with IE in a recent large prospective
study. Almost half of NVE due to CoNS is health care
associated; medical comorbidities, long-term intravenous
catheter use, and recent invasive procedures appear to be
risk factors. Among the CoNS, S. lugdunensis appears to be
particularly virulent, often associated with metastatic
infection as well as periannular extension of the infection.
Surgical treatment may be required more frequently
in patients with IE due to CoNS than in patients with
S. aureus infections.
Patients with IE due to methicillin-susceptible S. aureus
(MSSA) should be treated with nafcillin (12 g/day divided
into four or six equal doses); cefazolin (6 g/day divided
into three equal doses) is an alternative for patients
with non-life-threatening penicillin allergy. Vancomycin can be used in patients with severe β-lactam allergy. Although clinicians may be tempted to substitute vancomycin for a β-lactam, particularly in patients with reduced renal function, because of the convenience of less frequent dosing, vancomycin is inferior to the β-lactams for the treatment of susceptible isolates; therefore, this practice is
not acceptable. It is very important to note that while the
AHA guidelines recommend vancomycin dosing to
achieve serum trough concentrations of 10–15 μg/ml,
a more recently published consensus review recommends
a vancomycin target trough of 15–20 μg/ml for serious
infections such as IE [3]. The AHA guidelines list the
addition of 3–5 days of gentamicin therapy as optional,
noting that a clinical benefit of initial aminoglycoside
therapy in S. aureus IE has not been proven. Recently,
initial low-dose gentamicin for S. aureus bacteremia and
native valve IE was associated with significant risk of
nephrotoxicity. Given the lack of data regarding benefit
in this setting, we do not recommend it.
Vancomycin is the recommended therapy for IE due to
MRSA. However, a growing body of evidence indicates
that patients with serious infections due to MRSA whose
isolates have vancomycin MICs >1 μg/ml respond less
favorably to vancomycin therapy than those due to isolates
with lower MICs. Daptomycin achieved clinical success
rates that were non-inferior to vancomycin for bacteremia
and right-sided endocarditis due to MRSA; data for
the use of daptomycin in the treatment of left-sided IE
are derived mainly from observational studies and case
reports. There are very limited data to support the use of
other agents for MRSA IE. Success has been reported with
the use of trimethoprim-sulfamethoxazole, doxycycline,
minocycline, linezolid, and quinupristin-dalfopristin.
Clearly, the optimal management of MRSA IE, especially
infections due to isolates with higher vancomycin MICs,
remains to be defined.
Despite in vitro susceptibility, the addition of rifampin
to the regimen for the treatment of native valve IE due to
S. aureus is not recommended.
In general, 6 weeks of therapy is recommended for
patients with S. aureus IE; patients with uncomplicated
infections can be treated with 4 weeks of therapy. Injection
drug users (IDUs) with uncomplicated right-sided IE due
to MSSA can be successfully managed with a 2 week course
of nafcillin in combination with an aminoglycoside; the
presence of septic pulmonary emboli does not preclude
the use of “short course” therapy in this setting.
NVE due to CoNS should be treated with regimens
similar to those outlined above, based on the in vitro
susceptibility data.
Gram-Negative Pathogens
Native valve IE due to organisms of the HACEK
group (Haemophilus, Actinobacillus, Cardiobacterium,
Eikenella, and Kingella) should be treated with ceftriaxone
(2 g daily); ampicillin-sulbactam and ciprofloxacin are
alternatives. A 4 week course of antibiotic therapy is
recommended. Non-HACEK Gram-negatives account
for less than 2% of cases of NVE. Therapy should be
based on in vitro susceptibility data; surgery is frequently
required for successful management.
Culture Negative Infective Endocarditis
Blood cultures may be negative in patients with IE
due to the presence of a fastidious bacterial pathogen, a
non-bacterial pathogen or the receipt of antibiotic therapy
before blood cultures are obtained. The latter reason is
probably most common, particularly among patients who
are critically ill on presentation. The importance of assuring that blood cultures are obtained, even in the most
critically ill patient, prior to the administration of antibiotics cannot be overemphasized. Options for the empiric therapy of culture negative NVE include ampicillin-sulbactam plus gentamicin or vancomycin plus gentamicin plus ciprofloxacin. A recent case series of patients with
culture negative endocarditis underscored the importance
of aminoglycoside therapy in the management of this
infection; patients who did not receive an aminoglycoside
containing regimen had a significantly higher mortality.
Fungal Endocarditis
The majority of cases of fungal IE are due to Candida species.
C. albicans is most common in non-IDUs; non-albicans
candida are more common in IDUs. Recommendations
for management of fungal endocarditis are based mainly
on expert opinion. The most current recommendations
can be found in the guidelines for the management of
candidiasis from the Infectious Diseases Society of America (available at www.idsociety.org). Initial therapy should
consist of amphotericin B, either a standard or a liposomal
preparation, with or without 5-flucytosine, or an
echinocandin. Fungal endocarditis remains a strong indication for valve replacement.
Prosthetic Valve Endocarditis
PVE occurs in 1–6% of patients with a prosthetic valve.
Mechanical and bioprosthetic valves have similar rates of
infection overall; however, mechanical valves have
a higher rate of infection during the first 3 months after
implantation. S. aureus has emerged as the most common
infecting agent, followed by CoNS and streptococci. Initial
empiric antibiotic therapy for critically ill patients
with suspected PVE will likely include broad-spectrum
coverage for both Gram positive and Gram-negative
pathogens; the regimen chosen should always include
coverage for MRSA. If S. aureus or CoNS are confirmed,
nafcillin or vancomycin should be utilized, based on
susceptibility results. Rifampin (900 mg daily in three
divided doses) and gentamicin should be added, although
gentamicin may be discontinued after 2 weeks. Prolonged
(at least 6 weeks) therapy will be required. Therapy for
viridans streptococci and S. bovis isolates with penicillin
MIC ≤0.12 μg/ml is the same as that outlined for
native valve infections except that short course (2 week)
regimens should not be used. The addition of gentamicin
(single daily dose) to a β-lactam for the first 2 weeks of
therapy is optional and the total duration of therapy
should be 6 weeks. PVE due to viridans streptococci and
S. bovis isolates with penicillin MIC >0.12 μg/ml should
be treated with penicillin or ceftriaxone plus gentamicin
(single daily dose) for 6 weeks; vancomycin should only
be utilized for patients with severe β-lactam allergy. The
treatment of enterococcal PVE is the same as for native
valve infections; all regimens should be given for
a minimum of 6 weeks.
Therapy for culture negative PVE depends on
whether the onset is early (less than 1 year since valve
replacement) or late. Empiric therapy for early culture
negative PVE should include vancomycin, gentamicin
(3 mg/kg daily in three divided doses), cefepime, and
rifampin. For late PVE, the regimens outlined for native
valve culture negative IE may be used, with the addition
of rifampin.
Anticoagulation
Anticoagulation has not been shown to provide benefit in
patients with native valve IE, and active IE is considered
a strong contraindication to anticoagulation because of
the potential risk of bleeding from unrecognized central
nervous system (CNS) mycotic aneurysms. The use
of anticoagulation in patients with PVE is much more
controversial. The AHA guidelines recommend continuing anticoagulation in patients with PVE, except in
patients with S. aureus infections who have experienced
a CNS embolic event. Anticoagulation may be cautiously
resumed once these patients have completed 2 weeks of
appropriate antibiotic treatment.
Surgery
In a recently published multicenter cohort study of IE,
almost 50% of patients underwent valvular surgery for the
management of their infection. Despite widespread use
and the belief that surgery improves outcomes in selected
patients, there are virtually no data from randomized
controlled clinical trials regarding appropriate indications
and timing of surgery. Congestive cardiac failure (CHF) is
the most common indication for surgery in IE and the
clinical condition of the patient, not the duration of antibiotic therapy, dictates the timing of surgery. Surgical
intervention should also be considered for infections due
to resistant pathogens for which optimal bactericidal
therapy cannot be devised (e.g., vancomycin resistant
enterococci) and patients with left-sided IE who remain
bacteremic after a week of appropriate antimicrobial therapy, provided that a metastatic focus of infection has been
excluded as a cause of persistent bacteremia [4]. Other
generally accepted indications for surgery include one or
more major embolic events in patients with left-sided IE,
paravalvular extension of infection, and valve perforation
or rupture. Fungal IE has long been considered a strong
indication for valve replacement; however, the availability
of newer and less toxic antifungal agents and the use of oral azoles for long-term suppressive therapy have resulted in clinical success.
The availability of transesophageal echocardiography
(TEE) has resulted in additional recommendations for
surgery including persistence of a vegetation after
a systemic embolic event, anterior mitral leaflet vegetations (particularly those >10 mm in size), and increase
in the size of a vegetation despite appropriate antibiotic
therapy.
In addition to the indications listed above, surgical
therapy should be considered for PVE due to S. aureus,
S. lugdunensis, and early PVE due to other CoNS.
Indications for surgical treatment are likely prevalent
among patients with IE who require admission to the
intensive care unit. The perception that the patient is
“too sick” to undergo surgery often results in a delay of
a potentially lifesaving procedure. Decisions regarding
surgical intervention must be made with input from the
intensivist, cardiologist, infectious diseases specialist, and
the surgeon. The timing of surgical intervention in those
who have had a CNS embolic event, especially if hemorrhagic, is particularly problematic. In addition to the
team outlined above, input from the neurologist or
neurosurgeon will be essential to optimize management
for these patients.
Evaluation
Although it remains an uncommon infection, advances in
medical technology and care have expanded the number
of patients who are at risk for IE. As many as 20% of
individuals with IE have no recognized preexisting cardiac
condition that places them at increased risk for the
infection. The diagnosis should be considered in any
patient with persistent bacteremia, evidence of a systemic
embolic event, or evidence of infection in the setting of
a predisposing cardiac lesion. Up to 25% of IE cases are
health care associated infections. The presenting features
of IE may include stroke and other embolic phenomena,
evidence of metastatic infection such as musculoskeletal
infection or splenic abscess, or CHF, but most patients
have nonspecific manifestations of infection. Elderly
patients with IE are more likely to have been hospitalized
for an invasive procedure before the onset of infection and
have lower rates of embolic events, immune phenomena,
and septic complications. Older individuals are likely to
present acutely with infection due to virulent pathogens
such as S. aureus; the classic peripheral stigmata of IE have
become far less common as a presenting manifestation of
the disease. Nevertheless, meticulous examination of the
patient who presents with evidence of sepsis may reveal
a conjunctival, retinal, or subungual (splinter) hemorrhage or even Janeway lesions or Osler’s nodes, findings
that suggest the diagnosis even before blood cultures turn
positive or the results of echocardiography are available.
The majority (up to 85%) of individuals with IE will have
an audible murmur. Mitral valve involvement is more
common than aortic valve infection. Tricuspid valve IE is
a well-recognized complication of IDU; however, in several recent series left-sided IE was more common than
right-sided infections in this patient population. Tricuspid
valve IE also occurs in non-IDUs with central venous
catheters. Patients with right-sided IE often present with
pulmonary manifestations due to septic pulmonary
emboli. The possibility of IE should be seriously considered in all patients with S. aureus bacteremia. Risk factors
for valve infection in these patients include an unknown
source of bacteremia, presence of a prosthetic valve, persistent fever, and persistent positive blood cultures. The
risk of IE in patients with community-onset enterococcal
bacteremia is also high.
Because of the requirement to provide specific pathogen-directed antimicrobial therapy for a prolonged course, the
necessity of ensuring that a microbiologic diagnosis is confirmed cannot be overemphasized. Two sets of blood cultures
should be obtained prior to the initiation of empiric antimicrobial therapy. One set of blood cultures is defined as
a blood sample drawn at a single time from a single site,
regardless of how many bottles or tubes are submitted from
that sample. In total, 3–4 sets of blood cultures should be
obtained during the first 24 h of evaluation. In patients who
have no or limited peripheral venous access, cultures may be
obtained via an intravascular device; however, a sample
obtained in this manner represents a single set of blood
cultures, even if obtained from more than one port. At
least two sets of blood cultures should be obtained on each
subsequent day to document the persistence or clearing of
bacteremia.
While blood cultures remain the single most important diagnostic test in the evaluation of patients with
suspected IE, echocardiography, particularly TEE, has significantly improved both diagnosis and earlier recognition
of complications of IE. The sensitivity of transthoracic
echocardiography (TTE) for the diagnosis of IE is
60–65%; the sensitivity of TEE is 90–95%. Both have
a specificity of greater than 90%. The superior sensitivity
of TEE is even more significant in the evaluation of PVE.
In patients at high risk of IE or for whom the clinical
suspicion of IE is moderate to high, TEE should be the
initial imaging procedure chosen; the procedure can be
safely performed even in patients who are critically ill.
Previously, the definite diagnosis of IE required confirmation of infection based on specimens obtained at the
time of valve replacement surgery. With the advent of
echocardiography, a definite diagnosis can now be made
based on a constellation of clinical, microbiologic and
echocardiographic findings. The modified Duke criteria
are now widely accepted for the diagnosis of IE [4].
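The modified Duke criteria combine major criteria (typical microbiology and echocardiographic evidence of endocardial involvement) with minor criteria to yield a categorical diagnosis. The commonly quoted counting rule is sketched below; the function name is ours, and reference [4] remains the authoritative source for the individual criteria.

```python
def duke_category(major: int, minor: int) -> str:
    """Classify suspected IE from counts of modified Duke major and minor
    criteria (illustrative sketch of the commonly quoted thresholds)."""
    if major >= 2 or (major >= 1 and minor >= 3) or minor >= 5:
        return "definite IE (clinical criteria)"
    if (major >= 1 and minor >= 1) or minor >= 3:
        return "possible IE"
    return "criteria for IE not met"


print(duke_category(major=1, minor=3))  # -> definite IE (clinical criteria)
```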
Pacemakers and Implantable Cardioverter-Defibrillators
As the number of accepted indications for the use of permanent pacemakers and implantable cardioverter-defibrillators
has increased, cardiac device–related infections (CDIs) have
become more common. CDIs may be confined to the
generator pocket, or may include wire infections complicated by endocarditis. The majority of patients with CDIs
present with localized findings of infection at the site of the
generator pocket; however, the absence of such findings
does not exclude the device as a potential source of sepsis.
TTE lacks sufficient sensitivity to evaluate for device-related
infection. S. aureus is implicated most often; infections
due to CoNS, enterococci, Gram-negatives, and candida
also occur.
Successful treatment of CDIs in association with
positive blood cultures requires complete removal of
the device, especially in patients with IE. The mortality
rate for device-related IE is as high as 66% without
device removal, but is as low as 18% with complete
removal and appropriate antimicrobial therapy. Emergent device removal is particularly important in the
management of patients with severe sepsis. Baddour
and colleagues devised an algorithm for the management
of these infections that is a useful guide [5]. The device
may not be safely re-implanted until the generator pocket
has been adequately debrided and the blood cultures are
negative.
Prognosis and After-care
In-hospital mortality for IE is 15–20%; the 1-year mortality may be as high as 40%. Risk factors for in-hospital
death include increasing age, CHF, infection due to
S. aureus or CoNS, the presence of mitral valve vegetations, paravalvular complications, surgery indicated but
not performed, and PVE. Surgical treatment for IE and
infection due to viridans streptococci is associated with a decreased
risk of in-hospital mortality. For right-sided IE, there
is tremendous disparity in the risk of mortality based
on the risk factor for acquisition. Overall mortality is
very low in IDUs, but much higher in patients with
right-sided IE due to intravascular devices. IE that is
health care associated is an independent predictor of
both in-hospital and 1-year mortality from the infection, as is IE due to S. aureus. In addition, there is a significant
difference between the risk of mortality in right-sided vs.
left-sided IE in IDUs. A TEE should always be performed,
even in patients with clear evidence of right-sided infection (such as septic pulmonary emboli) because concomitant infection of the left-sided valves may also be
present.
Because a prior episode of IE is one of the strongest
risk factors for subsequent episodes of IE, these patients
must receive prophylactic antibiotics as recommended in
the AHA guidelines for the prevention of IE.
References
1. Baddour LM, Wilson WR, Bayer AS et al (2005) Infective endocarditis: diagnosis, antimicrobial therapy and management of complications: a statement for healthcare professionals from the Committee on Rheumatic Fever, Endocarditis and Kawasaki Disease, Council on Cardiovascular Disease in the Young, and the Councils on Clinical Cardiology, Stroke, and Cardiovascular Surgery and Anesthesia, American Heart Association: endorsed by the Infectious Diseases Society of America. Circulation 111:394–434
2. Fowler VG, Miro JM, Hoen B et al (2005) Staphylococcus aureus endocarditis: a consequence of medical progress. J Am Med Assoc 293:3012–3021
3. Rybak M, Lomaestro B, Rotschafer JC et al (2009) Therapeutic monitoring of vancomycin in adult patients: a consensus review of the American Society of Health-System Pharmacists, the Infectious Diseases Society of America and the Society of Infectious Diseases Pharmacists. Am J Health Syst Pharm 66:82–98
4. Li JS, Sexton DJ, Mick N et al (2000) Proposed modifications to the Duke criteria for the diagnosis of infective endocarditis. Clin Infect Dis 30:633–638
5. Sohail MR, Uslan DZ, Khan AH et al (2007) Management and outcome of permanent pacemaker and implantable cardioverter-defibrillator infections. J Am Coll Cardiol 49:1851–1859
Cardiac Contractility
LARA WIJAYASIRI1, ANDREW RHODES2, MAURIZIO CECCONI3
1 Department of Anaesthesia, St. George's Hospital, London, UK
2 Department of Intensive Care, St. George's Hospital, London, UK
3 Department of General Intensive Care, St. George's Hospital, London, UK
Synonyms
Inotropy
Definition
Cardiac contractility can be defined as the tension developed and velocity of shortening (i.e., the “strength” of
contraction) of myocardial fibers at a given preload and
afterload. It represents a unique and intrinsic ability of
cardiac muscle to generate a force that is independent of
any load or stretch applied.
Characteristics
Factors increasing cardiac contractility – positive inotropic effect [1]:
● Sympathetic nervous system activation
● Circulating endogenous catecholamines
● Drugs – inotropic agents, digoxin, calcium ions (Ca2+)
● Metabolic – hyperthermia, hypercalcaemia
● Heart rate – as heart rate increases (e.g., during exercise), contractility increases (this occurs up to a certain
point beyond which the tachycardia impairs normal
cardiac function). This phenomenon is known as the
Treppe or Bowditch effect. It is thought to be mediated
by an increase in cytoplasmic Ca2+ due to reduced
reuptake by the sarcoplasmic reticulum secondary to
a reduction in the diastolic time.
Factors reducing cardiac contractility – negative
inotropic effect [1]:
● Parasympathetic nervous system activation (e.g., vagal
maneuvers)
● Drugs – β-adrenoceptor antagonists
● Metabolic – hypothermia, hypoxia, hypercapnia,
hyperkalemia, hypocalcemia
● Pathological states – diastolic and systolic dysfunctions
Assessment
It is very difficult to clinically assess cardiac contractility
in vivo. One method involves measuring the rate of change
of ventricular pressure with respect to time (dP/dt) and
then using the maximum rate of pressure rise (peak dP/dt)
to compare contractility of the heart.
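As a numerical illustration of this index, peak dP/dt can be estimated from a sampled ventricular pressure trace by differentiating the signal. The sketch below uses a synthetic pressure waveform (the sampling rate and waveform shape are arbitrary assumptions, not patient data) purely to show the calculation.

```python
import numpy as np

# Synthetic left ventricular pressure trace (illustration only), sampled at 1 kHz.
fs = 1000.0                           # sampling frequency, Hz
t = np.arange(0.0, 0.8, 1.0 / fs)     # one 0.8-s beat
pressure = 10 + 110 * np.exp(-((t - 0.25) / 0.08) ** 2)   # mmHg, crude systolic wave

# Differentiate the pressure signal; the maximum rate of rise (peak dP/dt)
# is the contractility index described in the text.
dp_dt = np.gradient(pressure, 1.0 / fs)   # mmHg/s
peak_dp_dt = dp_dt.max()

print(f"peak dP/dt ≈ {peak_dp_dt:.0f} mmHg/s")
```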
Another method involves the use of serial pressure–volume (PV) loops to obtain end-systolic pressure–volume relationship (ESPVR) curves (Fig. 1). These methods are quite invasive and are not practical clinically [2].
Direct, real-time visualization of myocardial wall
motion and blood ejection patterns using echocardiography and Doppler allow an assessment of the functional
status of the heart to be made. With echocardiography,
two useful parameters are ejection fraction and shortening
fraction. The left ventricular (LV) ejection fraction (normal range between 55% and 75%) is defined as (LV end-diastolic volume – LV end-systolic volume)/LV end-diastolic volume. The shortening fraction measures the change in diameter of the LV between its contracted and relaxed states: (LV end-diastolic diameter – LV end-systolic diameter)/LV end-diastolic diameter. These measurements can give an
idea of heart performance, but they cannot provide
objective assessments of myocardial contractility.
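Written compactly, with EDV/ESV denoting LV end-diastolic and end-systolic volumes and EDD/ESD the corresponding diameters, the two indices above are:

\[
\mathrm{EF} = \frac{\mathrm{EDV}-\mathrm{ESV}}{\mathrm{EDV}}\times 100\%,
\qquad
\mathrm{FS} = \frac{\mathrm{EDD}-\mathrm{ESD}}{\mathrm{EDD}}\times 100\%
\]

For example, with an EDV of 120 ml and an ESV of 50 ml (illustrative values, not from the text), EF = (120 − 50)/120 ≈ 58%, which lies within the quoted normal range.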
The concept of contractility can be illustrated using
force–velocity curves (where the term “force” represents
the afterload to the heart and “velocity” refers to the speed
of myocardial muscle shortening) (Figs. 2–4) [3]. A heart with good contractility responds to volume loading differently than a heart with impaired contractility (Fig. 5).
Cardiac Contractility. Figure 1 Pressure–volume loop for a normal left ventricle. EDP is the end-diastolic point (when the mitral valve closes), ESP is the end-systolic point (when the aortic valve closes), ESV is end-systolic volume, and EDV is end-diastolic volume. Increasing contractility moves the ESP up and to the left, while decreasing contractility moves it down and to the right

Cardiac Contractility. Figure 2 Force–velocity curve for an isolated myocardial fiber: as the force (afterload) reduces, the velocity of muscle contraction increases until a maximal velocity (Vmax) is achieved at zero afterload (in reality, Vmax cannot be obtained experimentally because the myocardium does not contract in the absence of any load, and therefore this value is obtained by extrapolation)

Cardiac Contractility. Figure 3 Effects of increasing preload on the force–velocity curve for an isolated myocardial fiber. As preload gradually increases, the isometric tension within the myocardial fiber increases as dictated by the Frank–Starling mechanism of the heart (length–tension relationship). However, Vmax remains unchanged, demonstrating the fact that it does not depend on the length of the muscle fiber (i.e., preload) from which contraction is initiated

Cardiac Contractility. Figure 4 As contractility increases, the curve is shifted up and to the right with an increase in both Vmax and isometric tension. This increase in Vmax (V'max) is of particular significance as it is a measure of cardiac contractility that is unrelated to changes in preload or afterload

Cardiac Contractility. Figure 5 Modification of the Frank–Starling curve. A heart with normal contractility (N) and a failing heart with poor contractility (F) have different abilities to respond to volume loading and hence increase their stroke volumes by different amounts

Systolic Dysfunction
Systolic dysfunction, often termed ventricular failure, refers to an impairment in ventricular contractility which results in a reduced stroke volume and hence inadequate cardiac output. If the left ventricle is affected, left ventricular end-diastolic pressures gradually rise, which can lead to an increase in left atrial and pulmonary pressures resulting in pulmonary edema. If the right ventricle is affected, right ventricular end-diastolic pressures rise, which can lead to an increase in right atrial pressures and venous congestion resulting in peripheral edema, ascites, and hepatomegaly. Quite often left-sided systolic dysfunction will eventually cause right-sided systolic dysfunction, and this is commonly termed biventricular failure or congestive cardiac failure.
There are numerous causes of systolic dysfunction including coronary artery disease (myocardial ischemia and infarction), valvular heart disease, dilated cardiomyopathy, myocarditis, amyloidosis, drugs (e.g., ethanol excess and cocaine), and toxins (e.g., sepsis).

References
1. Parrillo JE, Dellinger RP (2008) Critical care medicine: principles of diagnosis and management in the adult, 3rd edn. Mosby Elsevier, Philadelphia, pp 39–52
2. Wigfull J, Cohen AT (2005) Critical assessment of haemodynamic data. CEACCP 5(3):84–88
3. Klabunde RE (2005) Cardiovascular physiology concepts, 1st edn. Lippincott Williams and Wilkins, Philadelphia, pp 81–85

Cardiac Disease
▶ Congenital Heart Disease in Children

Cardiac Doppler
▶ Echocardiography
Cardiac Failure in Children
JONATHAN R. EGAN, MARINO S. FESTA
The Children’s Hospital at Westmead, Westmead,
Australia
Definition
Inability of the heart to meet the metabolic needs of the
body as a result of an inability to sustain an effective
cardiac output.
Characteristics
Cardiac failure can occur as a result of a myriad of cardiogenic causes in the setting of congenital cardiac disease,
which can be grouped into four main categories. Cardiac
failure resulting from acquired disease is discussed
subsequently.
Increased Pulmonary Blood Flow
An atrial, ventricular, or large vessel communication (e.g.,
patent ductus arteriosus (PDA)) results in shunting of
blood and a volume load on the systemic left ventricle.
This leads to ventricular failure and pulmonary venous
congestion.
Left Ventricular Outflow Obstruction
In the setting of aortic stenosis or coarctation there can be
early myocardial failure in the neonatal, infant, or childhood age groups – depending on the degree of obstruction.
Valvular Regurgitation
A volume load on the ventricle leads to forward delivery
failure and backward obstruction to venous inflow.
Right Ventricular Failure
Isolated right ventricular failure occurs in the setting of
pulmonary embolism, pulmonary hypertension, or
chronic respiratory failure.
Management
An A, B, C approach to stabilization is required – it is
important to consider the effect of excessive inspired oxygen upon pulmonary vascular resistance, which can exacerbate left-to-right shunting and worsen pulmonary
venous congestion. Providing positive end-expiratory pressure (PEEP) via a bag and mask, with an initial FiO2 of no more than 0.3–0.4, should prove both safe and beneficial. A carefully observed trial of noninvasive ventilation may improve oxygenation and the work of breathing.
Subsequently intubation can be performed if considered
necessary. Intubation of a neonate or child with cardiac
failure can be risky and should be undertaken by senior
trained intensivists/anesthetists. Induction drugs that can lead to pronounced reductions in systemic vascular resistance and myocardial contractility – such as thiopentone or propofol – are best avoided. Ketamine is a good alternative. Apart from optimizing oxygenation and induction
drugs, the hemodynamic status of the child should be
preemptively stabilized with administration of fluid
boluses (5–10 ml/kg of normal saline) and vasopressors
(5 mcg/kg/min dopamine). Vasopressor and inodilator
therapy can be modified following stabilization, full
assessment, and provision of central venous and arterial
lines. Initially reducing the systemic vascular resistance with either dobutamine and milrinone or levosimendan infusions – as systemic perfusion pressure permits – and then transitioning to captopril will optimize myocardial performance.
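To make the weight-based figures concrete, for a hypothetical 10-kg infant (the weight is an illustrative assumption) the doses quoted above work out as:

\[
\text{fluid bolus} = 5\text{–}10\ \mathrm{ml/kg}\times 10\ \mathrm{kg} = 50\text{–}100\ \mathrm{ml}\ \text{normal saline},
\qquad
\text{dopamine} = 5\ \mu\mathrm{g/kg/min}\times 10\ \mathrm{kg} = 50\ \mu\mathrm{g/min}
\]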
It is important to determine the underlying lesion(s) and any contributing factors – such as viral pneumonitis/bronchiolitis – through careful history, examination, echocardiography, and other directed investigations. Management of
fluid balance, energy requirements, and expenditure will
provide a foundation on which to add diuretic, inotropic,
and vasodilator therapy. Surgical repair may be indicated
and the timing of this depends on overall patient stability
and local resources.
Cardiac Magnetic Resonance
Imaging
CHADWICK D. MILLER1, DANIEL W. ENTRIKIN2, W. GREGORY HUNDLEY3
1 Department of Emergency Medicine, Wake Forest University Baptist Medical Center School of Medicine, Winston-Salem, NC, USA
2 Department of Radiology and Internal Medicine, Section on Cardiology, Wake Forest University School of Medicine, Winston-Salem, NC, USA
3 Department of Internal Medicine, Section on Cardiology and Department of Radiology, Wake Forest University School of Medicine, Winston-Salem, NC, USA
Synonyms
Cardiac MR; Cardiac MRI; CMR
Definition
Cardiac magnetic resonance imaging (CMR) is the use of
magnetic resonance (MR) techniques to obtain images of
the heart. As with all MR technology, tissues are subjected
to a strong magnetic field that orients the protons of
hydrogen atoms so that they rotate, or “precess,” in a
uniform manner. These hydrogen atoms are then exposed
to a radio signal, commonly referred to as a pulse
sequence, which transiently changes their orientation.
After the radio signal, the protons revert to their original
precession patterns and create a signal that is captured to
create images. The time to reversion to their orderly precession is dependent on tissue composition and therefore
the resulting signal patterns are specific to the tissue composition. As disease states change tissue composition, the
tissues provide a different signal allowing the determination of normal and disease states.
By utilizing ECG-gating, signals obtained from CMR
can be processed to create both still and motion images of
the heart. Further, imaging can be conducted during rest,
or during cardiac stress. Finally, contrast agents can be
used to exploit subtle differences in cellular and tissue
composition or function.
Pre-existing Condition
Overview
CMR has seen increased use over the past decade. Newer
imaging techniques and scanner technologies have greatly
enhanced the quality and diagnostic accuracy of CMR.
Common indications for CMR are discussed in the section
immediately below. However, in the critically ill patient,
obtaining CMR testing is associated with several logistical
challenges that are discussed in the Application section.
Because of these challenges, other imaging modalities are
often preferred over CMR. Circumstances when CMR
may be strongly considered in the critically ill patient are
further discussed under the heading “Clinical circumstances in which CMR would be useful in critically ill
patients.”
Common Indications and Appropriateness
Criteria
Criteria developed by a multidisciplinary panel provide
guidance to determine when CMR is considered an appropriate diagnostic test [1]. These appropriate indications
are summarized in the paragraphs below.
Evaluation of Acute Chest Pain
CMR in combination with pharmacologic vasodilator
(adenosine, Persantine) perfusion imaging or inotropic
stimulation (dobutamine) wall motion imaging can be
used to detect inducible myocardial ischemia. Multiple
studies have demonstrated that stress CMR has equal or
higher accuracy compared to other stress testing modalities, with sensitivity ranging from 86% to 96% and specificity from 83% to 100% for detecting ≥50% coronary
artery luminal narrowings [2, 3]. Patients unable to exercise or who have ECGs that are not interpretable are
particularly well suited for CMR imaging. CMR to evaluate acute chest pain is inappropriate in low risk patients
with interpretable ECGs and the ability to exercise, as well
as in patients with high pretest probability of CAD combined with positive biomarkers or ST-segment deviation.
All uses of MR angiography to evaluate for CAD as a cause
of acute chest pain are considered inappropriate.
Evaluation of Cardiac Structure and
Function
CMR is well suited to provide information on cardiac
anatomy and function. CMR is particularly useful when
technically limited echo images have been obtained. CMR
can appropriately be used to determine left ventricular
function after acute myocardial infarction (AMI) or in patients
with heart failure, assess for congenital heart disease
including anomalous coronary arteries, and evaluate
native and some prosthetic valves. Furthermore, CMR is
useful and appropriate to evaluate for cardiomyopathies,
myocarditis, pericardial disease, cardiac thrombus,
cardiac masses, and aortic dissection. Although aortic
dissection can be detected by cardiac MRI, the aorta is
extra-cardiac and is not further discussed in this chapter.
Application
Equipment
CMR exams are commonly performed on commercially
manufactured MRI machines with specialized software to
obtain cardiac images. 1.5 Tesla machines are widely used,
with some institutions adopting machines with stronger
magnetic fields. Power injectors are ideal if perfusion
imaging is to be performed. Specialized monitoring
equipment that is MRI compatible is required.
Policies and Procedures
Policies and procedures must be in place for patient
screening and emergency response. Patients must be
screened for MRI compatibility. Pacemakers, defibrillators, and ferrous implants are generally not compatible
with MRI. Emergency response plans should be well delineated in the event the patient’s condition deteriorates
during the exam.
Cardiac Magnetic Resonance Imaging
Components
CMR consists of several components that must be tailored
to the clinical question.
Stress Agents
Stress testing is commonly used to assess acute chest pain.
The “stress” component can be either a vasodilator or an
inotropic agent such as dobutamine. Vasodilators such as
adenosine are used in conjunction with perfusion imaging
techniques to capture stress myocardial perfusion images,
which can then be compared with similar rest myocardial
perfusion images obtained in the absence of the vasodilator and other infarct “delayed enhancement” techniques.
By comparison, dobutamine stress examinations rely on
the identification of a regional left ventricular wall motion
abnormality that occurs when the patient has achieved
target heart rate (target heart rate = [220 − age] × 0.85)
during peak pharmacologic stress. The detection of perfusion defects or wall motion abnormalities during stress
is suggestive of significant coronary stenosis.
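As a worked example of the target heart rate formula (the age is an illustrative assumption):

\[
\text{target heart rate} = (220 - \text{age})\times 0.85
= (220 - 60)\times 0.85 = 136\ \text{beats/min for a 60-year-old patient.}
\]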
Use of Gadolinium: Perfusion Imaging and
Delayed Enhancement
Gadolinium containing contrast agents are used to
enhance the information provided from CMR. The presence of gadolinium contrast agents modifies the signal
emanating from nearby protons. By exploiting differences
in normal and abnormal gadolinium distribution within
the myocardium both perfusion imaging and delayed
enhancement imaging can aid in the diagnosis of underlying disease states. For instance, in the normal heart there
should be no perceptible difference in the perfusion of
gadolinium through the myocardium during rest or stress
perfusion imaging with vasodilators such as adenosine.
The presence of a perfusion defect during adenosine stress
perfusion is strongly suggestive of inducible ischemia, and
the myocardial segments involved are predictive of the
vascular territory affected by a high-grade flow-limiting
stenosis, as demonstrated in Fig. 1.
Delayed enhancement imaging exploits the fact that
various disease states allow abnormal accumulation of gadolinium within myocardial tissue. Because of this, inflammation and cell death from acute myocardial infarction or
acute myocarditis, scarring from old myocardial infarction
or prior myocarditis, and infiltrative processes that result
in myocardial scarring (such as sarcoidosis, amyloidosis,
and hypertrophic cardiomyopathy) can all be identified
with delayed enhancement imaging. In the setting of
acute inflammation as can be seen with myocarditis
or acute infarction it is the leaky basement membranes of
the myocardial microvasculature, expanded extracellular
space related to edema and leaky cell membranes that
allow excessive accumulation of gadolinium within affected
myocardial territories. In the setting of chronic scarring
from prior infarct or prior myocarditis, ventricular
remodeling results in the deposition of a fibro-fatty infiltrate in the area of scarring. The collagenous matrix of this
infiltrate expands the extracellular space and traps gadolinium in these regions of scarring. And finally, with the
various infiltrative cardiomyopathies, there is typically an
abnormal accumulation of proteins and/or disordered
array of myocytes also associated with regions of fibrosis
and scarring that result in accumulation of gadolinium.
In all of these settings, delayed enhancement imaging sequences allow clear recognition and delineation of diseased regions of myocardium when compared with adjacent normal myocardium.

Cardiac Magnetic Resonance Imaging. Figure 1 (a) Short axis image obtained through the mid-ventricle during adenosine stress perfusion. The white arrows demonstrate a large region of decreased signal intensity indicative of an area of decreased perfusion involving the lateral wall segments. (b) Catheter angiogram demonstrating injection of the left main coronary artery. The white arrow demonstrates a critical stenosis of the proximal left circumflex coronary artery, while the white arrowhead demonstrates the limited perfusion of a large obtuse marginal branch from the circumflex. (c) Repeat angiogram following percutaneous transluminal coronary intervention with stent (white asterisks) placement in the proximal obtuse marginal branch. Note restoration of normal flow in the distal vessels (small white arrows)
T2-Weighted Images
T2-weighted image sequences are able to detect myocardial
edema. Myocardial edema is an early marker of myocardial
ischemia or inflammation. The use of T2 weighted images
allows the early detection of myocardial infarction, at
times before chemical evidence is present in the blood. The
combined use of T2-weighted imaging sequences and
delayed enhancement imaging sequences can be used to
discriminate between several types of myocardial injury
including acute inflammation in the setting of myocarditis,
acute injury in the setting of acute myocardial infarction,
and chronic scarring in the setting of remote myocardial
injury. Figure 2 demonstrates features of a severe left
anterior descending (LAD) territory infarction.
Cardiac Magnetic Resonance Imaging. Figure 2 (a) Short axis T1-weighted image of the heart at the level of the mid left ventricle before the administration of IV gadolinium. (b) Similar T1-weighted image shortly after the administration of gadolinium demonstrates increased signal intensity within the anterior (A) and anterolateral (AL) wall segments, representative of early accumulation of gadolinium within the territory of the LAD coronary artery. (c) T2-weighted short axis image at the same level demonstrating increased signal intensity in the same distribution (small white arrows), representative of myocardial edema in the LAD territory. (d) Delayed enhancement image at the same level demonstrating extensive delayed enhancement in the LAD territory (small white arrows), indicative of progressive accumulation of gadolinium in the myocardium related to acute LAD territory infarction. In this particular instance the patient suffered an ST-elevation myocardial infarction (STEMI) secondary to complete occlusion of the LAD resulting in profound ischemia; the black subendocardial regions (white asterisks) within this distribution are representative of regions of complete microvascular occlusion in the LAD territory

Complications
Flying objects – Metallic objects in the room or entering the room will be forcefully attracted to the magnet. This can cause severe injury or death.
Burns – Unrecognized implanted metallic objects can cause tissue heating, neural stimulation, or skin burns.
Implanted device malfunction – Some devices are MRI compatible, such as some ventriculo-peritoneal shunts, but require programming after the scan is completed. Others are not compatible, such as defibrillators, and can cause death if MRI is conducted.
Nephrogenic systemic fibrosis – The use of gadolinium-containing contrast agents has been linked to nephrogenic systemic fibrosis, which can be a progressive, fatal condition. This association resulted in a boxed warning from the FDA against use in patients with acute or chronic renal insufficiency (glomerular filtration rate <30 ml/min), acute renal insufficiency of any severity due to hepatorenal syndrome, or in the perioperative liver transplant period. Because this is a rapidly evolving area, readers are encouraged to consult the most recent guidance on the risk of nephrogenic systemic fibrosis.
Claustrophobia – MRI is conducted in a closed environment. Therefore it may not be tolerated by patients with severe claustrophobia unless sedation is provided.

Logistical Barriers to CMR Use
Ultrasound or computed tomography is often used for patients in the first 24 h of an acute illness due to their wide availability. Even in some instances where CMR may be the preferred test, CMR is less commonly used due to logistical challenges in performing these procedures in critically ill patients.
The logistical challenges associated with CMR include the need to have a scanner capable of CMR imaging, expertise and support to perform these examinations, and skilled readers to provide high-quality expert interpretation. Critical illness adds complexity for a multitude
of reasons. First, life-sustaining equipment must be nonferrous and MRI compatible. Second, imaging times can
last 30–60 min, which may be impossible in patients with
hemodynamic instability. Third, patients must lie flat
during the exam, which can exacerbate some disease processes. Fourth, critically ill patients commonly have renal
insufficiency. Renal insufficiency increases the risk of
developing nephrogenic systemic fibrosis after administration of gadolinium containing contrast agents that are
commonly used in CMR. Finally, CMR may not be available emergently when it is needed.
Clinical Circumstances in Which CMR Would
Be Useful in Critically Ill Patients
1. Patients with new onset heart failure of uncertain
etiology when echocardiography is unavailable or
nondiagnostic.
CMR will identify myocardial edema, inflammation, wall motion, and ventricular function, and can distinguish acute from chronic myocardial infarction.
This information can provide supporting or refuting
evidence for AMI, myocarditis, cardiomyopathies,
cardiotoxic effects of therapy, restrictive pericardial
disease, and valvular dysfunction.
2. Concern for AMI or ACS in patients with a non-interpretable ECG and nondiagnostic cardiac markers.
Patients with bundle branch blocks or other
conditions preventing an accurate ECG assessment,
or continuous unrelieved symptoms may benefit
from CMR imaging. While bedside echo is often
used to determine ejection fraction and to assess
for regional wall motion abnormalities, CMR may
also serve in a similar capacity. This may be particularly
useful in patients with a complicated revascularization history. CMR can be used to assess for edema,
obtain resting wall motion and perfusion images, and assess delayed enhancement. These latter features help to
characterize tissue and define the etiology of left or
right ventricular wall motion abnormalities. In addition, these imaging strategies have previously been
shown to accurately detect MI and can do so before
elevation of cardiac markers [4]. Early acquisition of
this information may allow early planning of treatment strategies.
3. Clinical history concerning for a cardiac thrombus
or mass.
In patients with suspected intracardiac thrombus,
CMR can accurately depict the presence of an
intracavitary thrombus within the heart, and commonly offers superior visualization of the apical
regions that may improve detection in the setting of
apical thrombus. CMR is also capable of identification
and characterization of mass lesions intrinsic to the
heart, including not only benign lesions but also primary and metastatic neoplasms.
Conclusion
CMR is able to provide a comprehensive evaluation for
cardiac disease. CMR exams can assess structural and functional disease of the heart with high accuracy. However, the logistical challenges associated with obtaining a CMR exam in patients with critical illness limit its use in this population. There are, nonetheless, several
scenarios in which care providers, despite these logistical
challenges, may choose to perform CMR imaging over
other imaging modalities.
References
1. Hendel RC, Patel MR, Kramer CM et al; ACCF/ACR/SCCT/SCMR/ASNC/NASCI/SCAI/SIR (2006) Appropriateness criteria for cardiac computed tomography and cardiac magnetic resonance imaging: a report of the American College of Cardiology Foundation Quality Strategic Directions Committee Appropriateness Criteria Working Group, American College of Radiology, Society of Cardiovascular Computed Tomography, Society for Cardiovascular Magnetic Resonance, American Society of Nuclear Cardiology, North American Society for Cardiac Imaging, Society for Cardiovascular Angiography and Interventions, and Society of Interventional Radiology. J Am Coll Cardiol 48:1475–1497
2. Ingkanisorn WP, Kwong RY, Bohme NS et al (2006) Prognosis of negative adenosine stress magnetic resonance in patients presenting to an emergency department with chest pain. J Am Coll Cardiol 47:1427–1432
3. Nagel E, Lehmkuhl HB, Bocksch W et al (1999) Noninvasive diagnosis of ischemia-induced wall motion abnormalities with the use of high-dose dobutamine stress MRI: comparison with dobutamine stress echocardiography. Circulation 99:763–770
4. Cury RC, Shash K, Nagurney JT et al (2008) Cardiac magnetic resonance with T2-weighted imaging improves detection of patients with acute coronary syndrome in the emergency department. Circulation 118:837–844
Cardiac Markers for Diagnosing
Acute Myocardial Infarction
JAMES MCCORD
Henry Ford Hospital Center, Detroit, MI, USA
Synonyms
Cardiac troponin I; Cardiac troponin T; Creatine kinase-MB; Myoglobin
Definition
Cardiac markers are proteins that are released from myocardial cells during acute myocardial infarction (AMI).
Characteristics
Cardiac Markers
Creatine Kinase-MB
Prior to the use of cardiac troponin I (cTnI) and cardiac
troponin T (cTnT), creatine kinase-MB (CK-MB) was
the most common marker used in the evaluation of individuals for possible acute myocardial infarction (AMI).
Creatine kinase (CK) is a dimer composed of two subunits, M and B. Skeletal muscle is predominantly composed of CK-MM, and CK-BB is found mainly in brain and kidney. CK-MB, which accounts for 20–30% of the CK in cardiac muscle, is
released into the circulation during myocardial injury that
occurs during AMI. Although CK-MB is predominantly
located in the myocardium, 1–3% of the CK in skeletal
tissue is CK-MB; smaller quantities of CK-MB are also
located in other tissues such as intestine, diaphragm,
uterus, and prostate. The use of CK-MB in the diagnosis
of AMI is limited by low specificity in the setting of trauma
or renal insufficiency. A relative index has been used,
which is a function of the amount of CK-MB relative to
total CK. The use of the relative index does improve
specificity but decreases sensitivity in the diagnosis of
AMI. In the setting of AMI, CK-MB becomes elevated in
the circulation 3–6 h after symptom onset, and can remain
elevated for 24–36 h.
Cardiac Troponin I and T
The cardiac troponins (cTn) are proteins that modulate
the interaction between actin and myosin in myocardial
cells. There are isoforms of cTnI and cTnT that are
unique to cardiac tissue, which has allowed specific assays
to be developed that measure only the cardiac forms.
Most of the cTn is bound to the contractile apparatus
in the myocardium but 3% of cTnI and 6% of cTnT exist
free in the cytoplasm. The initial elevation of cTnI and
cTnT is likely due to the free cTn, while the more
prolonged elevation is secondary to the degradation of
cTn bound to the contractile apparatus. The early release
kinetics of cTnI and cTnT are similar, with both becoming elevated 3–6 h after the onset of an AMI. However, cTnI and cTnT may remain elevated for 4–7 and 10–14 days, respectively. There is a single standard assay for cTnT, so reporting of values is consistent. At present there is no such standardization for the different cTnI assays, and cTnI is released in various forms. Different assays detect these forms to varying degrees, leading to up to a 20-fold difference in measurement for
the same cTnI serum concentration. These different cTnI
assays, with different cut-points and measured absolute
values, can lead to clinical confusion when a patient is
transferred from one hospital to another. Also cTnT, as
compared to cTnI, is more commonly elevated in
patients with renal insufficiency.
In the evaluation of patients for possible AMI, cTnI
and cTnT have numerous advantages over CK-MB, and
are the recognized preferred cardiac markers to be used in
evaluating such patients [1]. In addition to having better
specificity, the cTn have higher sensitivity for detecting AMI.
Patients that previously would have been diagnosed with
unstable angina with normal CK-MB values may have
minor myocardial necrosis that can be detected by an
abnormal cTnI or cTnT. With some of the newer more
sensitive assays the number of patients with ACS classified
as AMI will increase further. Multiple studies have consistently shown that elevated cTn is associated with adverse
events: higher mortality, recurrent MI, and need for
urgent revascularization. Even minor cTn elevations are
associated with high-risk angiographic findings: extensive
atherosclerosis, visible thrombus, complex lesions, and
slower coronary flow. Patients with ACS and an elevated
cTn benefit from aggressive pharmacologic therapy and
revascularization.
The recommended cut-point for an elevated cTn is
the 99th percentile of a normal reference population at
a precision level of <10% coefficient of variation [1, 2].
The coefficient of variation is a measure of precision and
defined as the standard deviation/mean when a sample is
run multiple times on the same assay. In the past, cTn assays were not able to meet the precision requirement at low values, so only higher levels were reported as abnormal; newer assays are more precise at low levels, and guidelines recommend reporting these low levels as abnormal. Although cTn elevation is very specific for
myocardial injury it does not indicate the mechanism
of myocardial injury. When cardiac markers have been
measured the diagnosis of AMI requires an elevated
marker (preferably cTn) and at least one of the following:
ischemic electrocardiographic changes, symptoms consistent with myocardial ischemia, or a new wall motion
abnormality with cardiac imaging. Many acute conditions may lead to myocardial stress and damage with
elevated cTn.
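As a simple numeric illustration of the coefficient-of-variation criterion described above, the following Python sketch computes the CV of repeated measurements of a single low-level control sample and checks it against the 10% threshold; the measurement values are hypothetical.

# Hypothetical repeated measurements (ng/ml) of one low-level troponin
# control sample run multiple times on the same assay.
measurements = [0.031, 0.029, 0.033, 0.030, 0.032, 0.028]

mean = sum(measurements) / len(measurements)
# Sample standard deviation (n - 1 in the denominator).
variance = sum((x - mean) ** 2 for x in measurements) / (len(measurements) - 1)
sd = variance ** 0.5

cv_percent = 100 * sd / mean  # coefficient of variation = SD / mean

print(f"mean = {mean:.4f} ng/ml, SD = {sd:.4f}, CV = {cv_percent:.1f}%")
print("meets the <10% CV precision requirement" if cv_percent < 10
      else "does not meet the <10% CV precision requirement")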
Myoglobin
Myoglobin is a protein found in cardiac and skeletal muscle. It is a smaller molecule than CK-MB or cTn and has been used as an early marker in the identification of AMI, as it can be detected 1–2 h after symptom onset. However, the sole use of myoglobin has significant limitations: levels may normalize in patients who present >24 h after symptom onset, and myoglobin has low specificity for AMI in the setting of renal insufficiency or muscle trauma. Considering its low specificity,
and rapid rise and fall, myoglobin has usually been used
in combination with either CK-MB or cTn. An elevated
myoglobin is also associated with a worse prognosis in
both patients with ACS and non-ACS even after adjusting
for cTn elevation. The reason for this association is
unclear and there is no known specific therapy that
should be given to a patient based on an elevated
myoglobin.
Serial Measurement of Cardiac Markers
The measurement of cardiac markers at presentation in
the Emergency Department is not sufficiently sensitive to
exclude AMI, and markers in general need to be measured
serially over time. Guidelines recommend that cardiac
markers, preferably cTn, should be measured over 6–9 h
[1, 2]. Patients who present at least 8 h after their last symptoms need only one cTn measurement. In a study of 383 consecutive patients with nondiagnostic electrocardiograms, no high-risk clinical features, and normal CK-MB values at presentation, CK-MB and cTnT were measured at 0, 4, 8, and 12 h. All patients identified by elevated CK-MB and cTnT at 12 h had elevation of both CK-MB and cTnT at 8 h. Thus, this study suggests that
measurement of cardiac markers beyond 8 h does not
improve sensitivity. Another study of 773 patients who
were evaluated for possible ACS had cTnI measured at
presentation and at least 6 h after symptom onset. In this
study there was one death and one AMI at 30 days in the
602 patients that had all normal cTnI, yielding an adverse
event rate of 0.3%.
Multi-marker Strategies and Dynamic
Change in Markers
Studies have taken advantage of the different release kinetics of various cardiac markers to be used in combination
to more rapidly exclude MI. A marker that rises early
during MI, such as myoglobin, combined with one that
becomes elevated later, CK-MB or cTn, enables AMI to
be identified earlier, and therefore more rapidly excluded.
In a study of 817 patients evaluated in the Emergency Department for possible ACS, CK-MB, cTnI, and myoglobin were measured at 0, 1.5, 3, and 9 h. There were
65 patients diagnosed with AMI. The combined sensitivity
for myoglobin and cTnI at 90 min was 96.9% with a
negative predictive value of 99.6%. The measurement
of CK-MB and sampling at 3 h did not improve sensitivity.
In another study the combined sensitivity of CK-MB and
myoglobin was 100% at 4 h.
A dynamic change in individual cardiac markers, or
a combination of markers, can identify patients with AMI
earlier. In a study of 817 patients, the combination of cTnI, myoglobin, and a change in myoglobin (defined as a >20 ng/ml increase) had a combined sensitivity of 97.3% at 90 min. In a study of over 1,000 patients evaluated in the Emergency Department, the combination of CK-MB, cTnI, and a change in myoglobin (defined as a >25% increase) had 100% sensitivity at 90 min. In addition, a study of 975 patients demonstrated that a change in CK-MB of >0.7 ng/ml over 2 h had a higher sensitivity for MI (93.2%) than a change in myoglobin of >9.4 ng/ml over the same time period (77%). Most institutions employ a simple single-point cut-point strategy as opposed to a strategy based on change in cardiac markers over time, likely because a single cut-point approach is simpler.
Improved Troponin Assays: Sensitivity,
Precision, and Implications for CK-MB/
Myoglobin
Until recently, most cTn assays did not meet the stringent precision recommendation of <10% coefficient of variation at the 99th percentile, as advised in the 2007 consensus document on the Universal Definition of MI [2]. The newer, more sensitive and precise assays have implications for the utility of CK-MB and myoglobin measurement and for the required time period of serial testing.
Many earlier studies used a CK-MB definition of MI and/
or older cTn assays that were less sensitive and precise
than assays presently available, which makes protocols
based on these studies inapplicable for present practice.
There have been some studies that foreshadowed how
cardiac markers will be used in the era of these newer
cTn assays. A retrospective study of stored specimens from 258 patients, collected in 1996 at presentation and then hourly for 6 h, demonstrated that there was no significant difference between the number of AMIs identified at 3 h compared to 6 h. In a multicenter trial published in 2009, 718 patients had cTn measured by four contemporary assays at 0, 1, 2, 3, and 6 h [3]. The sensitivity for AMI (using a cTn-based definition) with these four assays at presentation
ranged from 85% to 95%. The overall diagnostic utility of
all four assays was very high at 3 h with an area under the
curve of 0.98 as measured by receiver operator characteristic curve analysis, and was not improved by blood
sampling at 6 h. The implication of this study is that
with these newer assays sampling is adequate at 3 h and
measurement at 6 h is not required. With the introduction
of these new cTn assays patients with ACS that were
identified as unstable angina will be reclassified as AMI.
A study using a research assay that is not commercially
available demonstrated that in patients with unstable
angina and normal cTnI values using a contemporary
cTnI assay, 44% had elevated cTnI at presentation and
82% at 8 h [4].
The advantage of myoglobin has been its early release
in the setting of AMI enabling the early identification of
myocardial necrosis. Several single center studies have
shown that the newer more sensitive cTn assays can identify MI as early as myoglobin. This was confirmed in the
Reichlin study where neither myoglobin nor CK-MB measurement improved early diagnostic utility (as measured
by area under the curve) when added to sensitive cTn
measurement. Presently many institutions use CK-MB in
combination with cTnI, although CK-MB does not
improve diagnostic accuracy when added to cTn measurement. Even when cTn is used as the sole marker in evaluating patients with possible AMI, CK-MB may be helpful in
identifying reinfarction in patients that have sustained
a definite AMI and have recurrent symptoms several days
after presentation when cTn values are still elevated and
CK-MB may have normalized. However, newer studies
suggest that following a change in cTn after recurrent
symptoms may be able to replace CK-MB measurement
to identify recurrent MI. The Universal Definition of AMI
recommends a change in cTn values greater than 20% over
6 h after recurrence of symptoms to identify recurrent
AMI. The role of an early change in either myoglobin or
CK-MB needs to be studied further in the context of the
new cTn assays, but recent studies suggest there is no use
for myoglobin or CK-MB (using an absolute cut-point) in
evaluating patients with possible or definite MI.
Although low-level cTn detection by these new assays enables a more rapid detection of myocardial necrosis, and therefore a more rapid exclusion of AMI, these low-level elevations have lower specificity for AMI. Elevation of cTn is specific for myocardial necrosis but does not determine the mechanism of injury. Conditions well known to be associated with cTn elevations using the older assays (such as pulmonary embolism, sepsis, heart failure, hypertensive crisis, and many others) will show an even higher frequency of cTn elevation with the new assays. Ambulatory, asymptomatic patients with a history of chronic kidney disease,
heart failure, left ventricular hypertrophy, or diabetes
more commonly have cTnT elevation [5]. In this era of new ultrasensitive cTn assays, historical features, electrocardiographic changes, and cardiac imaging studies will
be even more important in determining which patients
have suffered an AMI.
References
1. Morrow DA et al (2007) National Academy of Clinical Biochemistry laboratory medicine practice guidelines: clinical characteristics and utilization of biochemical markers in acute coronary syndromes. Circulation 115(13):e356–e375
2. Thygesen K et al (2007) Universal definition of myocardial infarction. Circulation 116(22):2634–2653
3. Reichlin T et al (2009) Early diagnosis of myocardial infarction with sensitive cardiac troponin assays. N Engl J Med 361(9):858–867
4. Wilson SR et al (2009) Detection of myocardial injury in patients with unstable angina using a novel nanoparticle cardiac troponin I assay: observations from the PROTECT-TIMI 30 Trial. Am Heart J 158(3):386–391
5. Wallace TW et al (2006) Prevalence and determinants of troponin T elevation in the general population. Circulation 113(16):1958–1965
Cardiac MR
▶ Cardiac Magnetic Resonance Imaging
Cardiac MRI
▶ Cardiac Magnetic Resonance Imaging
Cardiac Output (CO)
▶ Cardiac Output, Measurements
Cardiac Output Monitor
▶ Esophageal Doppler
Cardiac Output Monitoring
▶ Cardiac Output, Measurements
Cardiac Output, Measurements
GIORGIO DELLA ROCCA, MARIA GABRIELLA COSTA
Department of Anesthesia and Intensive Care Medicine,
Medical School of the University of Udine, University of
Udine, Udine, Italy
Synonyms
Arterial Pulse Cardiac Output (APCO); Cardiac Output
(CO); Cardiac output monitoring; Continuous Cardiac
Output (CCO); Pulse Contour Cardiac Output (PCCO)
Definition
The function of the heart is to transport blood to the cells of the body, delivering oxygen, nutrients, and chemicals and removing cellular wastes in order to ensure their survival
and proper function. In certain tissues, the perfusion of
blood can have additional important functions. In the
kidneys, sufficient blood flow is required for maintaining
proper excretory function; in the gastrointestinal tract, it is
important for glandular secretion and for nutrient absorption; and in the skin, changes in blood flow play a crucial
role in the control of body temperature. Thus, each tissue
has a certain requirement for blood flow and the cardiac
output (CO) must keep in step with these needs. In
human physiology, CO represents the volume of blood
expelled by the ventricles per minute. It is calculated as the
product of stroke volume (SV) and the heart rate (HR),
expressed as liters of blood per minute (CO = SV ∗ HR).
In the healthy human adult, resting cardiac output is
estimated to be slightly greater than 5 L/min. It may increase with anxiety and rise as much as fivefold with exercise.
Pre-existing Condition
The stroke volume of the left ventricle is ultimately determined by the interaction between its preload, the contractile state of the myocardium, and the afterload faced by the
ventricle. Unfortunately, there is no simple measure of the
“contractile state” and consequently no single equation
exists that is able to describe the relationship between
these three parameters. The fact that “preload” (or rather
the stretch) on myocardial fibers at the end of diastole has
a significant effect on the subsequent force of contraction
was first recognized by Otto Frank toward the end of the
nineteenth century. This fundamental relationship has
since been analyzed in great detail and the adjustment of
preload by blood volume transfusion or depletion remains
one of the most important therapeutic maneuvers in acute
cardiovascular medicine. In practice, the adjustment of cardiac preload can be achieved via various approaches: circulating blood volume can be increased by the administration of fluid or reduced by the use of diuretics and/or fluid restriction; venous return can be varied by the adoption of a head-down or head-up posture; and venous capacitance can be altered through the use of vasoconstrictor or vasodilator therapy.
In its strictest sense, the term “contractility” refers to
the inotropic state of the myocardium – that is, the force
and velocity with which the myocardial fibers contract.
This can be easily measured in an isolated muscle preparation under specified loading conditions, but it is notoriously difficult to measure in humans. In clinical practice,
various contraction-phase indices are used, such as the
velocity of fiber shortening, the peak rate of rise in ventricular pressure and the end-systolic pressure-to-volume
ratio, but they are all affected to a greater or lesser degree
by loading conditions.
The “chronotropic” or “rate” state of the intact heart
should also be incorporated into any clinical definition of
“contractility” because variations in the pulse rate can
have obvious and important effects upon CO, and manipulation of the pulse rate through the use of positive or
negative chronotropes can be an important therapeutic
maneuver in sick patients. It is not possible to make any
precise measurements of contractility with a pulmonary
artery catheter (PAC), although it is possible to make
reasonable inferences about the contractile state through
the use of ventricular function curves. This concept was
developed by Barash and colleagues and they have
described the use of a “Hemodynamic Tracking System”
which defines the relationship between left ventricular
stroke work index (LVSWI) and pulmonary artery occlusion pressures (PAOP) in patients with normal, slightly
depressed, or severely depressed ventricular function.
Adjustment of both the inotropic and chronotropic state
of the heart through the use of inotropic drugs is commonly practiced in critical care medicine.
In physiological terms, afterload can be defined as “the
sum of all forces which oppose ventricular muscle shortening during systole” – although in a clinical sense it is
probably more useful to consider systemic vascular resistance as a more appropriate definition. In isolated cardiac
muscle, an inverse relationship exists between afterload
and the initial velocity of muscle shortening. This would
suggest a potential dependence of CO on afterload. Yet, in the
intact human, the output of the normal heart is relatively
unaffected by changes in vascular resistance until the point
when afterload becomes quite extreme. This is probably
because an increase in afterload leads to an almost immediate, secondary increase in preload by the “damming up”
of the blood within the left ventricle. In turn, this increases
end-diastolic volume and enhances contractility
according to the Frank-Starling mechanism. In contrast, if myocardial function is severely depressed, CO may become crucially afterload-dependent.
Thus, "sick" hearts can be considered relatively preload independent and afterload dependent, while the reverse is true for "healthy" hearts. As a result, "afterload reduction" (reduction of systemic vascular resistance by the use of appropriate vasoactive drugs) is of the greatest benefit in those whose myocardial function is
most depressed.
The role played by blood viscosity and, indirectly,
hemoglobin concentration in determining systemic vascular resistance (SVR) is often overlooked. Although
hemodilution is not commonly used as a therapeutic
maneuver for reducing afterload, inadvertent hemodilution is often a concomitant of serious illness. Hematocrit
and fibrinogen are the most important determinants
of blood viscosity and therefore make a significant
contribution towards vascular resistance. As blood is
a non-Newtonian fluid, no simple expression relating
SVR to hematocrit and fibrinogen levels exists; however, it
is easy to demonstrate the completely passive increase in
venous return and CO that occurs during hemodilution.
Finally, it should not be forgotten that the degree of
ventricular interdependence can also influence ventricular
performance. The position of the interventricular septum
(IVS) can alter the compliance of each ventricle under
altered loading conditions with secondary effects on contractility. This effect is not usually important, but it can
become so in conditions such as tension pneumothorax,
cardiac tamponade, right ventricular infarction, and during mechanical ventilation in critically ill patients.
The measurement of cardiac output, as first described
by Fick in 1870 (although only put into practice in
1959), also makes an evaluation of respiratory exchange
possible: that is, a measure of the delivery of oxygen to
the tissues.
The Fick principle involves calculating the oxygen
consumed over a given period of time by measuring the
concentration of oxygen in venous blood and in arterial
blood. Cardiac output can be calculated from the following measurements: VO2 consumption per minute, using
a spirometer (with the subject rebreathing the same air)
and a CO2 absorber; the concentration of oxygen in blood
taken from the pulmonary artery (representing mixed
venous blood); the concentration of oxygen in blood
taken from a cannula in a peripheral artery (representing
arterial blood).
We know that:
VO2 = (CO × Ca) − (CO × Cv)
where Ca is the concentration of oxygen in arterial blood
and Cv is the concentration of oxygen in venous blood.
Thus, rearranging the above, it is also possible to
calculate cardiac output:
CO = (VO2 / [Ca − Cv]) × 100
Whilst it is considered to be the most accurate method for the measurement of CO, the Fick method is invasive, requires time for analyzing the blood samples, and making accurate oxygen consumption measurements is difficult. In contrast, the calculation of the arterial and venous blood oxygen concentrations is a straightforward process.
Almost all oxygen in the blood is bound to hemoglobin
molecules in the red blood cells. Measuring the content of
hemoglobin in the blood and the percentage of saturation
of hemoglobin (and therefore the oxygen saturation of the
blood) is a simple process that is readily available to
physicians. Using the fact that each gram of hemoglobin
can carry 1.36 mL of O2, the concentration of oxygen in
the blood (either arterial or venous) can be estimated
using the following formula:
CaO2 = (Hb g/dL) × 1.36 × SaO2/100 + (0.0032 × PaO2 torr)
CvO2 = (Hb g/dL) × 1.36 × SvO2/100 + (0.0032 × PvO2 torr)
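To make the arithmetic concrete, the following Python sketch estimates arterial and mixed venous oxygen content with the formulas above and then applies the Fick equation; all patient values are hypothetical, and the factor of 100 appears because oxygen contents are expressed per deciliter (100 mL) of blood.

def o2_content(hb_g_dl, sat_percent, po2_torr):
    """Oxygen content (mL O2 per dL blood): hemoglobin-bound plus dissolved O2."""
    return hb_g_dl * 1.36 * sat_percent / 100 + 0.0032 * po2_torr

# Hypothetical values for a resting adult.
vo2 = 250.0  # oxygen consumption, mL O2/min (from spirometry)
cao2 = o2_content(hb_g_dl=14.0, sat_percent=98.0, po2_torr=95.0)  # arterial blood
cvo2 = o2_content(hb_g_dl=14.0, sat_percent=73.0, po2_torr=40.0)  # mixed venous blood

# Fick equation: CO (mL/min) = VO2 / (CaO2 - CvO2) * 100,
# where the factor 100 accounts for contents being expressed per 100 mL of blood.
co_ml_min = vo2 / (cao2 - cvo2) * 100
print(f"CaO2 = {cao2:.1f} mL/dL, CvO2 = {cvo2:.1f} mL/dL")
print(f"Fick cardiac output = {co_ml_min / 1000:.2f} L/min")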
The Fick method is considered to be the “gold standard”
for measuring cardiac output, but it is not useful in clinical
practice as a bedside technique. In current clinical practice, dilution technology is more commonly used.
The dilution technique method was initially described
using an indicator dye and assumes that the rate at which
the indicator is diluted reflects the CO. The method measures the concentration of a dye at different points in the
circulation. The dye is usually administered via an intravenous injection and the blood subsequently sampled at
a downstream site, typically in a systemic artery. The dye
dilution cardiac output measurement is based on the
Stewart–Hamilton equation; more specifically, the CO is
equal to the quantity of indicator dye injected divided by
the area under the dilution curve measured downstream.
The indicator method has been further developed
with the indicator dye being replaced with cooled
fluid and the change in temperature being measured at
different sampling sites; this method is known as
thermodilution (TD).
The pulmonary artery catheter (PAC) was the first
clinical device enabling the bedside measurement of cardiac output using the thermodilution technique and since
its introduction in 1970 by Swan, Ganz and colleagues, it
has been considered as a “clinical standard” for cardiac
output assessment despite there being no true reference
technique for the clinical determination of CO. The
thermodilution method involves the injection of a small amount (10 mL) of cold saline at a known temperature through the proximal port of the catheter; the resulting change in blood temperature is measured by a thermistor at the catheter tip in the pulmonary artery. The calculation of CO
is again based on the Stewart–Hamilton equation:
CO = (V × (Tb − Ti) × K1 × K2) / ∫ΔTb(t) dt
where CO = cardiac output, V = volume of injectate, Tb = blood temperature, Ti = injectate temperature, K1 = catheter constant, K2 = apparatus constant, and ∫ΔTb(t) dt = the area under the curve of blood temperature change over time.
The Stewart-Hamilton equation should theoretically
be used under conditions of constant flow.
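Under the constant-flow assumption just noted, the calculation amounts to dividing the injectate "cold dose" by the area under the blood temperature change curve. The Python sketch below illustrates this with a synthetic wash-out curve; the temperature samples and the combined catheter/apparatus constant are hypothetical, not values taken from any specific device.

# Minimal thermodilution sketch (Stewart-Hamilton), constant-flow assumption.
delta_tb = [0.00, 0.05, 0.18, 0.30, 0.34, 0.30, 0.22,
            0.15, 0.09, 0.05, 0.02, 0.00]   # blood temperature drop, deg C, 1-s samples
dt_min = 1.0 / 60.0                          # sample spacing in minutes

v_inj = 10.0         # injectate volume, mL
tb, ti = 37.0, 20.0  # blood and injectate temperatures, deg C
k_total = 0.825      # hypothetical combined catheter + apparatus constant (K1 * K2)

# Area under the temperature-change curve (trapezoidal rule), deg C * min.
area = sum((a + b) / 2.0 * dt_min for a, b in zip(delta_tb, delta_tb[1:]))

co_ml_min = v_inj * (tb - ti) * k_total / area
print(f"area under curve = {area:.4f} degC*min, CO = {co_ml_min / 1000:.2f} L/min")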
Usually, the measurements are repeated three to five
times and then averaged to improve accuracy. Under optimal conditions, the coefficient of variation for repeated
bolus TD measurements is less than 10%. There are many
sources of inaccuracy in the method: the cardiac output
derived from PAC (COpa) is influenced by significant
variations in respiration, and hence by the phase of
the mechanical breath during which the injection is
made. Mechanical ventilation was also shown to cause
a high incidence of significant tricuspid insufficiency and
mild to severe vena caval backward flow, which, like other
valvular regurgitations, may reduce the accuracy of COpa
measurements.
The insertion of a PAC is a procedure associated with
a number of known complications. Catheter insertion can
result in arterial injury, pneumothorax, and arrhythmias.
The catheter can be associated with potentially fatal pulmonary artery hemorrhage, thromboembolism, sepsis,
and endocardial damage.
For 20 years following its introduction into clinical practice, intermittent thermodilution was the only bedside method available for measuring CO.
Since the late 1970s, PAC monitoring of CO has expanded rapidly and broadly in clinical practice across several subgroups of patients, including those undergoing cardiac surgery and those with sepsis and acute respiratory distress syndrome (ARDS).
The appropriate indications necessitating PAC monitoring have been debated for many years. The potential
benefits of using the device are well known. For example,
its use in measuring important hemodynamic indices
(e.g., pulmonary artery occlusion pressure, CO, mixed
venous oxygen saturation) allows for improved accuracy
in the determination of the hemodynamic status of critically ill patients compared to that possible by clinical
assessment alone. The additional information it provides
can also be important when caring for patients with confusing clinical scenarios in whom errors in fluid management and drug therapy can result in severe consequences.
In surgical patients, PAC data often help evaluate hemodynamic changes that may lead to serious perioperative
complications. Preoperative PAC data are claimed to be
helpful in determining whether or not it is safe for highrisk patients to proceed with surgery. Unfortunately, the
impact of PAC monitoring in patients during anesthesia
and intensive care upon clinical outcomes remains
uncertain.
The American Society of Anesthesiologists (ASA)
established the Task Force on Pulmonary Artery Catheterization in 1991 in order to examine the evidence on the
benefits and risks arising from the use of PAC in the
various settings encountered by anesthesiologists. By the time the Society's guidelines had been finalized in 1992 and published in 1993, several groups had issued statements on the appropriate indications and on competency requirements for hemodynamic monitoring. These
groups included the American College of Physicians, the
American College of Cardiology, the American Heart
Association Task Force on Clinical Privileges in Cardiology, a panel established by the Ontario Ministry of Health,
and an expert panel from the European Society of Intensive Care Medicine. In 1996, a milestone study performed
by Connors and colleagues made clinicians reconsider the
invasiveness and utility of PAC. The ASA therefore
reconvened the Task Force on Pulmonary Artery Catheterization in 2000 in order to review its 1993 guidelines,
consider the evidence and the concerns over the use of
PAC that had emerged in the interim and issue an updated
guideline that was subsequently published in 2003 [1].
Due to criticisms of the PAC and research yielding negative judgments on its use, clinicians have started to move to less invasive, less time-consuming, easy-to-use, and continuous techniques.
Another dilution technique is the transpulmonary
indicator dilution technique (TPID). TPID is a less invasive technique developed in the 1980s and the PiCCO
system is the oldest and the most studied less invasive
device based on TPID technology. A central venous catheter for the injection of a thermal indicator is required,
together with an arterial thermistor-tipped catheter normally placed into the femoral artery. The TPID technique
works with 15–20 mL of either cold or room temperature
injectate. Intermittent cardiac output is calculated from an
arterial thermodilution curve in the usual way using
the Stewart-Hamilton equation. Cardiac output by intermittent TPID has been widely validated against the
intermittent TD [2]. Since transpulmonary thermodilution
is less invasive than pulmonary artery thermodilution, the
transpulmonary cardiac output (COart) technique is more
often used, particularly when cardiac output monitoring is
necessary over a long period of time. The TPID method is
not suitable for patients with severe peripheral vascular
disease, those undergoing vascular surgery, or those with other contraindications to femoral artery cannulation.
The LiDCO system is also based on the TPID technique and uses lithium as a tracer. The lithium dilution
technique is performed using 0.3 mL of lithium injected
into either a central or a peripheral vein. The resulting
lithium concentration–time curve is recorded by withdrawing blood (4.5 mL/min) through a special disposable
sensor, attached to the patient’s arterial line, which consists of a lithium-selective electrode in a flow-through cell.
The voltage across the lithium-selective membrane is digitized online and recorded via a computer that converts
the voltage signal into a lithium concentration. The
Stewart-Hamilton equation allows the cardiac output (COLi) to be calculated from the indicator dilution curve.
COLi is calculated according to the equation:
COLi = (LiCl × 60) / (AUC × (1 − PCV))
where LiCl is the dose of lithium chloride
(mmol), AUC is the area under the primary dilution
curve and PCV is the packed cell volume, which can be
calculated when the patient’s hematocrit is known. The
lithium dilution technique is of sufficient accuracy when
there is constant blood flow, homogeneous mixing of the
blood, and when there is no loss of indicator between the
site of injection and the detection site [3].
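The following Python sketch illustrates the lithium-dilution arithmetic with hypothetical numbers for the injected dose, the plasma lithium curve, and the packed cell volume.

# Lithium-dilution CO sketch; all numbers are hypothetical.
dose_licl_mmol = 0.15   # injected lithium chloride dose, mmol
pcv = 0.35              # packed cell volume (from the hematocrit)

# Hypothetical plasma lithium concentration curve sampled at 1 Hz (mmol/L),
# assumed already corrected for baseline and recirculation.
li_curve = [0.00, 0.05, 0.20, 0.42, 0.55, 0.50, 0.38, 0.25, 0.14, 0.07, 0.03, 0.00]
dt_s = 1.0
auc = sum((a + b) / 2.0 * dt_s for a, b in zip(li_curve, li_curve[1:]))  # mM * s

co_l_min = dose_licl_mmol * 60.0 / (auc * (1.0 - pcv))
print(f"AUC = {auc:.2f} mM*s, CO = {co_l_min:.2f} L/min")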
This technique cannot be performed in patients
receiving lithium therapy. It is also difficult to use in the
operating theatre, where the use of muscle relaxants
containing quaternary ammonium ions can interfere
with the lithium sensor and therefore the TPID should
be performed with adequate time before or after muscle
relaxant administration.
An advantage of the lithium indicator dilution cardiac
output technique is that no central venous line is
necessary. This is because the indicator bolus can also be
applied via a peripheral line, although, to the best of our
knowledge, only one clinical study has been performed
using a peripheral venous line for lithium injection.
Continuous cardiac output measurement was introduced more recently in order to provide a continuous or semicontinuous evaluation of CO.
Continuous CO measurements can be obtained using
a modified PAC with an embedded heating filament
(Edwards Lifesciences, Irvine, California, USA), which
releases small thermal pulses every 30–60 s following
a pseudorandom binary sequence. The resulting changes
in pulmonary artery temperature are measured via a distal
thermistor and matched with the input signal. Cross correlation of input and output signals allows for CO values
to be calculated with time from the resulting TD wash-out
curve. Every 60 s, a trended continuous CO (CCO) measurement is displayed, which reflects the average course of
the CO over the previous 3–6 min. As relatively small
quantities of heat are used to calculate CO, sudden
changes in temperature or infusion of high quantities of
cold infusate can influence the accuracy and reliability of
the method. Hyperthermia does not influence the accuracy of CCO monitoring, although a relative increase in
bias is reported for measurements taken immediately after
a hypothermic cardiopulmonary bypass (CPB) (e.g., with the Opti-Q, Abbott, Abbott Park, IL, and Vigilance, Edwards Lifesciences, Irvine, CA, catheters).
Pulse contour (or wave) analysis is based upon the
principle that vascular flow can be predicted by means of
the arterial pressure wave form that is itself a result of an
interaction between stroke volume and the systemic vascular system. Thus, resistance, compliance, and characteristic impedance at the site of signal detection have to be
considered. Different models have been used to address
these issues in the various pulse wave analysis devices
currently available (PiCCO plus, Pulsion Medical Systems,
Munich, Germany; PulseCO, LiDCO Ltd, London, UK;
FloTrac/Vigileo, Edwards Lifesciences, Irvine, CA; MostCare (pressure recording analytical method, PRAM), Vytech Health, Padova, Italy).
Pulse contour analysis initially used an algorithm
based on the Wesseling algorithm. Over recent years, this
algorithm has evolved in a number of steps into what
is today integrated into the PiCCO monitor. For the
calculation of continuous cardiac output (PCCO) the
system uses a calibration factor (cal) determined by
thermodilution cardiac output measurement and heart
rate (HR), as well as the integrated values for the area
under the systolic part of the pressure curve (P(t)/SVR),
the aortic compliance (C(p)) and the shape of the pressure
curve, represented by the change of pressure over time
(dP/dt). This algorithm is described as follows:
PCCO = cal × HR × ∫systole [P(t)/SVR + C(p) × dP/dt] dt
This algorithm uses the TPID technique to convert the
PCCO derived from the algorithm into a more accurate
“calibrated” value. The calibrated algorithm is then able to
track stroke volume in a continuous manner.
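The Python sketch below gives a schematic, heavily simplified illustration of the published pulse contour relationship; the synthetic systolic pressure segment, the resistance and compliance terms, and the calibration factor are all hypothetical placeholders rather than values used by the PiCCO algorithm itself.

import math

# Schematic illustration of PCCO = cal * HR * integral over systole of
# [P(t)/SVR + C(p) * dP/dt] dt, using toy numbers throughout.
fs = 100                                                      # sampling rate, Hz
n = 30                                                        # 300-ms systolic window
p = [80 + 40 * math.sin(math.pi * i / n) for i in range(n)]   # toy systolic pressure, mmHg

svr = 1.0      # resistance term (arbitrary units, absorbed into the calibration)
cp = 0.05      # hypothetical compliance term C(p)
cal = 0.0021   # hypothetical calibration factor from transpulmonary thermodilution
hr = 75.0      # heart rate, beats/min

dt = 1.0 / fs
integral = 0.0
for i in range(1, n):
    dpdt = (p[i] - p[i - 1]) / dt                # local dP/dt
    integral += (p[i] / svr + cp * dpdt) * dt    # accumulate over systole

pcco_l_min = cal * hr * integral                 # units depend entirely on cal
print(f"pulse-contour CO ~ {pcco_l_min:.1f} L/min")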
Continuous cardiac output measured using the
PiCCO monitor has been studied and compared to the
TD from the PAC in different clinical fields, and these
comparisons confirm the PCCO system as being accurate
and precise [2]. However, it has also been shown to have
some limitations, particularly during periods of hemodynamic instability.
The pulse power analysis obtained from the LiDCO
System (PulseCO) is different from the classic pulse contour analysis. It is based on the hypothesis that a change in
the power in the vascular system (i.e., the arterial tree)
during systole is due to the difference between the amount
of blood entering the system (stroke volume) and the
amount of blood flowing out peripherally. It is based on
the principle of conservation of mass/power and the
assumption that following the correction for compliance
and calibration there is a linear relationship between net
power and net flow. This algorithm takes the entire beat
into account, thus tackling the problem of the reflected
waves, and uses a so-called autocorrelation to define
which part of the “change in power” is determined by
the stroke volume. Autocorrelation is a mathematical
function used to analyze signals that tend to be formed
of repeated cycles across time (similar to a Fourier transformation), as is clearly the case for SV in human physiology. In this way, all the curve is analyzed and SV
continuously recorded. When SV is established, the CO
can be easily calculated by multiplying SV by HR.
Initially, the algorithm transforms the arterial pressure
waveform into a standardized volume waveform (in arbitrary units) using the formula:
ΔV/ΔP = calibration × 250 × e^(−k × P)
where V = volume, P = blood pressure, k = curve
coefficient.
The number 250 represents the saturation value in mL,
that is, the maximum additional value above the starting
volume, at atmospheric pressure, that the aorta/arterial
tree can fill to. Autocorrelation uses the volume waveform
and derives the period of the beat plus a net effective beat
power factor, proportional to the nominal stroke volume
ejected into the aorta. This nominal stroke volume is then
calibrated in order to be equalized to a measured SV. Until
the calibration is performed the system behaves as if the
calibration factor is 1. Following calibration, a calibration
factor equal to the ratio between the arbitrary CO and the
measured CO can be derived. In theory, the calibration
factor should be constant in the patient unless significant
hemodynamic changes occur. The lithium dilution technique measures CO that is then used to calibrate the pulse
pressure algorithm: the PulseCO. The continuous cardiac
output of LiDCO has been validated in several studies in
cardiac surgery, in major surgery and in liver transplant
patients. This new algorithm has, so far, proven to be
reliable in surgical and intensive care patients [3].
The Vigileo system represents the newest arterial
pulse wave analysis device (Arterial Pressure Cardiac
Output – APCO). This device does not use a dilution
technique to calibrate the algorithm as it is an uncalibrated
technique. The algorithm gets all the information it needs
to calculate the arterial impedance from the analysis of
the arterial pressure waveform together with the patient’s
demographics (age, sex, height, and weight). The system
can use any arterial line already in situ. However, the signal
needs to be sampled by a specific transducer, the FloTrac.
The FloTrac algorithm analyzes the pressure waveform at 100 times per second over 20 s, capturing 2,000 data points for analysis. According to the manufacturer, the algorithm is primarily based on the standard
deviation of the pulse pressure waveform, as follows:
APCO = f(compliance, resistance) × σP × HR
where σP is the standard deviation of the arterial pressure, HR is the heart rate, and f(compliance, resistance) is a scale factor proportional to vascular compliance and peripheral resistance. This function is also referred to as χ. The calculation of χ in the first version of the software was executed every 10 min, whereas in the second version the software recalculates χ every minute and CO is computed
every 20 s. The standard deviation of the arterial pressure
waveform is computed on a beat-to-beat basis using the
following equation:
σP = √[1/(N − 1) × Σ(k=0 to N−1) (P(k) − Pavg)²]
where P(k) is the kth pressure sample in the current beat, N is the total number of samples, and Pavg is the mean arterial pressure.
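A minimal Python illustration of the standard deviation step is shown below; the pressure samples are a toy beat, and the scale factor standing in for χ is hypothetical, not the proprietary FloTrac function.

import math

# Toy arterial pressure samples (mmHg) over one beat.
p = [78, 82, 95, 110, 118, 120, 116, 108, 100, 94, 90, 87, 84, 82, 80, 79]

pavg = sum(p) / len(p)  # mean arterial pressure of the beat
# sigma_P = sqrt( 1/(N-1) * sum over k of (P(k) - Pavg)^2 )
sigma_p = math.sqrt(sum((x - pavg) ** 2 for x in p) / (len(p) - 1))

hr = 75.0   # beats/min
chi = 5.2   # hypothetical stand-in for f(compliance, resistance), scaled so the
            # toy output lands in a plausible mL/min range

apco_ml_min = chi * sigma_p * hr  # APCO = chi * sigma_P * HR
print(f"sigma_P = {sigma_p:.1f} mmHg, APCO ~ {apco_ml_min / 1000:.2f} L/min")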
Compliance and resistance are derived from the analysis of the arterial waveform. The underlying hypothesis is that
the shape of the arterial pressure wave, in terms of its
degree of kurtosis or skewness, can be used to calculate
the effects of compliance and peripheral resistance upon
blood flow. Additional parameters, such as the pressure-dependent Windkessel compliance Cw, heart rate, and the patient's body surface area (BSA), are also included in
order to take other specific patient characteristics into
account.
Despite the fact that the Vigileo system represents
a revolution in the field of pulse pressure analysis, being
a real “plug and play” tool, an assessment of the performance of the algorithms (two versions of the software have
already been released in less than 3 years) is still underway.
It can already be said, however, that some authors have
found good agreement between the Vigileo system and
intermittent thermodilution, while others have reported
poor limits of agreement [4].
Beat-to-beat values of uncalibrated CO can also be
obtained using the pressure recording analytical method
(PRAM). This new method is based on the mathematical
analysis of the arterial pressure profile changes. It allows
for the continuous assessment of SV from the pressure
signals recorded in the radial and femoral arteries. Based
on the perturbation theory from physics, and applied to
this issue of physiology, all the elements determining CO
can be taken into consideration simultaneously and in
a beat-to-beat manner. The detected pressure curve, sampled at 1,000 Hz, is submitted to mathematical analysis, the result being the calculation of the actual (beat-to-beat) stroke volume; no constant value of impedance or external calibration is used, and neither pre-estimated in vivo nor in vitro data are required. In
contrast to the bolus TD technique, PRAM is less invasive, easier to use and provides continuous data. To date,
PRAM has been used in volunteers, during vascular and
cardiac surgery and in patients with congestive heart
failure but there have been no studies comparing PRAM
with the TD technique under hyperdynamic clinical
conditions.
Non-invasive Techniques
Nowadays, several types of Doppler techniques are commercially available for the estimation of CO by measurement of aortic blood flow (ABF) [5]. An ultrasound beam
directed along the ABF is reflected by the moving red blood cells with a shift in frequency (the Doppler
effect) that is proportional to the blood flow velocity
according to the equation:
Fd = (2 × f0 × V × cos θ) / c
where Fd is the change in frequency (Doppler shift), f0 is the transmitted frequency, V is the blood flow velocity, c is the speed of sound in blood, and θ is the angle between the direction of the ultrasound
beam and the blood flow. CO is estimated by multiplying
the blood flow velocity by the cross-sectional area (CSA)
of the aorta at the insonation point. The esophageal
Doppler probe is introduced either orally or nasally and
placed at the level of the descending aorta. This technique
has some advantages over the classical suprasternal technique, the most important being a more stable positioning
of the probe once the descending aorta is insonated. Three
models of esophageal CO monitoring systems are commercially available and differ from each other in some
important ways. Two systems use a built-in nomogram
to obtain a measurement of the descending aortic diameter (CardioQ, Deltex Medical, Chicester, Sussex, UK;
Medicina TECO, Berkshire, UK), whereas the other system uses M-mode echocardiography for this purpose
(HemoSonic, Arrow International, Reading, PA). By
rotating the esophageal Doppler probe, the best Doppler
image possible can be achieved. ABF is calculated by
multiplying ABF velocity by the CSA of the descending
aorta and the heart rate. The limitations of this technique
are turbulent flow, the fraction of blood flow directed to the upper part of the body that is not measured in the descending aorta, and the angle of insonation of the aorta.
Moreover, the technique is poorly tolerated in awake,
nonintubated patients and cannot be used in patients
with an esophageal disorder. Once a Doppler probe is in
place, transesophageal echocardiography (TEE) cannot be
performed. In summary, esophageal Doppler-derived ABF
is a semi-invasive approach, which enables trend monitoring of CO. The statistical limits of agreement of this technique are wider than those of invasive techniques.
However, in contrast to most other techniques, it has
been demonstrated in subsets of patients that hemodynamic treatment according to Doppler-derived CO measurements leads to a decrease in perioperative morbidity
and length of stay in intensive care units.
Doppler flow measurements obtained with transthoracic echocardiography (TTE) or TEE can also be used
to estimate CO. Their accuracy depends upon image quality, sample site, angle of insonation, the profile of the
blood flow velocity distribution, the signal-to-noise ratio
of the blood flow velocity, and the possibility of measuring
the diameter of the vessel and the shape of the cardiac
valve. Most often, measurements of blood flow velocity
and CSA are performed by both TTE and TEE at the level
of a cardiac valve or the right ventricular (RVOT) or left
ventricular outflow tract (LVOT). The best results are
usually obtained by the transaortic approach using the
triangular shape assumption of aortic valve opening
and CO determination at the LVOT. In summary, Doppler echocardiography is technically demanding, time-consuming, and requires a skilled operator. It is a safe,
fairly reproducible and reasonably accurate method for
measuring CO in selected patients, provided the signal
quality is adequate during recording.
The ultrasonic cardiac output monitor (USCOM Pty
Ltd., Coffs Harbour, NSW, Australia) is a noninvasive
transcutaneous device that provides cardiac output by
continuous-wave. It was introduced for clinical use in
2001 and is based on continuous-wave Doppler ultrasound. The flow profile is obtained by using a transducer
(2.0 or 3.3 MHz) placed on the patient’s chest in either the
left parasternal position to measure transpulmonary
blood flow or the suprasternal position to measure
transaortic blood flow. A standard ultrasound conducting
gel is used. This flow profile is presented as a time–velocity
spectral display that shows variations of the blood flow
velocity against time. Once the optimal flow profile is
obtained, the trace is frozen. The CO is then calculated
from the equation:
CO ¼ HR SV
where the stroke volume is the product of the velocity time
integral (VTI) and the cross-sectional area (CSA) of the
chosen valve. The VTI represents the distance that
a column of blood travels with each stroke and is calculated from the peak velocity detected. In the USCOM
monitor, this is performed using a unique TouchPoint
semiautomated flow profile trace which requires the operator to mark out the flow trace for a chosen stroke of the
heart. This device simultaneously measures the patient’s
heart rate. The CSA of the chosen valve is determined by
applying height-indexed regression equations that are
incorporated into the USCOM device or by using another
imaging method (e.g., two-dimensional echocardiography). The regression equation used to calculate the aortic
valve area is that proposed by Nidorf and colleagues. The
pulmonary valve area is calculated by a separate regression
equation derived from the Nidorf equation.
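The stroke volume arithmetic described above can be illustrated with the following Python sketch; the velocity-time integral, valve diameter, and heart rate are hypothetical values.

import math

# Hypothetical Doppler-derived quantities for a transaortic measurement.
vti_cm = 22.0              # velocity-time integral, cm per beat
aortic_diameter_cm = 2.0   # assumed aortic valve diameter
csa_cm2 = math.pi * (aortic_diameter_cm / 2) ** 2   # cross-sectional area, cm^2
hr = 72.0                  # beats/min, measured simultaneously by the device

sv_ml = vti_cm * csa_cm2          # stroke volume; 1 cm^3 = 1 mL
co_l_min = sv_ml * hr / 1000.0    # CO = HR x SV
print(f"CSA = {csa_cm2:.2f} cm^2, SV = {sv_ml:.1f} mL, CO = {co_l_min:.2f} L/min")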
The NICO system (Novametrix Medical Systems, Wallingford, CT, USA) uses Fick’s principle applied to carbon
dioxide (CO2) for the measurement of CO. For CO2
analysis, a mainstream infrared and airflow sensor is
used. CO2 production is calculated as the product of
CO2 concentration and air flow during a breathing cycle
and arterial CO2 content is derived from end-tidal CO2
and the CO2 dissociation curve. A disposable rebreathing
loop allows an intermittent partial rebreathing state to be
determined in cycles of 3 min. The rebreathing cycle
induces an increase in end-tidal CO2 and a transient drop in measured CO2 elimination. The differences between these
values are then used to calculate CO. Validation studies
with conflicting results have been published over recent
years. Fairly good CO determination was observed as long
as the NICO system was applied to intubated and mechanically ventilated patients with minor lung abnormalities
and fixed ventilatory settings. However, variations in ventilatory modes, mechanically assisted spontaneous breathing or the use of this technique in patients with lung
pathologies (increased shunt fraction) resulted in
a decrease of CO accuracy. Thus, good accuracy can only
be obtained using the partial CO2 rebreathing technique
when applied in a precisely defined clinical setting to
mechanically ventilated patients.
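A highly simplified Python sketch of the differential Fick arithmetic underlying partial CO2 rebreathing is shown below, under the common assumption that the change in arterial CO2 content can be approximated from the change in end-tidal CO2 via the slope of the CO2 dissociation curve; all values, including the slope, are hypothetical, and the commercial algorithm applies additional corrections (for example, for shunt).

# Simplified differential-Fick sketch for partial CO2 rebreathing.
vco2_baseline = 200.0    # CO2 elimination before rebreathing, mL/min
vco2_rebreathe = 170.0   # CO2 elimination during rebreathing, mL/min
etco2_baseline = 38.0    # end-tidal CO2 before rebreathing, mmHg
etco2_rebreathe = 39.3   # end-tidal CO2 during rebreathing, mmHg

slope = 4.7  # assumed slope of the CO2 dissociation curve, mL CO2 per L blood per mmHg

delta_vco2 = vco2_baseline - vco2_rebreathe               # mL/min
delta_caco2 = slope * (etco2_rebreathe - etco2_baseline)  # mL CO2 per L blood

pulmonary_blood_flow = delta_vco2 / delta_caco2  # approximates the non-shunted CO, L/min
print(f"estimated cardiac output ~ {pulmonary_blood_flow:.1f} L/min")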
Bioimpedance cardiography is based on the application of a high-frequency, low-alternating electrical current
to the thorax (thoracic electrical bioimpedance). Changes
in bioimpedance to this current are related to cardiac
events and blood flow in the thorax. Using a mathematical
conversion, changes in bioimpedance can be transformed
into an estimate of stroke volume. Recently, electrical
velocimetry was introduced as a new bioimpedance technique using a new algorithm: the Bernstein–Osypka equation (Aesculon, Osypka Medical, Berlin, Germany).
The accuracy and reliability of the majority of thoracic
bioimpedance devices have been evaluated with conflicting results. It is therefore possible that their use
could lead to inappropriate clinical interventions. Common cylinder and cone-based models for bioimpedance
stroke volume calculation represent oversimplifications of
the complex electrical events that occur inside the thorax
during the cardiac cycle; this is also the case when only the
intrathoracic blood volume is used as a model. Consequently, bioimpedance CO is not currently accepted as
a valid and reproducible method in clinical practice.
Although some results do seem promising, this technique
requires further investigation.
Bioreactance technology (NICOM system, Cheetah
Medical Inc., Portland, OR, USA) is the analysis of the
variations in the frequency of a delivered oscillating current
that occurs when the current traverses the thoracic cavity, as
opposed to traditional bioimpedance that purely relies
upon the analysis of changes in signal amplitude. To our
knowledge, three validation studies have been conducted
that compared bioreactance to intermittent (COpa) and
continuous cardiac output (CCO), obtained from PAC,
and to PCCO and APCO obtained respectively from
PiCCO and Vigileo Systems. In each case bioreactance
was found to give results comparable to those arising
from the other techniques. More recently, bioreactance
was also tested against intermittent and continuous CO
obtained from the PiCCO System in 20 cardiac surgical
patients during the postoperative period. The authors concluded that although occasional discordance may occur
in CO values assessed by transthoracic bioreactance
and pulse contour arterial wave analysis, the level of precision was acceptable.
Applications
CO is nowadays monitored in critically ill patients to
assess cardiac function with the primary aim of
maintaining tissue perfusion (Table 1).
In addition to measurement of CO, modifications of
the original PAC have allowed for continuous measurement of mixed venous oxygen saturation (SvO2), right
ventricular function (RVEF), and right ventricular enddiastolic volume (RVEDV); however, the use of PAC can
also cause complications. Several reports have described
intrinsic morbidity and mortality arising from the use of
PAC; thus its application should be restricted to highly
selected patient populations. The selective use of the PAC
can only be justified in patients with right ventricular
failure and patients with increased pulmonary vascular
resistance requiring vasodilator therapy. The use of the
PAC in low-risk cardiac surgery, vascular surgery and
major abdominal, orthopedic or neurosurgical procedures
should not be recommended. Advocates of the PAC suggest that it is crucially important that physicians and
nursing staff are familiar with the PAC technology, including the procedure of inserting, positioning and
maintaining the PAC. The use of PAC requires training
and education as misinterpretation of data obtained with
this apparatus is common. Finally, due to its invasiveness,
PAC used for the purpose of CO monitoring is no longer
justified.
Calibrated vs. uncalibrated wave analysis. The two
available CO measurement systems, PiCCOplus and
LiDCOplus, require calibration prior to the measurement
of continuous CO based on the assumption that the systolic part of the arterial pressure waveform represents
stroke volume. The PiCCO system requires
transpulmonary thermodilution for the calibration procedure, whereas LiDCO can be calibrated using lithium
dilution. Recalibration is also necessary after profound
changes in arterial compliance (e.g., sepsis following
CPB) and/or hemodynamics in order for subsequent measurements of CO with continuous pulse contour CO to be
Cardiac Output, Measurements. Table 1 Cardiac output monitoring, different tools and clinical applications
Settings: OR; OR/ICU; OR/ICU
Clinical applications: *Unexpected low CO; *Cardiac patients undergoing major non-cardiac surgery; *Heart failure; Cardiac patients undergoing minor surgery; Hyperdynamic CV status; Patients with PHP and RV dysfunction; Intraoperative time in Ltx; Liver transplantation; HD monitoring for ICU stay (long time); Lung transplantation; Major orthopedic surgery; ARDS/Septic shock/Heart failure; Cardiac surgery; ARDS/Septic shock
Baseline approach: Arterial line; Arterial line (radial/femoral); Arterial line; Peripheral venous line or CVC; CVC
Devices: PAC; ED; ED; PAC; Vigileo; PiCCOplus; Advanced PAC (Vigilance); LiDCOplus; LiDCOplus
Limitations: Arrhythmias (Vigileo); Vascular surgery (PiCCO); PAC-related complications; Arterial signal quality (Vigileo); Esophageal surgery (ED); Time-limited insertion; Esophageal surgery (ED); Arterial signal quality (PiCCO/LiDCO)
*If available and you are familiar with it, first check the CV status with a TTE and/or a TEE
OR: operating room; ICU: intensive care unit; CO: cardiac output; Ltx: liver transplant; HD: hemodynamic; PHP: pulmonary hypertension; RV: right ventricle; CVC: central venous catheter; PAC: pulmonary artery catheter; ED: esophageal Doppler; CV: cardiovascular; TTE: transthoracic echocardiography; TEE: transesophageal echocardiography
carried out with the usual accuracy. This prerequisite is
mainly due to resulting changes in vasomotor tone. When
these criteria are fulfilled, the accuracy of both techniques
is sufficient for clinical purposes.
Moreover, the PiCCO system (based on the TPID technique) allows the estimation of a preload index, the intrathoracic blood volume index (ITBVI), and a “lung edema” index, the extravascular lung water index (EVLWI). The ITBVI has been extensively investigated as a static preload index in critically ill and surgical patients (cardiac surgery, liver, and lung transplant surgery). These studies have shown that the ITBVI reflects preload better than the filling pressures, particularly during the intraoperative period. The EVLWI correlates with mortality and seems to be an independent predictor of prognosis in critically ill patients, especially in septic patients. It therefore seems reasonable that fluid management based on EVLWI measurements could benefit the critically ill; indeed, it has been shown that fluid restriction and keeping the EVLWI low improve oxygenation, reduce the duration of mechanical ventilation, and may also improve survival. Unfortunately, no definitive data on EVLWI and its clinical applications have been published so far. Moreover, stroke volume variation (SVV) and pulse pressure variation (PPV), fluid responsiveness indices validated experimentally and clinically in mechanically ventilated patients without spontaneous breathing activity, are also continuously monitored with the PiCCOplus.
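For orientation, PPV is conventionally computed over a respiratory cycle as the difference between the maximal and minimal pulse pressures divided by their mean. The short Python sketch below illustrates that textbook formula only; it does not reproduce the proprietary PiCCO or LiDCO algorithms, and the example values are purely illustrative.

```python
def pulse_pressure_variation(pp_max_mmhg: float, pp_min_mmhg: float) -> float:
    """PPV (%) = 100 * (PPmax - PPmin) / ((PPmax + PPmin) / 2),
    where PPmax and PPmin are the largest and smallest pulse pressures
    observed over one respiratory cycle."""
    mean_pp = (pp_max_mmhg + pp_min_mmhg) / 2.0
    return 100.0 * (pp_max_mmhg - pp_min_mmhg) / mean_pp

# Illustrative values: pulse pressure swinging between 52 and 40 mmHg over one breath
print(f"PPV = {pulse_pressure_variation(52.0, 40.0):.1f}%")  # about 26%
```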
Together with CO, the LiDCOplus system provides information on a series of derived variables, including oxygen delivery (Hb values and SaO2 need to be entered manually) and fluid responsiveness indices such as PPV, SVV, and systolic pressure variation (SPV). Recently, a protocol targeting a DO2I of 600 mL/min/m2 in high-risk surgical patients using the LiDCO system was shown to improve patient outcome, reduce morbidity, and shorten hospital stay. Good agreement (accuracy and precision) between the intermittent and continuous data obtained with the LiDCO system and the PAC has also been demonstrated in hyperdynamic patients in the postoperative period following liver transplantation.
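As an illustration of how such monitors derive oxygen delivery from manually entered Hb and SaO2 values, the following minimal Python sketch computes the oxygen delivery index (DO2I) from the standard arterial oxygen content formula. The function name and example values are illustrative assumptions and are not taken from the LiDCO or Vigileo software.

```python
def oxygen_delivery_index(ci_l_min_m2: float, hb_g_dl: float,
                          sao2_fraction: float, pao2_mmhg: float = 0.0) -> float:
    """Estimate DO2I (mL O2/min/m2) from cardiac index and arterial O2 content.

    CaO2 (mL O2/dL) = 1.34 * Hb * SaO2 + 0.003 * PaO2
    DO2I = CI (L/min/m2) * CaO2 * 10   (the factor 10 converts dL to L)
    """
    cao2 = 1.34 * hb_g_dl * sao2_fraction + 0.003 * pao2_mmhg
    return ci_l_min_m2 * cao2 * 10.0

# Example: CI 3.2 L/min/m2, Hb 11 g/dL, SaO2 98%, PaO2 95 mmHg
do2i = oxygen_delivery_index(3.2, 11.0, 0.98, 95.0)
print(f"DO2I = {do2i:.0f} mL/min/m2")  # ~471, below the 600 mL/min/m2 target cited above
```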
The uncalibrated Vigileo technique has shown conflicting results, even when the latest-generation algorithm was used, particularly in the hyperdynamic setting (liver transplant and septic shock patients). Together with CO, the monitor provides other derived variables, such as oxygen delivery (Hb, SaO2, and PaO2 values are entered manually) and dynamic indices of fluid responsiveness. On the basis of the current literature, Vigileo monitoring seems to be useful in patients with a low or normal CO, for example for intraoperative goal-directed therapy (GDT). At present, its use in septic shock, liver transplant, and arrhythmic patients should not be encouraged.
Doppler flow measurements for CO estimation can be performed in the descending aorta using probes that are smaller than conventional TEE probes; their correct insertion is crucial and requires highly skilled operators. Initially, the esophageal Doppler technique was used in multiple prospective, randomized, controlled perioperative trials to guide hemodynamic management, and it consistently demonstrated a reduction in complications and lengths of hospital stay. It has been used
intraoperatively in cardiac surgical, femoral neck repair,
and abdominal surgical patients, as well as postoperatively
following cardiac surgery and multiple trauma; the control group in each study was randomized to standard
practice either with or without the use of a central venous
catheter. More recent clinical trials have, however, shown
conflicting results.
Limited accuracy may result from signal detection
problems, the assumption of fixed regional blood flow or
the use of nomograms to determine aortic cross-sectional
area. The HemoSonic 100 device was developed to eliminate the latter by echocardiographic aortic diameter measurement, but optimal adjustment of both the Doppler
technique and the ultrasonic signal can be challenging.
Therefore, the value of the esophageal Doppler technique
is limited in clinical practice. However, Doppler devices
may be used in specific situations by skilled observers.
Based on the ability to reliably track CO changes over
time, early goal directed therapy in the intraoperative
setting may be a typical indication, since different studies
have demonstrated improved outcomes when using this
concept.
Until now, tools for continuous CO monitoring have been validated as if they were tools for snapshot measurements. Most authors have compared variations in CO between two time-points and have used Bland–Altman representations to describe the statistical agreement between these variations. The impact of time and of repetitive measurements over time has not been taken into consideration. Recently, Squara and coworkers proposed a conceptual framework for the validation of CO monitoring devices [6]. Four quality criteria were suggested and studied: accuracy (a small bias), precision (a small random error in measurements), a short response time, and an accurate amplitude response. Because a certain amount of deviation in each of these four criteria is accepted, the authors proposed adding a fifth criterion: the ability to detect significant directional changes in cardiac output. Other important issues regarding the design of studies to validate cardiac output monitoring tools were also underscored: the choice of the patient population to be studied, the choice of the reference method, the method of data acquisition, data acceptability checking, data segmentation, and the final evaluation of reliability. The application of this framework underlines the importance of precision and response time for the clinical acceptance of monitoring tools.
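Because most of the validation studies discussed here rest on Bland–Altman analysis of paired CO measurements, a minimal sketch of that calculation may help. It assumes simple paired readings from a test device and a reference method; the function, the example values, and the commonly used 30% percentage-error acceptability threshold are illustrative and are not taken from any of the cited studies.

```python
import numpy as np

def bland_altman(test, reference):
    """Bland-Altman agreement statistics for paired CO measurements.

    Returns the bias (mean difference), the limits of agreement
    (bias +/- 1.96 SD of the differences), and the percentage error
    (1.96 SD divided by the mean CO of the paired readings), which is
    commonly compared with a 30% acceptability threshold.
    """
    test, reference = np.asarray(test, float), np.asarray(reference, float)
    diff = test - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 1.96 * sd / np.mean((test + reference) / 2.0) * 100.0
    return bias, loa, pct_error

# Hypothetical paired CO values (L/min) from a test device and a reference method
bias, loa, pe = bland_altman([5.1, 4.3, 6.0, 5.6], [4.8, 4.5, 5.7, 6.0])
print(f"bias {bias:+.2f} L/min, LoA {loa[0]:.2f} to {loa[1]:.2f} L/min, error {pe:.0f}%")
```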
References
1. Practice Guidelines for Pulmonary Artery Catheterization (2003) An update report by the American Society of Anesthesiologists Task Force on Pulmonary Artery Catheterization. Anesthesiology 99:988–1014
2. Della Rocca G, Costa MG, Pompei L, Coccia C, Pietropaoli P (2002) Continuous and intermittent cardiac output measurement: pulmonary artery catheter versus aortic transpulmonary technique. Br J Anaesth 88:350–356
3. Costa MG, Della Rocca G, Chiarandini P, Mattelig S, Pompei L, Barriga MS, Reynolds T, Cecconi M, Pietropaoli P (2008) Continuous and intermittent cardiac output measurement in hyperdynamic conditions: pulmonary artery catheter vs. lithium dilution technique. Intensive Care Med 34(2):257–263
4. McGee WT, Horswell JL, Calderon J, Janvier G, Van Severen T, Van den Berghe G, Kozikowski L (2007) Validation of a continuous, arterial pressure-based cardiac output measurement: a multicenter, prospective clinical trial. Crit Care 11:R105
5. Singer M (2009) Oesophageal Doppler. Curr Opin Crit Care 15(3):244–248
6. Squara P, Cecconi M, Rhodes A, Singer M, Chiche JD (2009) Intensive Care Med, Epub ahead of print, 11 Jul 2009
Cardiac Steroids and Glycoside
Toxicity
NIMA MAJLESI, DIANE P. CALELLO, RICHARD D. SHIH
Department of Emergency Medicine, Morristown
Memorial Hospital, Morristown, NJ, USA
Synonyms
Digoxin toxicity; Foxglove toxicity; Oleander toxicity
Definition
Cardioactive steroids are a class of animal- and plant-derived compounds with a steroid nucleus and specific inotropic, chronotropic, and dromotropic effects. The
term cardiac glycoside refers to a subgroup of cardioactive
steroids that also contain sugar residues and include
digoxin, digitalis, and ouabain. In the United States, the
most common source of cardioactive steroid exposure is
pharmaceutical digoxin. Plant sources include oleander (Nerium oleander), foxglove (Digitalis spp.), lily of the valley (Convallaria majalis), and red squill (Urginea maritima), a rodenticide of historical significance. The dried secretions of the Bufo toad, a purported aphrodisiac when topically applied, contain a cardioactive steroid and have also caused toxicity when ingested.

Pathophysiology
Ingested cardioactive steroids (CAS) are approximately
80% bioavailable. However, toxicokinetics depends on
multiple factors, including electrolyte abnormalities, medication interactions, renal dysfunction, and disruption
of gastrointestinal flora. Hypokalemia, in particular,
results in excessive sensitivity to CAS as less binding to
skeletal Na+-K+ ATPase may result in increased effects on
the myocardium. Hypomagnesemia and hypercalcemia
may also potentiate CAS toxicity. Drug interactions are
unfortunately common with other cardiovascular medications. Amiodarone, spironolactone, furosemide, diltiazem, carvedilol, and verapamil can all interfere with the
kinetics of CAS through alteration of protein binding,
inactivation of P-glycoprotein, and decreased renal
perfusion.
Cardioactive steroids inhibit the Na+-K+ ATPase on
the membrane of the cardiac myocyte, thereby raising
the intracellular Na+ content, which then prevents the
Na+-Ca2+ antiporter from expelling Ca2+ in exchange
for Na+. This results in an increase in intracellular Ca2+
within the myocyte and calcium-mediated Ca2+ release
from the sarcoplasmic reticulum. Positive inotropy is
achieved by increased available calcium to bind troponin,
actin, and myosin. CAS can also affect the parasympathetic nervous system by increasing acetylcholine release from the vagus nerve.
The resultant effect on cardiac conduction and electrophysiology is variable. Therapeutically, CAS cause a
decreased rate of depolarization and conduction through
both the sinoatrial and atrioventricular nodes. A higher
resting membrane potential also leads to shortened repolarization and increased automaticity of the atria and
ventricles. The common ECG finding in patients
on therapeutic CAS is referred to as “digitalis effect.”
Digitalis effect is characterized by PR interval prolongation, QT shortening, and the ST-segment forces opposite
in direction from the QRS. This is a reflection of therapeutic effect in contrast to the ECG findings in CAS
toxicity.
Presentation
It is important to distinguish between those patients with
acute and chronic CAS toxicity as clinical manifestations
Cardiac Steroids and Glycoside Toxicity
and management differ significantly. Assessment of the serum digoxin concentration, serum electrolytes (especially potassium and magnesium), renal function, and the electrocardiogram is essential in determining the severity of toxicity and the need for treatment. In the case of plant-derived CAS exposure, the serum digoxin immunoassay exhibits some cross-reactivity with these compounds and will provide a qualitative assessment; these cases should be managed more by the clinical picture than by the actual serum level. Digoxin toxicity, however, typically requires a concentration greater than 2 ng/mL.
In acute toxicity, early nausea and vomiting are nearly
universal; extracardiac manifestations may include confusion and lethargy. ECG findings vary widely, and essentially
any rhythm is possible in CAS toxicity with the notable
exception of rapidly conducted supraventricular tachydysrhythmias. Bidirectional ventricular tachycardia, while pathognomonic, is rarely seen. The most commonly observed
findings are premature ventricular contractions and atrial
fibrillation or flutter with atrioventricular block [1].
Digitalis effect is not the result of CAS toxicity and represents normal therapeutic effect as mentioned earlier.
An elevated serum potassium concentration as a result
of Na+-K+ ATPase pump inhibition has been shown to be
prognostic in adults with acute ingestion. A large observational cohort study performed before the development
of digoxin-specific antibodies demonstrated a strong
correlation between serum potassium and mortality [2].
A potassium concentration between 5.0 and 5.5 mEq/L was associated with a 50% mortality, and a serum potassium concentration greater than 5.5 mEq/L was associated with 100% mortality. Though hyperkalemia may
exacerbate the toxicity due to CAS, it is more a marker
of severity in adults with acute ingestion rather than the
primary etiology.
Chronic CAS toxicity is more challenging both to
diagnose and to manage. Systemic symptoms are often
present, including malaise, GI symptoms, weakness, confusion, delirium, and various visual disturbances. These
often include decreased visual acuity and visual color
changes (xanthopsia).
Unlike acute CAS toxicity, chronic toxicity is often complicated by hypokalemia due to concomitant diuretic use, which, as mentioned, may potentiate toxicity. Hypomagnesemia, when present, will enhance the myocardial irritability these patients exhibit. However, hyperkalemia and
hypermagnesemia may also be present, most commonly
in the setting of new onset renal failure or insufficiency.
The same bradydysrhythmias and ventricular tachydysrhythmias that occur in acute toxicity are more common in patients presenting with chronic CAS toxicity.
Treatment
Digoxin-specific antibody fragments have revolutionized
the treatment of CAS toxicity. The decision to administer them is multifactorial and should consider the amount ingested, the serum level, clinical evidence of toxicity, and any underlying conditions of the patient that may be exacerbated by complete removal of digoxin where it is therapeutically needed. In general, digoxin-specific Fab should be given
to patients with:
1. CAS-related dysrhythmias
2. Acute ingestion with potassium greater than 5 mEq/L
3. Chronic toxicity presenting with dysrhythmias, CNS
findings, or gastrointestinal symptoms
4. Serum digoxin concentration greater than 15 ng/mL at
any time or greater than 10 ng/mL 6 h post-ingestion
regardless of symptoms in an acute ingestion
5. Poisoning with a non-digoxin CAS
The optimal dosing of digoxin-specific Fab can be
determined based on the serum concentration, amount
ingested, or clinical presentation [3].
1. If the serum digoxin concentration is known, the dose is calculated as:
# of vials = [serum digoxin concentration (ng/mL) × patient weight (kg)] / 100
2. If the amount ingested is known and the ingestion is acute, the dose is calculated as:
# of vials = [amount ingested (mg) / 0.5 (mg/vial)] × 0.8 (reflecting 80% bioavailability)
3. In the patient presenting with life-threatening toxicity
requiring immediate treatment before the serum concentration can be obtained, with an unknown amount
ingested, the recommended empiric dose is 10–20 vials
in acute poisoning, and 5 vials in chronic poisoning.
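The two calculations above are simple enough to script. The following minimal Python sketch mirrors them for illustration only; the function names and the rounding-up choice are assumptions rather than product labeling, and it is no substitute for the package insert or poison-center advice.

```python
import math

DIGOXIN_BOUND_PER_VIAL_MG = 0.5   # each Fab vial binds ~0.5 mg of digoxin
ORAL_BIOAVAILABILITY = 0.8        # ~80% of an ingested dose is absorbed

def vials_from_serum_level(serum_ng_ml: float, weight_kg: float) -> int:
    """# of vials = [serum digoxin (ng/mL) x weight (kg)] / 100, rounded up."""
    return math.ceil(serum_ng_ml * weight_kg / 100.0)

def vials_from_amount_ingested(amount_mg: float) -> int:
    """# of vials = [amount ingested (mg) / 0.5 mg per vial] x 0.8, rounded up."""
    return math.ceil(amount_mg / DIGOXIN_BOUND_PER_VIAL_MG * ORAL_BIOAVAILABILITY)

# Worked examples: a 6 ng/mL level in a 70 kg adult, or a known 5 mg acute ingestion
print(vials_from_serum_level(6.0, 70.0))   # -> 5 vials
print(vials_from_amount_ingested(5.0))     # -> 8 vials
```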
Gastrointestinal decontamination should be considered in patients with acute ingestions especially for those
with non-digoxin CAS ingestions. Multiple-dose activated charcoal may be effective due to the enterohepatic
recirculation of CAS. Gastric lavage and emesis should be
limited to the very few presenting early with non-digoxin
CAS ingestions.
Replacement of potassium and magnesium should
occur prior to administration of digoxin-specific antibody
fragments as correction often leads to abatement of the
presenting cardiac dysrhythmia, and Fab administration
may decrease the potassium further. In contrast, correction of hyperkalemia should begin with administration
of digoxin-specific antibody fragments, followed by
intravenous insulin, dextrose, and sodium bicarbonate,
being careful not to cause hypokalemia. Intravenous calcium administration is contraindicated due to the relative
intracellular hypercalcemia which exists in CAS-poisoned
patients. Administration has been associated with cardiac
dysfunction and arrest.
Transvenous and external pacing are contraindicated in patients with CAS poisoning due to increased adverse
outcomes associated with delay in digoxin-specific Fab
administration and conversion to unstable ventricular
dysrhythmias [4]. However, cardioversion and defibrillation are indicated in those with hemodynamic instability
and ventricular dysrhythmias.
Prognosis
The prognosis of CAS-poisoned patients is dependent
upon the complications of CAS toxicity that develop.
After-care
After management of acute medical consequences,
patients with intentional overdoses should be referred
for counseling. Patients with chronic CAS toxicity should
be evaluated for alternative treatment of the underlying
disorders so that unintentional toxicity will be less likely to
recur.
References
1. Bismuth C, Gaultier M, Conso F, Efthymiou ML (1973) Hyperkalemia in acute digitalis poisoning: prognostic significance and therapeutic implications. Clin Toxicol 6(2):153–162
2. Ma G, Brady WJ, Pollack M, Chan TC (2001) Electrocardiographic manifestations: digitalis toxicity. J Emerg Med 20(2):145–152
3. Antman EM, Wenger TL, Butler VP Jr, Haber E, Smith TW (1990) Treatment of 150 cases of life-threatening digitalis intoxication with digoxin-specific Fab antibody fragments. Final report of a multicenter study. Circulation 81(6):1744–1752
4. Taboulet P, Baud FJ, Bismuth C, Vicaut E (1993) Acute digitalis intoxication – is pacing still appropriate? J Toxicol Clin Toxicol 31(2):261–273
Cardiac Tamponade
DOMINIC W. K. SPRAY
Anaesthetics and Intensive Care, St. George’s Hospital,
London, UK
Synonyms
Pericardial tamponade

Definition
Cardiac tamponade describes the hemodynamic sequelae resulting from compression of the cardiac chambers due to accumulation of fluid (or gas) within the pericardium.

Pathophysiology [1, 2]
An understanding of the underlying pathophysiology is essential to fully grasp the clinical findings associated with tamponade. The pericardial sac is relatively inelastic. It can stretch to accommodate a limited volume (the pericardial reserve volume) before any further increase in pericardial contents causes increased pericardial pressure and competition between the extracardiac contents and the contents of the cardiac chambers for the finite available space. As tamponade develops, the cardiac chambers are compressed and their compliance is reduced. This imposes a constraint on cardiac filling, reduces stroke volume and ultimately leads to a decrease in cardiac output.
The compliance of the pericardium and the rate of fluid accumulation determine the likelihood of tamponade occurring (Fig. 1). Rapid accumulation of fluid (e.g., traumatic intrapericardial hemorrhage) will rapidly result in tamponade. Conversely, with slow fluid accumulation (e.g., as a result of inflammation), up to 2 L of fluid may be present before causing tamponade.
Cardiac Tamponade. Figure 1 The effect of rapid
accumulation of pericardial fluid compared to gradual
accumulation. In acute tamponade, very little volume (point a)
is needed before pericardial reserve volume is exhausted and
a critical pressure reached. With gradual accumulation,
compensatory mechanisms allow more volume (point b) to be
accommodated before this critical pressure is reached. In both
cases, small increments in pericardial fluid beyond this point
result in rapid pressure rises
Chronic changes in pericardial compliance allow the pericardium to accommodate this extra fluid. Compensatory
mechanisms based on sympathetic stimulation (tachycardia, increased peripheral vascular resistance and increased
ejection fraction due to increased contractility) with
increased blood volume (upregulation of the renin-angiotensin system) act to delay the decrease in cardiac
output.
In all cases, at the critical inflection point, very little
extra fluid needs to accumulate to substantially raise pericardial pressure and cause tamponade (likewise, great
benefit is seen from the initial removal of fluid during
pericardiocentesis). As tamponade increases, eventually
the diastolic pressures in all cardiac chambers equalize at
a level similar to the pericardial pressure (15–30 mmHg).
Pneumopericardium is rare but may occur after
trauma, due to iatrogenic causes, or secondary to gas-forming infections. It can present in a similar way to
fluid tamponade and is similarly a medical emergency.
Hemodynamic Sequelae
Venous Return
Venous return usually has two peaks – one during early
diastole and one during ventricular systole. Progressive
tamponade causes increased pericardial pressure throughout the cardiac cycle. Since the heart chambers in total are
fullest during diastole (and diastolic pressure is increased)
there is effectively no space for extra blood to flow into. In
contrast, the stroke volume leaving during ventricular
systole makes room for venous return. Filling is therefore
progressively shifted towards systole and diastolic
filling diminishes. Jugular venous distension is present
and the y descent of the central venous pressure trace is lost, but the x descent remains. Once tamponade is
advanced enough, filling also drops during systole leading
to a further fall in total venous return and cardiac output.
Ventricular Interdependence and Pulsus
Paradoxus
Changes in pleural pressure are still transmitted to the
heart during tamponade. Therefore, in spontaneous inspiration, systemic venous return increases due to the fall in
intrathoracic pressure. Since the RV free wall is constricted
by the pericardial effusion, the extra volume can only be
accommodated by shifting the interventricular septum
leftward at the expense of left ventricular volume. Therefore, during spontaneous inspiration, left ventricular cardiac output drops, manifested by an exaggerated decrease
in systolic blood pressure (>10 mmHg), a phenomenon
known as pulsus paradoxus (Fig. 2). Increased LV
afterload due to the decreased intrathoracic pressure may also contribute to this decrease in left ventricular stroke volume.

Cardiac Tamponade. Figure 2 Pulsus paradoxus. Note the decreased aortic systolic pressure and narrowing of pulse pressure during spontaneous inspiration, and the rise again during expiration. Timings are reversed in the RV pressure trace
Right ventricular output will increase during inspiration secondary to the increased venous return. Since the
ventricles are in series, a few beats later, this also contributes to an increased left ventricular stroke volume during
expiration. Thus the effects of ventilation during tamponade are to produce right- and left-sided stroke volume and pressure changes that are 180° out of phase.
Note that with intermittent positive pressure ventilation (IPPV), the above findings will be reversed as intrathoracic pressure is at its highest during inspiration and
decreases during expiration.
Decreased Cardiac Output
The end-result of the effects of tamponade is a decrease in
cardiac output. The decrease in systemic venous return
and reduction in end diastolic chamber volumes are the
primary reasons for the low cardiac output state. End
systolic volume does also decrease due to increased contractility secondary to sympathetic activation. However,
this is not sufficient to maintain stroke volume, hence the
reliance on tachycardia to maintain adequate cardiac
output.
The effects of IPPV itself on cardiac output are complicated and variable, but on the whole, IPPV tends to
reduce venous return and consequently cardiac output.
Therefore, IPPV should be avoided in patients who are not
intubated at presentation. Corrective treatment by rapid
pericardiocentesis should be the aim as this will reverse
most indications for ventilation.
Etiology and Subtypes of Tamponade
Classical Tamponade
This presents with a spectrum of severity, from a simple
effusion with few symptoms to a life-threatening emergency. It is sometimes divided on the basis of duration into
acute and sub-acute. Acute tamponade is usually due to
trauma, cardiovascular rupture (e.g., retrograde flow from
type A aortic dissection) or iatrogenic causes (e.g., cardiac
catheterization). Sub-acute tamponade develops more
insidiously, usually as a result of an inflammatory process
(e.g., infection, neoplasm, autoimmune, radiation or
drug-induced) but sometimes due to a non-inflammatory
process (e.g., hypothyroidism, amyloidosis) in which the
underlying pathogenesis of the effusion remains unclear.
Idiopathic pericardial effusion is also seen, often
presenting with large volumes of pericardial fluid. Whatever the cause, it is important to treat any decompensation
as a medical emergency and institute the appropriate
management.
Low Pressure Tamponade
It is the trend towards pressure equalization between pericardial pressure and intracardiac pressure that gives rise
to tamponade. Therefore in patients who are severely
hypovolemic for whatever reason (e.g., hemorrhage, hemodialysis), with low intracardiac pressures, tamponade has
been demonstrated with pericardial pressures in the range
of 6–12 mmHg. Echocardiographic features are similar to
patients with classic tamponade (effusion size, chamber
collapse, exaggerated respiratory variation in transvalvular
flow), but clinical signs such as tachycardia, jugular venous
distension and pulsus paradoxus are less prevalent.
Regional Tamponade
Loculated or localized effusions or hematoma can occur,
commonly after cardiac surgery. The hemodynamic effects
of these vary widely depending on the affected chambers
and the typical features of classical tamponade are often
missing. Although the presentation may mimic that of
heart failure, it is difficult to generalize about clinical
findings and detailed imaging is often necessary to establish the diagnosis. A high index of suspicion should prevail
if there is a possibility of regional tamponade given the
clinical findings and a suggestive history.
Evaluation and Assessment
As mentioned above, tamponade manifests a continuum
of symptoms with a range of clinical severity. Presentation
may also include many non-specific signs and symptoms.
Ultimately, the presentation will be that of cardiogenic shock, and the differential diagnosis should include other causes for this.
History
In cases of suspected tamponade, the usual history should
be taken and predisposing factors should be sought. This
may be more relevant on the ICU where patients may not
be able to communicate.
Symptoms
Symptoms tend to relate to the degree of impairment in
cardiac output. Patients may complain of tachycardia,
dyspnea and fatigue as well as a central chest discomfort,
often relieved by sitting forward.
Clinical Signs
These become all the more relevant in sedated intensive
care patients who are unable to give a history or describe
their symptoms. Sympathetic upregulation resulting in
tachycardia is seen in virtually all cases, the exceptions being patients whose underlying disease causes bradycardia (e.g., hypothyroidism).
Cardiac sounds may be muffled and the apex beat
difficult to palpate in the presence of a large effusion.
A pericardial rub may be present if pericarditis is the
underlying etiology. Elevated jugular venous pressure
is seen, with the y descent often attenuated or absent
due to the absence of diastolic filling as discussed above.
The x descent is usually preserved. This is usually more
easily appreciated on a central venous pressure trace than
by clinical examination.
Pulsus paradoxus can be demonstrated in most cases
of tamponade. The sphygmomanometer cuff is deflated
slowly. Initially, the first Korotkoff sounds are only audible
during spontaneous expiration (or IPPV inspiration), but
as the cuff is deflated further, sounds are heard throughout
the respiratory cycle. The difference in pressure between
these two events quantifies the degree of pulsus paradoxus.
There are situations in which tamponade does not give rise
to pulsus paradoxus. These include any pre-existing condition where left ventricular diastolic pressure or volume
are already raised (e.g., atrial septal defects, severe aortic
regurgitation, chronic renal failure). Pulsus paradoxus
may also occur outside the context of tamponade. It is
seen in severe asthma or COPD, pulmonary embolism and
in up to one third of cases of constrictive pericarditis.
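A worked example of the bedside sphygmomanometer measurement described above may be helpful; the cuff pressures below are purely illustrative and the >10 mmHg cut-off is the one quoted earlier in this entry.

```python
def pulsus_paradoxus_mmhg(first_korotkoff_expiration_only: float,
                          korotkoff_throughout_cycle: float) -> float:
    """Difference between the cuff pressure at which Korotkoff sounds are first
    heard only during expiration and the pressure at which they are heard
    throughout the respiratory cycle."""
    return first_korotkoff_expiration_only - korotkoff_throughout_cycle

# Example: sounds first heard only in expiration at 128 mmHg, then throughout
# the cycle at 112 mmHg -> 16 mmHg, exceeding the ~10 mmHg threshold noted above
print(pulsus_paradoxus_mmhg(128, 112))
```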
In intensive care patients, tamponade must always
be amongst the differential diagnosis of any patient who
manifests cardiogenic shock. The usual symptoms of hypotension, diaphoresis, shut down extremities and oliguria
may also be used as a crude indicator of progression
of tamponade over time and the need to perform pericardiocentesis.

Cardiac Tamponade. Figure 3 Electrical alternans (leads I, V3, and V6 shown). Note the changing size of sequential QRS complexes, possibly due to the heart “swinging” in the pericardial effusion
Investigations
Tamponade remains a clinical diagnosis, but certain investigations are useful to confirm suspicions.
Electrocardiography
This usually demonstrates tachycardia and may be of
lower voltage than usual, although this is a non-specific
finding. It may be possible that this finding is limited to
tamponade alone rather than effusion per se. Patterns
associated with acute pericarditis may also present. Electrical alternans (Fig. 3) is very specific, but insensitive for
tamponade. There is a beat to beat variation in the size of
electrical complexes, often, but not necessarily, restricted
to the QRS. This is thought to be possibly due to the heart
“swinging” in the pericardial fluid although the exact
mechanism is poorly understood.
Chest Radiography
At least a moderate pericardial effusion (approx. 200 ml) is
required before the cardiac silhouette begins to enlarge to
a characteristic, round, “flask-shaped” appearance.
A lateral view may show a pericardial fat pad due to
separation of pericardial fat from epicardium by the pericardial fluid. Chest radiographs typically appear normal in
acute tamponade, except that the lung fields usually
appear oligemic.
Echocardiography [3]
Echocardiography remains the standard for non-invasive
assessment of pericardial effusion and its hemodynamic
consequences. There is a class 1 recommendation for
its use in assessment of patients with suspected pericardial disease (American College of Cardiology/American
Heart Association/American Society of Echocardiography
guidelines 2003). The presence of pulmonary hypertension may mask echocardiographic findings in tamponade.

Cardiac Tamponade. Figure 4 Large anterior and posterior effusion containing fibrin strands

Typical echocardiographic findings associated with tamponade include:
● Effusion
Effusion is usually well visualized (Fig. 4). It normally needs to be circumferential for classical tamponade to occur, but regional tamponade may present with loculated or localized effusions.
● Diastolic chamber collapse
During atrial relaxation, the pressure in the RA is at
its lowest and pericardial pressure at its highest leading
to atrial collapse (Fig. 5). If this persists for more than one third of the cardiac cycle, it is highly specific and sensitive for tamponade. Brief collapse may occur for other reasons.

Cardiac Tamponade. Figure 5 Right atrial collapse as a result of tamponade. In mid diastole, the RA is seen to be full (a). After end diastole the RA collapses – this has persisted into early systole (b). Note the free wall of the RV and the LA are not well visualized in b. Eff, Effusion; RA, Right Atrium; RV, Right Ventricle; LA, Left Atrium; LV, Left Ventricle
RV collapse occurs in early diastole when the RV is still
empty (Fig. 6). This is a less sensitive, but more specific
finding for tamponade than RA collapse and may not
occur when diastolic pressure is raised, there is raised RV
afterload, or there is RV hypertrophy (since the RV
becomes less compliant).
Left sided collapse occurs less often. LV collapse occurs
very rarely since the wall is more muscular. LA collapse
when found is very specific for tamponade.
● Ventricular interdependence and septal shift
As discussed earlier, changes in right sided filling with
spontaneous inspiration result in the interventricular septum shifting into the LV (Fig. 7), the cause of pulsus
paradoxus.
Respiratory variation in trans-mitral and trans-tricuspid flow is also a result of the respiratory influence on filling and ventricular interdependence. During spontaneous inspiration (or IPPV expiration), right-sided filling increases and trans-tricuspid flow is increased relative to that during spontaneous expiration (or IPPV inspiration). The reverse is true for trans-mitral flow. Therefore the respiratory variation in flow across the atrio-ventricular valves is 180° out of phase (Fig. 8).
The presence of pulmonary hypertension can mask
some of the echocardiographic signs of tamponade.
● Venous hemodynamics
A plethoric IVC is often seen due to the raised central venous pressure, and the normal inspiratory reduction in IVC diameter is blunted (<50%) during spontaneous inspiration (Fig. 9) despite the transmission of intrathoracic pressure to the RA. Doppler patterns of atrial filling
will also reflect the shift towards systolic filling and the loss
of the diastolic component.
CT/MRI [4]
These should not be used in unstable patients, but may
have a role in situations where the diagnosis is unclear and
the patient is relatively stable (e.g., suspected regional
tamponade). Pericardial effusion may also be an incidental finding on CT. Findings suggestive of tamponade
include large sized effusions, systemic venous distension,
chamber deformity and interventricular shift as seen
on echocardiography. Cine MRI is also capable of
demonstrating all findings seen by echocardiography. Underlying pericardial pathology is better assessed by CT than by echocardiography.

Cardiac Tamponade. Figure 6 RV collapse in diastole due to tamponade (a). Note that the RV end-diastolic volume is also reduced (b), contributing to the low cardiac output state. Eff, Effusion; RA, Right Atrium; RV, Right Ventricle; LA, Left Atrium; LV, Left Ventricle

Cardiac Tamponade. Figure 7 Interventricular septal shift seen in tamponade with spontaneous inspiration. Note both frames are taken at the same point of the electrocardiogram. In (a), during spontaneous expiration, a normal anatomical relationship exists. In (b), the patient has inhaled, resulting in increased filling of the right side and bowing of the interventricular septum into the left ventricle as the only way to accommodate the extra RV volume. Eff, Effusion; RA, Right Atrium; RV, Right Ventricle; LA, Left Atrium; LV, Left Ventricle

Cardiac Tamponade. Figure 8 The effect of spontaneous ventilation on atrio-ventricular valve flow velocities in tamponade. Trans-tricuspid flow (a) increases during spontaneous inspiration due to the increased systemic venous return, and decreases during expiration. During inspiration, trans-mitral flow (b) is impaired due to reduced LA filling and increased LV pressure as a result of ventricular interdependence. The reverse is true during expiration

Cardiac Tamponade. Figure 9 M-mode echocardiography of the inferior vena cava during tamponade, showing a plethoric IVC with very little respiratory variation in size
Invasive Pressure Measurement
A pulmonary artery catheter will show equalization of
diastolic pressure across cardiac chambers and the respiratory changes in right and left sided pressures responsible
for pulsus paradoxus. It is also useful to monitor the
effects of treatment – filling pressures that remain elevated
after pericardiocentesis may indicate underlying pericardial pathology. Monitoring after pericardiocentesis can
help in early detection of any reaccumulation of fluid
and pressure changes indicating impending tamponade.
Treatment
This depends on the severity of the hemodynamic disturbance. Early tamponade with minimal hemodynamic disturbance and no significant loss in cardiac output may be
treated conservatively, especially since the risks associated
with pericardiocentesis are increased with small effusions.
Treatment should be aimed at the underlying cause (e.g.,
steroids for autoimmune disease, correction of clotting
etc.) and monitoring must be commensurate with the
clinical picture. Invasive monitoring (including consideration of a pulmonary artery catheter) is indicated and
these patients should be nursed in a high dependency
setting. Serial assessment is needed to ascertain the likelihood of worsening tamponade and the patient watched
carefully for evidence of end organ dysfunction due to any
decrease in cardiac output (e.g., oliguria, altered mental
state).
In patients with idiopathic effusion alone, but no
tamponade, opinion is divided as to treatment. Removal
of fluid should only be undertaken for treatment of possible tamponade, and not for routine diagnosis. There is
conflicting evidence as to the rate of progression to
tamponade, but an effusion measuring greater than
20 mm on echocardiography should be considered for
drainage. This may also reduce recurrence in the future.
Patients with overt tamponade represent a clinical
emergency and require definitive treatment by removal
of the pericardial fluid. Intravenous fluids have been
used as a temporizing measure, with best effects seen in
patients with a systolic blood pressure of <100 mmHg.
Hydration raises pericardial pressure as well as RA pressure and LV end diastolic pressure which may explain why
some patients do not benefit. Positive inotropes with or
without vasodilators are of limited efficacy, probably
because of the maximal endogenous sympathetic drive
seen in most cases.
Pericardiocentesis and Pericardectomy [4]
Pericardial fluid is usually removed either percutaneously
or by surgical pericardectomy, although balloon
pericardectomy has been described in neoplastic effusions.
Echocardiography allows the optimum site for
pericardiocentesis to be located (the shortest route to the
pericardial fluid via an intercostal approach). Echocardiography also reduces the risk of myocardial puncture and
allows visualization of fluid removal and consequent
hemodynamic effects. Under full asepsis, a needle is
advanced into the pericardial fluid. Agitated saline may
be used to confirm needle position (easily seen on echocardiography). Up to 150 ml of fluid should be removed
through the needle to ameliorate the worst of the
tamponade (as discussed earlier, removal of only a small
volume of pericardial fluid may have great benefit).
A guidewire is fed through the needle and a pigtail catheter
passed over this and secured in place. In the Mayo Clinic series, this procedure had a success rate of 97%, with minor and major complication rates of 3.5% and 1.2%, respectively.
Complications include perforation of myocardium or coronary vessels, arrhythmias, pneumothorax, air embolism,
and abdominal trauma. Pericardiocentesis should be done
in the cardiac catheter laboratory unless the patient is too
unwell to be moved.
Fluoroscopy may be used if echocardiography is
unavailable, with the sub-xiphoid route being commonest,
the needle being directed towards the left shoulder. Once
pericardial fluid is aspirated, a small amount of contrast is
injected to confirm position and the guidewire introduced.
The guidewire position is checked in two planes and the
pigtail catheter passed over it.
In emergency situations (e.g., during a pulseless electrical activity arrest), a sub-xiphoid entry point is used
and the needle directed toward the patient’s shoulder.
Aortic dissection is a major contraindication to
pericardiocentesis and coagulopathy a relative one.
A surgical approach is preferred where there are loculated
effusions, small effusions (<1 cm) or where there is evidence of clot or adhesions. Recurrent tamponade (especially due to neoplasm) is also often best managed
surgically. Pericardectomy is usually performed via a
sub-xiphoid approach under general anesthesia since a
small window is usually sufficient to relieve tamponade
(whereas removal of the entire pericardial sac via an
anterior approach is used in constrictive pericarditis).
Balloon pericardectomy allows drainage from the pericardium into the pleural cavity. The risks of general
anesthesia are increased in tamponade and even partial
drainage by pericardiocentesis prior to induction may
help to decrease risk.
Pericardial fluid samples should be sent to the laboratory for staining and culture (including mycobacteria),
plus differential white cell count, specific gravity, hematocrit and protein content if the diagnosis is not known
beforehand. Adenosine deaminase levels should also be
requested if tuberculous effusion is a possibility.
After Care and Prognosis
A pericardial drain usually stays in place until drainage is
less than 25 ml/day. During this time, the patient should
remain fully monitored for recurrence of tamponade
or possible complications. Mortality and morbidity are
very low since the advent of echocardiography guided
procedures, with recent studies estimating the major complication rate to be between 1.2% and 1.6%. Patients with
pre-existing pulmonary hypertension complicated by
tamponade seem to be at higher risk of death.
Comparison between Cardiac Tamponade
and Constrictive Pericarditis [5]
There are some important similarities and differences
between constrictive pericarditis and cardiac tamponade.
Constrictive pericarditis also results in increased intracardiac pressure and equalization of left and right filling
pressures. The fibrotic, scarred pericardium prevents
changes in intrathoracic pressure being transmitted to
the cardiac chambers (unlike tamponade). These changes
are still transmitted to the pulmonary circulation. On
spontaneous inspiration, the gradient from pulmonary
veins to left atrium is therefore reduced and left-sided
filling is impaired. This allows an increase in right sided
filling during spontaneous inspiration (the same ventricular interdependence which occurs in tamponade occurs
in constrictive pericarditis). The opposite occurs with
spontaneous expiration. Therefore, although the mechanism is different to tamponade, pulsus paradoxus can
occur in constrictive pericarditis, so cannot reliably be
used to distinguish between the two (although it is more
common in tamponade). Changes in transmitral and
transtricuspid flows are similar between the two.
In constrictive pericarditis, atrial filling occurs primarily in diastole due to raised atrial pressures driving flow.
This stops abruptly around mid-diastole when the noncompliant ventricle reaches its volume limit (due to pericardial constriction). This results in the so called “square
root sign” – the right ventricular pressure trace shows an
initial dip followed by an acute rise in early diastole and
then a subsequent plateau during which no more filling
occurs. This is not seen in tamponade.
The JVP in constrictive pericarditis is raised (also in
tamponade) but contains both an x descent and
a prominent, collapsing y descent (unlike tamponade
where the y descent is attenuated or may be missing).
Since inspiratory pressures are not transmitted to the
RA, the usual increase in right heart return does not
occur and so systemic venous pressure increases (or at
least does not drop) during spontaneous inspiration
(Kussmaul sign). The Kussmaul sign does not occur in
tamponade.
Pericardial effusion leading to tamponade does occur
in patients with pre-existing constrictive pericarditis
(effusive constrictive pericarditis). Echocardiographic
findings may initially be intermediate, but pericardiocentesis unmasks the typical findings associated with constrictive pericarditis.
Cardiac Troponin I
▶ Cardiac Markers for Diagnosing Acute Myocardial Infarction

Cardiac Troponin T
▶ Cardiac Markers for Diagnosing Acute Myocardial Infarction

Cardiac Ultrasound
▶ Echocardiography
Cardiogenic Pulmonary Edema
▶ Heart Failure, Acute
▶ Ventricular Dysfunction and Failure
Cardiogenic Shock
▶ Acute Heart Failure: Risk Stratification
▶ Heart Failure, Acute
▶ Ventricular Dysfunction and Failure
References
1. Spodick DH (2003) Acute cardiac tamponade. N Engl J Med 349:684–690
2. Zipes DP, Libby P, Bonow RO, Braunwald E (eds) (2005) Braunwald’s heart disease: a textbook of cardiovascular medicine, 7th edn. Elsevier Saunders, Philadelphia, pp 1762–1769
3. Wann S, Passen E (2008) Echocardiography in pericardial disease. J Am Soc Echocardiogr 21:7–13
4. Restrepo CS, Lemos DF, Lemos JA et al (2007) Imaging findings in cardiac tamponade with emphasis on CT. Radiographics 27:1595–1610
5. Maisch B, Seferovic PM, Ristic AD et al (2004) Guidelines on the diagnosis and management of pericardial diseases. Executive summary. The Task Force on the Diagnosis and Management of Pericardial Diseases of the European Society of Cardiology. Eur Heart J 25:587–610
Cardiomyopathy in Children
JONATHAN R. EGAN, MARINO S. FESTA
The Children’s Hospital at Westmead, Westmead,
Australia
Definition
Cardiomyopathy is a disease of the heart muscle that in children is typically the result of inherited conditions (genetic and metabolic disorders) or of viral myocarditis that has become indolent; it is characterized by cardiogenic failure.
Characteristics
It is useful to categorize pediatric cardiomyopathy into the
following four groups, the majority of which present prior
to 12 months of age [1]:
Hypertrophic Cardiomyopathy (HCOM)
This results from hypertrophic expansion either of the left ventricular septum alone or in combination with free wall hypertrophy. The disease leads to impingement on the left ventricular cavity, impaired ventricular filling, and variable left ventricular outflow obstruction. Most commonly it is an inherited condition, and there is a strong association with Noonan’s syndrome. It is characterized by shortness of breath or syncope on exertion and is an important cause of sudden death in young people.
Dilated Cardiomyopathy (DCMP)
This is typically the result of burnt-out viral myocarditis (40%) or is idiopathic in origin. There is reduced systolic performance and global cardiac dilatation. The prognosis is poor and, apart from supportive therapies, heart transplantation will be required where feasible and appropriate. As a result of the dilatation and the resulting arrhythmic propensity, there is also a risk of mural thrombus formation. Selenium deficiency also results in a dilated cardiomyopathy.
Restrictive Cardiomyopathy (RCMP)
RCMP is not common and results from infiltrative conditions such as hemochromatosis, amyloidosis, and glycogen storage diseases. It is increasingly prevalent with
increasing age and diastolic function is primarily
compromised.
Arrhythmogenic Right Ventricular Dysplasia
and Left Ventricular Noncompaction
Right ventricular dysplasia is an inherited condition in
which the right ventricle is replaced by fibrous-fatty tissue.
Patients typically present with arrhythmias in early adulthood. Left ventricular noncompaction is variably
inherited and associated with systemic disorders. There
are coarse trabeculations of the ventricular apex, which
can affect both systolic and diastolic performance of the
systemic ventricle.
Management
Patients with HCOM have restrictions on physical activity
and typically receive beta blockade prophylaxis. Partial
septal myomectomy can also be considered. Given the generally guarded prognosis of the cardiomyopathies, it is important to identify the (rarely) reversible causes; in particular, deficiencies of thiamine, selenium, or carnitine and endocrinopathies (thyroid and growth hormone abnormalities as well as phaeochromocytoma) need to be considered. Anticoagulation should be considered and maintained in those with dilated cardiomyopathy. Subsequently, supportive medical and mechanical therapies may be indicated, together with heart transplantation, depending on the underlying cause and the overall condition of the patient. Initial supportive measures are similar to those outlined in the cardiac failure section; about half of pediatric heart transplants are performed for cardiomyopathies [6].
References
1. Nugent AW, Daubeney PE, Chondros P, Carlin JB, Cheung M, Wilkinson LC, Davis AM, Kahler SG, Chow CW, Wilkinson JL, Weintraub RG (2003) The epidemiology of childhood cardiomyopathy in Australia. N Engl J Med 348(17):1639–1646
Cardioparacentesis
▶ Pericardiocentesis
Cardiopulmonary Resuscitation
RAGHU R. SEETHALA1, BENJAMIN S. ABELLA2
1 Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Boston, MA, USA
2 Department of Emergency Medicine and Department of Medicine, Pulmonary, Allergy, and Critical Care Division, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
Synonyms
Basic life support (BLS); Chest compression (CC)
Definition
Cardiopulmonary resuscitation (CPR) is a method of
providing artificial (externally-generated) circulation and
ventilation during cardiac arrest to achieve the return of
spontaneous circulation (ROSC). The main actions
performed during CPR are delivery of chest compressions
(CC) and rescue breaths. In a newer form of the therapy
specifically for lay public providers (“hands-only” CPR),
only CCs are delivered without the provision of rescue
breaths.
Role of CPR
Indication
Cardiac arrest (CA), defined as the abrupt cessation
of cardiac output usually due to sudden arrhythmia, is
an exquisitely time-sensitive condition common in
intensive care environments. In-hospital CA has
a survival to hospital discharge rate of approximately
20%. It is critical to perform CPR as promptly as possible in all patients suspected of suffering from CA who do not have advance directives stating a preference to forgo resuscitation efforts. It can be difficult to determine if
a patient is truly in CA. Generally speaking, international
resuscitation guidelines put forth by the International
Liaison Committee on Resuscitation (ILCOR) recommend that rescuers begin CPR in any victim who
becomes suddenly unconscious with absent or markedly
abnormal (gasping) respirations [1]. This definition was
primarily designed for recognition of out-of-hospital CA;
for in-hospital skilled providers, CA can be identified by
the lack of a palpable pulse and/or measurable blood
pressure. It is important to remember that CPR does not
represent definitive therapy, in that correction of the
underlying cause of CA must be addressed to reverse the
condition (see Table 1).
Epidemiology
Each year over one million people in Europe and North
America are afflicted with CA. Survival rates vary greatly
depending on initial cardiac rhythm, location of arrest
(in-hospital versus out-of-hospital, for example), and
a number of other factors. Including all initial rhythms,
survival rates have been documented below 10% for out-of-hospital CA, and approximately 20% for in-hospital
CA. During out-of-hospital CA, bystander CPR has been
shown to more than double the survival rate. Unfortunately, the prevalence of CPR performed by bystanders has
been documented to be as low as 25% [2].
Cardiopulmonary Resuscitation. Table 1 Potential underlying causes of cardiac arrest
Category (specific etiologies):
Cardiac: myocardial ischemia; primary arrhythmia; secondary arrhythmia from myocardial scar; cardiac tamponade
Pulmonary: pulmonary embolism; hypoxic respiratory failure
Metabolic: hyperkalemia; acidemia; hypothermia
Toxins: carbon monoxide; opioid toxicity
Hemorrhage: gastrointestinal bleeding; cerebral hemorrhage
Other: drowning; hanging; penetrating or blunt trauma
Common etiologies of cardiac arrest are represented here; this list is not intended to be comprehensive. Readers are encouraged to consult reference [1] for further information

Application
CPR is designed to maintain coronary and cerebral perfusion as well as oxygenation during CA until definitive
therapy, such as defibrillation or reversal of underlying CA
pathophysiology, can be performed to achieve ROSC. The
commonly used acronym ABC (airway, breathing, circulation) describes the main principles that are highlighted
during CPR. It must be noted that the ensuing description
most aptly applies to the general case of an out-of-hospital
(non-intubated) patient. For patients already in intensivecare environments, the approach to CC remains the same
although clearly airway and breathing approaches
will vary. See Table 2 for summary of CPR delivery
recommendations.
Airway and Breathing
Technique
The initial step in evaluating the unresponsive patient is to
establish an open airway. This can be accomplished by the
head-tilt, chin-lift maneuver. If obstructing material
(food, emesis) is visible in the oropharynx, then the rescuer may perform a finger sweep in an attempt to clear the
airway. Additionally, if airway obstruction with a foreign
body is suspected, then chest thrusts, back blows, or
abdominal thrusts should be performed in order to relieve
the obstruction. Once an open airway has been
established, and the patient is still not breathing, rescue
breaths should be given. Current resuscitation guidelines
recommend administering two rescue breaths for every
30 CCs for adult victims of SCA [1]. Once advanced
medical support is available, endotracheal intubation
should be performed. Endotracheal intubation is considered the definitive airway management in CA patients.
Cardiopulmonary Resuscitation. Table 2 Summary of CPR recommendations
Characteristic (parameters):
Chest compressions (CCs): rate of 100 per min; depth of 5+ cm; complete recoil between CCs; minimize interruptions in CCs
Ventilations: volume of 400–800 cc for most adults; rate of 8–10 per min; employ FiO2 of 1.0
Other: call for assistance from others; obtain defibrillator
Once this has been established, then a ventilation rate of
8–10 per minute is recommended, and should be
performed in parallel to ongoing CCs [1].
Traditionally, performing rescue breaths was considered as important a procedure as providing CCs. The
purpose of ventilation during cardiac arrest is to provide
oxygenation, decrease hypercapnia, and reduce acidosis.
However, recent evidence has demonstrated the detrimental effects of hyperventilation and prolonged pauses in CCs
while providing ventilation. Hyperventilation increases
intrathoracic pressure, thereby causing decreased venous
return to the heart. This ultimately results in decreased
coronary and cerebral perfusion. Additionally, interruptions in CCs to provide ventilation (in the non-intubated
patient) result in decreased blood flow to the heart and
brain.
Circulation
The most important action during CPR is to provide high
quality CCs, which generate blood flow and perfusion to
the brain and heart. Indeed, a more recently explored form
of resuscitation care for the lay public, “hands-only” CPR,
consists solely of CC delivery without rescue breaths, until
the arrival of trained health-care personnel.
Technique
The patient should be supine on a hard surface. If the
patient is on a soft surface (e.g., a mattress), a backboard
should be placed under the patient. The proper technique
for performing CCs in adults is to place the heel of one
hand in the center of the chest over the lower portion of
the sternum, with the other hand on top of the first. The
rescuer should keep the elbows straight and push firmly
and quickly. The sternum should be compressed to a depth
of 4 to 5 cm at a rate of 100 compressions per minute. After
reaching maximum depth, the chest wall should be
allowed to fully recoil before the next CC is delivered [1].
Physiology of CC
Currently, two models of the mechanism of blood flow
during CC exist. The “cardiac pump model” postulates
that the heart is compressed between the sternum and
vertebra generating an artificial systole with forward
blood flow from the ventricles; then during the decompression phase, the heart passively fills [3]. The “thoracic
pump model” argues that direct compression of the heart
is not responsible for the forward blood flow. This model
suggests that CCs cause an increase in intrathoracic pressure, which creates a pressure differential for blood to
flow to the lower-pressure extrathoracic arteries. During decompression, the intrathoracic pressure falls, resulting in passive refilling of the heart [3]. It is likely that this latter model more accurately represents the action of CPR.
CPR Quality
CA outcomes are dependent on the quality of CPR. The
key components of CPR quality are CC rate, CC depth,
chest-wall recoil, ventilation rate, and CC pauses. Higher
CC rates have been associated with higher rates of ROSC.
Increased depth of CC has been associated with greater
defibrillation success for ventricular tachycardia/ventricular fibrillation. In addition, decreased interruptions in CC
have been linked to improved survival. Incomplete chestwall recoil increases intrathoracic pressure, thereby
decreasing the preload of the heart and decreasing coronary and cerebral blood flow.
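As a rough illustration of how these CPR-quality parameters can be checked against guideline targets, the sketch below compares measured values with the figures listed in Table 2. The function and its thresholds are illustrative assumptions for this entry, not part of any monitoring device or guideline text.

```python
def check_cpr_quality(cc_rate_per_min, cc_depth_cm, vent_rate_per_min):
    """Flag deviations from the CPR targets summarized in Table 2
    (CC rate of about 100/min, depth of 5 cm or more, ventilation 8-10/min)."""
    issues = []
    if cc_rate_per_min < 100:
        issues.append("chest compression rate below ~100/min")
    if cc_depth_cm < 5:
        issues.append("chest compression depth below 5 cm")
    if not 8 <= vent_rate_per_min <= 10:
        issues.append("ventilation rate outside 8-10/min (risk of hyperventilation)")
    return issues

# Example: 92 compressions/min at 4.2 cm depth with 14 breaths/min -> all three flags raised
print(check_cpr_quality(92, 4.2, 14))
```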
In this method of resuscitation, CPR is performed with
a suction cup compression device that is attached to the
middle of the sternum. The purpose of this suction cup is
to convert the passive decompression phase of CC into an
active decompression phase. This in turn produces a
greater negative intrathoracic pressure between compressions, which enhances venous return to the heart, subsequently increasing blood flow from the heart. Evidence
supporting ACD-CPR has been conflicting. While some
animal and human investigations have demonstrated that
ACD-CPR is capable of producing higher perfusion pressures compared to standard CPR, most have shown no
overall survival benefit [3].
Mechanical CPR Devices
It has been well documented in the literature that CCs are
not performed to a quality consistent with guideline recommendations, partly due to rescuer fatigue. This has led
to the introduction of mechanical devices that are able to
deliver CCs at a consistent rate and depth. Furthermore,
these devices are able to liberate rescuers from the function
of CC delivery so that they can perform other important
resuscitation tasks.
493
C
494
C
Cardiopulmonary Resuscitation
Two types of these tools are the mechanical piston
device and the load-distributing band (LDB) device. The
mechanical piston device compresses the sternum via
a plunger mounted on a backboard. This mechanical
adjunct has been shown to improve perfusion parameters
like mean arterial pressure and end-tidal CO2 in both in
and out-of-hospital settings [4].
The LDB uses a load-distributing compression band
that is placed circumferentially around the chest and
attached to a small backboard. It compresses the entire
anterior chest wall resulting in increased intrathoracic
pressure at a specified rate. Use of this device has been shown to improve mean aortic pressure as well as coronary perfusion pressure. In 2006, two studies comparing an LDB device with standard CPR were published and yielded
conflicting results. One study showed an improvement
in survival to discharge with the LDB compared to standard CPR. The other study showed no improvement in
survival and actually reported a significant decrease in
patients with good neurological outcome. One common
criticism of all mechanical CPR devices is that using these
devices may lead to a clinically significant delay in the
initiation of CPR. Currently, evidence supporting the use
of mechanical CPR devices in lieu of standard CPR
remains inconclusive but suggestive of benefit.
Impedance Threshold Device (ITD)
The ITD is a valve that attaches between the endotracheal
tube and resuscitation ventilation bag or mechanical
ventilator. It limits the flow of air into the thoracic cavity
during the decompression phase of CC. In doing
so, intrathoracic pressure is reduced allowing for
improved venous return to the heart. Studies have
suggested that the use of an ITD improves early outcome
in patients with out-of-hospital SCA. As of yet, no study
has shown an improvement in the victim’s long-term
outcome [4].
Monitoring and Feedback Devices
In an effort to improve the quality of CPR, defibrillators
have been developed with CPR-sensing capabilities and
the ability to provide automated feedback. In this fashion,
CPR parameters such as CC rate, depth and ventilation
performance can be “coached” via an automated system.
Such devices still require provider action to modify errors
in CPR delivery. Recent investigations in both the in-hospital and out-of-hospital settings have suggested that
the use of CPR-sensing defibrillators can improve both
CPR delivery and initial survival rates, although these
devices have not been tested in randomized controlled
trials at this time. CPR-sensing and recording defibrillators may also serve an important educational role,
allowing for detailed debriefing after CA events where
rescuers can be shown their individual CPR performance
characteristics.
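As a concrete illustration of the coaching behavior described above, the sketch below applies the quality targets quoted earlier in this entry (a compression rate of about 100 per minute, a depth of 4–5 cm, and a ventilation rate of 8–10 per minute) to one monitoring window. The function name, thresholds, and messages are illustrative assumptions, not the logic of any particular commercial device.

```python
# Minimal sketch of automated CPR coaching, assuming the targets quoted in
# this entry (rate ~100/min, depth 4-5 cm, ventilations 8-10/min); names,
# thresholds, and messages are illustrative, not any device's firmware.

def cpr_feedback(cc_rate_per_min: float, cc_depth_cm: float,
                 vent_rate_per_min: float) -> list:
    """Return coaching prompts for one monitoring window."""
    prompts = []
    if cc_rate_per_min < 100:
        prompts.append("Push faster (target about 100 compressions/min).")
    if cc_depth_cm < 4.0:
        prompts.append("Push harder (target 4-5 cm, then allow full recoil).")
    if vent_rate_per_min > 10:
        prompts.append("Slow ventilation (target 8-10 breaths/min).")
    return prompts or ["Good CPR - continue."]

if __name__ == "__main__":
    # Example window: adequate rate, slightly shallow compressions, hyperventilation.
    for message in cpr_feedback(cc_rate_per_min=105, cc_depth_cm=3.5,
                                vent_rate_per_min=14):
        print(message)
```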
Cardiocerebral Resuscitation (CCR)
Recent evidence has shown that interruptions in CCs during CPR result in adverse hemodynamic consequences and are associated with poor outcomes. These observations have
led investigators to study an alternative strategy to resuscitation, known as cardiocerebral resuscitation or CCR. This
resuscitation approach involves providing a greater number
of uninterrupted CCs to optimize cardiac and cerebral
perfusion. One such protocol was instituted by investigators in Arizona with promising results. This protocol
entailed initially providing 200 uninterrupted CCs before
defibrillating a shockable cardiac rhythm, followed by
200 uninterrupted CCs post-defibrillation. Interruptions in CCs were further minimized by delaying endotracheal intubation and positive-pressure ventilation, initially providing passive oxygen insufflation via an oropharyngeal airway and non-rebreather
face mask. This study demonstrated a significant improvement in survival to discharge for OHCA from 1.8% before
the CCR protocol to 5.4% after the protocol [5].
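To put these figures in perspective, the short calculation below re-expresses the reported change in survival as absolute and relative differences and as an approximate number needed to treat; this is an illustrative back-of-the-envelope sketch added here, not an analysis reported in the cited study.

```python
# Back-of-the-envelope sketch using the survival figures quoted above; the
# NNT framing is an added illustration, not a result reported in [5].

before, after = 0.018, 0.054           # survival to discharge, pre- and post-protocol
absolute_gain = after - before         # 0.036, i.e., 3.6 percentage points
relative_gain = after / before         # 3.0-fold
nnt = 1 / absolute_gain                # roughly 28 patients per additional survivor

print(f"Absolute gain: {absolute_gain:.1%}")
print(f"Relative gain: {relative_gain:.1f}-fold")
print(f"Approximate NNT: {nnt:.0f}")
```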
Continuous Chest Compression (CCC)-CPR or
“Hands-Only” CPR
Recently, there has been a parallel trend in bystander CPR
questioning the necessity of ventilations early during
CA-resuscitation care. Some resuscitation experts have
even called for the abandonment of ventilations altogether
in bystander CPR for out-of-hospital CA victims. One of
the major arguments for CCC-CPR is that bystanders are
more likely to perform CCC-CPR than standard CPR.
Also, in CA from sudden arrhythmia early ventilations
are unnecessary as blood is likely to be adequately oxygenated. In fact, ventilations require pauses in CCs which
decrease coronary and cerebral perfusion. CCC-CPR is
also easier to learn and teach. Several animal investigations
have demonstrated improved hemodynamics and outcome comparing CCC-CPR to standard CPR. Several
non-randomized clinical studies have shown that there is
no difference in outcome when comparing CPR with
rescue breathing to CCC-CPR. In fact, one study showed
that CCC-CPR led to better neurological outcome in
certain population subsets. The American Heart Association issued an advisory statement in 2008 that encouraged CCC-CPR in witnessed CA when bystanders are untrained or unwilling to perform rescue breathing [2].
Complications
Complications from CPR can result from providing ventilation or CCs. Victims of CA may suffer tracheal or other
airway injuries during intubation attempts. Inadvertent
esophageal intubation may result in increased intragastric
pressures leading to vomiting and aspiration. Rib and sternal fractures are recognized complications of receiving CCs. Their exact incidence is not known, but recent work has suggested that both are uncommon and, when they do occur, of small clinical consequence.
References
1. 2005 International consensus on cardiopulmonary resuscitation (CPR) and emergency cardiovascular care (ECC) science with treatment recommendations, part 2: adult basic life support. Circulation 112(suppl):III-5–III-16
2. Sayre MR, Berg RA, Cave DM, Page RL, Potts J, White RD (2008) Hands-only (compression-only) cardiopulmonary resuscitation: a call to action for bystander response to adults who experience out-of-hospital sudden cardiac arrest: a science advisory for the public from the American Heart Association Emergency Cardiovascular Care Committee. Circulation 117:2162–2167
3. Ornato JP, Peberdy MA (eds) (2005) Cardiopulmonary resuscitation. Humana Press Inc., Totowa, NJ
4. 2005 International consensus on cardiopulmonary resuscitation (CPR) and emergency cardiovascular care (ECC) science with treatment recommendations, part 6: CPR techniques and devices. Circulation 112(suppl):IV-47–IV-50
5. Bobrow BJ, Clark LL, Ewy GA et al (2008) Minimally interrupted cardiac resuscitation by emergency medical services for out-of-hospital cardiac arrest. J Am Med Assoc 299:1158–1165
Cardiopulmonary Resuscitation (CPR)
A constellation of maneuvers provided by a bystander (or bystanders) to a person who has lost spontaneous respiration and circulation. As described by the American Heart Association (AHA), it is designed to temporarily sustain life while awaiting definitive medical care and typically involves rhythmic external compression of the chest and rescue breathing (“mouth-to-mouth”).

CardioQ
▶ Esophageal Doppler
Cardiorenal Syndrome
CLAUDIO RONCO1, MIKKO HAAPIO2, NAGESH S. ANAVEKAR3, ANDREW A. HOUSE4, RINALDO BELLOMO5
1 Department of Nephrology, St. Bortolo Hospital, Vicenza, Italy
2 Division of Nephrology, HUCH Meilahti Hospital, Helsinki, Finland
3 Department of Cardiology, The Northern Hospital, Melbourne, Australia
4 Division of Nephrology, London Health Sciences Centre, London, Canada
5 Department of Intensive Care, Austin Hospital, Melbourne, Australia
Synonyms
Heart–kidney interaction
Definition
Although generally defined as a condition characterized by
the initiation and/or progression of renal insufficiency
secondary to heart failure, the term cardiorenal syndrome
is also often used to describe the negative effects of
reduced renal function on the heart and circulation
(more appropriately named ▶ reno-cardiac syndrome)
(Fig. 1, Tables 1 and 2) [1–4].
A major problem with the previous terminology is that
it does not allow clinicians or investigators to identify and
fully characterize the relevant pathophysiological interactions. This is important because such interactions differ
according to the type of combined heart/kidney disorder.
For example, while a diseased heart has numerous negative effects on kidney function, renal insufficiency can also
significantly impair cardiac function. Thus, a large number of direct and indirect effects of each organ dysfunction
can initiate and perpetuate the combined disorder of the
two organs through a complex combination of neurohumoral feedback mechanisms. For this reason a subdivision
into different subtypes seems to provide a more concise
and logically correct approach to this condition. We will
use such a subdivision to discuss several issues of importance in relation to this syndrome.
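As a purely illustrative way of making this subdivision concrete, the sketch below maps the two attributes on which the classification rests (which organ fails first, and whether the dysfunction is acute or chronic) to the five types summarized in Table 2. The function and its labels are assumptions made for this example, not part of the consensus definition itself.

```python
# Illustrative sketch of the five-type subdivision summarized in Table 2;
# the function name and string labels are assumptions made for this example.

def classify_crs(primary_failing_organ: str, course: str) -> str:
    """primary_failing_organ: 'heart', 'kidney', or 'systemic'; course: 'acute' or 'chronic'."""
    if primary_failing_organ == "systemic":
        return "Type V (secondary cardiorenal syndrome)"
    if primary_failing_organ == "heart":
        return ("Type I (acute cardiorenal syndrome)" if course == "acute"
                else "Type II (chronic cardiorenal syndrome)")
    if primary_failing_organ == "kidney":
        return ("Type III (acute reno-cardiac syndrome)" if course == "acute"
                else "Type IV (chronic reno-cardiac syndrome)")
    raise ValueError("organ must be 'heart', 'kidney', or 'systemic'")

# Example: acute decompensated heart failure leading to acute kidney injury.
print(classify_crs("heart", "acute"))   # Type I (acute cardiorenal syndrome)
```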
Evaluation
Cardiorenal Syndrome. Figure 1 The bidirectional nature of the cardiorenal syndrome and the acute or chronic temporal characteristics of the syndrome

Cardiorenal Syndrome. Table 2 Proposed definition of cardiorenal syndromes
Cardiorenal syndrome (CRS), general definition: A pathophysiologic disorder of the heart and kidneys whereby acute or chronic dysfunction in one organ may induce acute or chronic dysfunction in the other organ.
CRS type I (acute cardiorenal syndrome): Abrupt worsening of cardiac function (e.g., acute cardiogenic shock or decompensated congestive heart failure) leading to acute kidney injury.
CRS type II (chronic cardiorenal syndrome): Chronic abnormalities in cardiac function (e.g., chronic congestive heart failure) causing progressive and permanent chronic kidney disease.
CRS type III (acute reno-cardiac syndrome): Abrupt worsening of renal function (e.g., acute kidney ischemia or glomerulonephritis) causing acute cardiac disorder (e.g., heart failure, arrhythmia, ischemia).
CRS type IV (chronic reno-cardiac syndrome): Chronic kidney disease (e.g., chronic glomerular disease) contributing to decreased cardiac function, cardiac hypertrophy, and/or increased risk of adverse cardiovascular events.
CRS type V (secondary cardiorenal syndrome): Systemic condition (e.g., diabetes mellitus, sepsis) causing both cardiac and renal dysfunction.

Cardiorenal Syndrome. Table 1 Heart and kidney interactions
● CKD secondary to HF
● AKI secondary to contrast-induced nephropathy (CIN)
● AKI secondary to cardiopulmonary bypass (CPB)
● AKI secondary to heart valve replacement
● AKI secondary to HF
● Cardiovascular mortality increased by end-stage kidney disease (ESKD)
● Cardiovascular risk increased by kidney dysfunction
● Chronic HF progression due to kidney dysfunction
● Uremia-related HF
● Volume-related HF
● HF due to acute kidney dysfunction
● Volume/uremia-induced HF
● Renal ischemia-induced HF
● Sepsis/cytokine-induced HF

Cardiorenal Syndrome Type I (Acute Cardiorenal Syndrome)
Type I CRS or acute cardiorenal syndrome (ACRS) is characterized by a rapid worsening of cardiac function,
which leads to acute kidney injury (Fig. 2). Acute heart
failure may then be divided into four main subtypes
(hypertensive pulmonary edema with preserved LV systolic function, acute decompensated chronic heart failure,
cardiogenic shock, and predominant right ventricular failure). Type I cardiorenal syndrome (CRS) is common.
More than one million patients in the USA alone are
admitted to hospital every year with either de novo acute
heart failure (AHF) or with an acute decompensation of
chronic heart failure (ADCHF) [2]. Among patients with
ADCHF or de novo acute heart failure (AHF), premorbid
chronic renal dysfunction is common and predisposes
to acute kidney injury (AKI). The mechanisms by which
the onset of AHF or ADCHF leads to AKI are multiple and complex. They are broadly described in a previous publication [1].

Cardiorenal Syndrome. Figure 2 Diagram illustrating and summarizing the major pathophysiological interactions between heart and kidney in type I cardiorenal syndrome

The clinical importance of each of
these mechanisms is likely to vary from patient to patient
(e.g., acute cardiogenic shock vs. hypertensive pulmonary
edema) and situation to situation (AHF secondary to
perforation of a mitral valve leaflet from acute bacterial
endocarditis vs. worsening right heart failure secondary to
noncompliance with diuretic therapy). In AHF, AKI seems
to be more severe in patients with impaired left ventricular
ejection fraction (LVEF) compared to those with
preserved LVEF and increasingly worse when LVEF is
further impaired. AKI reaches an incidence of >70% in patients with cardiogenic shock. Furthermore, impaired
renal function is consistently found as an independent
risk factor for 1-year mortality in AHF patients with
ST-elevation myocardial infarction. A plausible reason
for this independent effect might be that an acute decline
in renal function does not simply act as a marker of
illness severity but also carries an associated acceleration
in cardiovascular pathobiology leading to a higher rate
of cardiovascular (CV) events, both acutely and chronically, possibly through the activation of inflammatory
pathways.
Cardiorenal Syndrome Type II (Chronic
Cardiorenal Syndrome)
Type II CRS or chronic cardiorenal syndrome (CCRS) is
characterized by chronic abnormalities in cardiac function
(e.g., chronic congestive heart failure) causing progressive
chronic kidney insufficiency (Fig. 3).
Worsening renal function (WRF) in the context of
heart failure (HF) is associated with significantly increased
adverse outcomes and prolonged hospitalizations. The
prevalence of renal dysfunction in chronic heart failure
(CHF) has been reported to be approximately 25%. Even
limited decreases in estimated GFR of >9 ml/min appear to confer a significantly increased mortality risk. Some
researchers have considered WRF a marker of severity of
generalized vascular disease. Independent predictors of
WRF include: old age, hypertension, diabetes mellitus,
and acute coronary syndromes.
The mechanisms underlying WRF likely differ based
on acute versus chronic HF. Chronic HF is characterized
by a relatively stable long-term situation of probably
reduced renal perfusion, often predisposed by both
micro- and macrovascular disease in the context of the
same vascular risk factors associated with cardiovascular
disease. However, although a greater proportion of
patients with low estimated GFR have a worse NYHA
class, no evidence of association between LVEF and estimated GFR can be consistently demonstrated. Thus,
patients with chronic heart failure and preserved LVEF
appear to have estimated GFR similar to that of patients with
impaired LVEF (<45%). Neurohormonal abnormalities
are present, with excessive production of vasoconstrictive mediators (epinephrine, angiotensin, endothelin) and altered sensitivity and/or release of endogenous vasodilatory factors (natriuretic peptides, nitric oxide).

Cardiorenal Syndrome. Figure 3 Diagram illustrating and summarizing the major pathophysiological interactions between heart and kidney in type II cardiorenal syndrome
Cardiorenal Syndrome Type III (Acute
Reno-Cardiac Syndrome)
Type III CRS or acute reno-cardiac syndrome (ARCS) is
characterized by an abrupt and primary worsening of
renal function (e.g., acute kidney injury, ischemia, or
glomerulonephritis), which then causes or contributes to
acute cardiac dysfunction (e.g., heart failure, arrhythmia,
ischemia). The pathophysiological aspects are summarized in Fig. 4.
The development of AKI as a primary event leading to
cardiac dysfunction (Type III CRS) is considered less
common than type I CRS. This is partly because, unlike
Type I CRS, it has not been systematically considered or
studied. However, AKI is a condition with a growing incidence in hospital and ICU patients. Using the recent
RIFLE consensus definition and its Injury and Failure
categories, AKI has been identified in close to 9% of
hospital patients and, in a large ICU database, AKI was
observed in more than 35% of critically ill patients. AKI
can affect the heart through several pathways whose hierarchy is not yet established. Fluid overload can contribute to
the development of pulmonary edema. Hyperkalemia can
contribute to arrhythmias and may cause cardiac arrest.
Untreated uremia affects myocardial contractility through
the accumulation of myocardial depressant factors and can
cause pericarditis. Partially corrected or uncorrected
acidemia produces pulmonary vasoconstriction, which, in
some patients, can significantly contribute to right-sided
heart failure. Acidemia appears to have a negative inotropic
effect and may, together with electrolyte imbalances, contribute to an increased risk of arrhythmias. Finally, as
discussed above, renal ischemia itself may precipitate activation of inflammation and apoptosis at cardiac level.
Cardiorenal Syndrome Type IV (Chronic
Reno-Cardiac Syndrome)
Type IV CRS or chronic reno-cardiac syndrome (CRCS) is
characterized by primary chronic kidney disease (CKD)
(e.g., diabetes or chronic glomerular disease) contributing
to decreased cardiac function, ventricular hypertrophy,
diastolic dysfunction, and/or increased risk of adverse
cardiovascular events (Fig. 5).

Cardiorenal Syndrome. Figure 4 Diagram illustrating and summarizing major pathophysiological interactions between heart and kidney in type III cardiorenal syndrome

Cardiorenal Syndrome. Figure 5 Diagram illustrating and summarizing the major pathophysiological interactions between heart and kidney in type IV cardiorenal syndrome

The National Kidney
Foundation divides CKD into five stages based on
a combination of severity of kidney damage and GFR.
Individuals with CKD, particularly those receiving renal
replacement therapies are at extremely high cardiovascular
risk. Greater than 50% of deaths in CKD stage V cohorts
are attributed to CV disease, namely, coronary artery
disease (CAD) and its associated complications. The
2-year mortality rate following myocardial infarction
(MI) in patients with CKD stage V is high and estimated
to be 50%. In comparison, the 10-year mortality rate post
MI for the general population is 25%. Type IV cardiorenal
syndrome is becoming a major public health problem.
A large population of individuals entering the transition
phase towards end stage kidney disease (ESKD) is emerging. National Kidney Foundation guidelines define these
individuals as having CKD. CKD, which also encompasses
ESKD, is defined as persistent kidney damage (confirmed
by renal biopsy or markers of kidney damage) and/or
glomerular filtration rate (GFR) <60 ml/min/1.73 m2 over 3 months. This can translate into a serum creatinine level of approximately 1.3 mg/dl, a value that would ordinarily be dismissed as not representative of significant renal dysfunction. Using these criteria, at least 11 million individuals are currently estimated to have CKD, and this number is rising.
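The translation from the GFR threshold to a serum creatinine value can be illustrated with a GFR estimating equation. The sketch below uses the four-variable MDRD study equation; the choice of equation is an assumption made for this illustration (the text does not specify one), and it simply shows that, for a 60-year-old white man, a creatinine of about 1.3 mg/dl lies near the 60 ml/min/1.73 m2 threshold that defines CKD.

```python
# Worked example with the four-variable MDRD study equation; using this
# particular estimating equation is an assumption made for illustration.

def egfr_mdrd(creatinine_mg_dl: float, age_years: float,
              female: bool = False, black: bool = False) -> float:
    """Estimated GFR in ml/min/1.73 m^2 (four-variable MDRD study equation)."""
    egfr = 186.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.210
    return egfr

# A 60-year-old white man with a serum creatinine of 1.3 mg/dl sits at
# roughly the 60 ml/min/1.73 m^2 threshold used to define CKD.
print(round(egfr_mdrd(1.3, 60), 1))   # approximately 59.9
```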
The association between increased CV risk and renal dysfunction originally stemmed from data arising from ESKD
or stage V CKD cohorts. The leading cause of death
(>40%) in such patients is cardiovascular event-related.
This observation is supported by Australian and New
Zealand Dialysis and Transplant Registry (ANZDATA),
United States Renal Data System (USRDS), and the
Wave 2 Dialysis Morbidity and Mortality Study. Based
on these findings, it is now well established that CKD is
a significant risk factor for cardiovascular disease, such
that individuals with evidence of CKD have from 10- to
20-fold increased risk for cardiac death compared to age- and sex-matched controls without CKD. As discussed,
part of this problem may be related to the fact that such
individuals are also less likely to receive risk modifying
interventions compared to their non-CKD counterparts.
Less severe forms of CKD also appear to be associated with
significant cardiovascular risk. Evidence for increasing CV
morbidity and mortality tracking with mild to moderate
renal dysfunction has mainly stemmed from community-based studies [5]. All these studies documented an inverse
relationship between renal function and adverse cardiovascular outcomes. In particular, the association between
reduced renal function and CV risk appears to consistently
occur at estimated GFR levels below 60 ml/min/1.73 m2,
the principal GFR criterion used to define CKD. Among
high CV risk cohorts, baseline creatinine clearance is
a significant and independent predictor of short-term
outcomes (180 days follow-up), namely, death and myocardial infarction. Similar findings were also noted among
patients presenting with ST-elevation myocardial infarction, an effect independent of the Thrombolysis in Myocardial Infarction (TIMI) risk score. Other large-scale
studies that have examined the relationship between
renal function and cardiovascular outcomes among high
CV risk cohorts with left ventricular dysfunction have
included the Studies of Left Ventricular Dysfunction
(SOLVD), Trandolapril Cardiac Evaluation (TRACE),
Survival and Ventricular Enlargement (SAVE), and
Valsartan in Acute Myocardial Infarction (VALIANT) trials. These studies excluded individuals with baseline
serum creatinine above 2.5 mg/dl. In all these studies,
reduced renal function was associated with significantly
higher mortality and adverse CV event rates.
Renal insufficiency is highly prevalent among patients
with heart failure and is an independent prognostic factor
in both diastolic and systolic ventricular dysfunction. It is
an established negative prognostic indicator in patients
with severe heart failure.
Cardiorenal Syndrome Type V (Secondary
Cardiorenal Syndrome)
Type V CRS or secondary cardiorenal syndrome (SCRS) is
characterized by the presence of combined cardiac and
renal dysfunction due to systemic disorders (Fig. 6).
There is limited systematic information on type V CRS,
where both kidneys and heart are affected by other systemic processes. Although there is an appreciation that, as
more organs fail, mortality increases in critical illness,
there is limited insight into how combined renal and
cardiovascular failure may differently affect such an outcome compared to, for example, combined pulmonary
and renal failure. Nonetheless, it is clear that several
acute and chronic diseases can affect both organs simultaneously and that the disease induced in one can affect
the other and vice versa. Several chronic conditions such
as diabetes and hypertension are discussed as part of
type II and type IV CRS.
In the acute setting, severe sepsis represents the most
common and serious condition, which can affect both
organs. It can induce AKI while leading to profound
myocardial depression. The mechanisms responsible for
such changes are poorly understood but may involve the
effect of tumor necrosis factor on both organs. The onset
of myocardial functional depression and a state of inadequate cardiac output can further decrease renal function as
discussed in type I CRS and the development of AKI can
affect cardiac function as described in type III CRS.

Cardiorenal Syndrome. Figure 6 Diagram illustrating and summarizing the major pathophysiological interactions between heart and kidney in type V cardiorenal syndrome

Renal
ischemia may then induce further myocardial injury in
a vicious cycle, which is injurious to both organs.
Treatment
Cardiorenal Syndrome Type I
The salient clinical issues of type I CRS relate to how the
onset of AKI (de novo or in the setting of chronic renal
impairment) induced by primary cardiac dysfunction
impacts on diagnosis, therapy, and prognosis and how
its presence can modify the general approach to the treatment of AHF or ADCHF. The first important clinical
principle is that the onset of AKI in the setting of AHF
or ADCHF implies inadequate renal perfusion until
proven otherwise. This should prompt clinicians to consider the diagnosis of a low cardiac output state and/or
marked increase in venous pressure leading to kidney
congestion and take the necessary diagnostic steps to
either confirm or exclude them (careful physical examination looking for ancillary signs and laboratory findings of
a low cardiac output state such as absolute or relative
hypotension, cold extremities, poor post compressive capillary refill, confusion, persistent oliguria, distended jugular veins, and elevated or rising lactate). The second
important consequence of the development of type I
CRS is that it may decrease diuretic responsiveness. In
a congestive state (peripheral edema, increased body
weight, pulmonary edema, elevated central venous pressure), decreased response to diuretics can lead to failure to
achieve the desired clinical goals. The physiological phenomena of diuretic braking (diminished natriuretic response to successive diuretic doses) and
post-diuretic sodium retention may also play an enhanced
part in this setting. In addition, concerns of aggravating
AKI by the administration of diuretics at higher doses or
in combination are common among clinicians. Such concerns can also act as an additional, iatrogenic mechanism
equivalent in its effect to that of diuretic resistance (less
sodium removal). Accordingly, diuretics may best be given
in AHF patients with evidence of systemic fluid overload
with the goal of achieving a gradual diuresis. Furosemide
can be titrated according to renal function, systolic blood
pressure, and history of chronic diuretic use. High doses
are not recommended and a continuous diuretic infusion
might be helpful. In parallel, measurement of cardiac
output and venous pressure may also help ensure continued and targeted diuretic therapy. Accurate estimation of
cardiac output can now be easily achieved by means of
arterial pressure monitoring combined with pulse contour
analysis or by Doppler ultrasound. Knowledge of cardiac
output allows physicians to develop a physiologically safer
and more logical approach to the simultaneous treatment
of AHF or ADCHF and AKI. If diuretic-resistant fluid
overload exists despite an optimized cardiac output,
removal of isotonic fluid can be achieved by ultrafiltration
(Fig. 7). This approach can be efficacious and clinically
beneficial.

Cardiorenal Syndrome. Figure 7 Diagram presenting the technical features of ultrafiltration as applicable to patients with acute heart failure and diuretic-resistant fluid overload (transmembrane pressure TMP = Pi − π = (Pb − Pd) − π, i.e., the hydrostatic pressure gradient minus the oncotic pressure π)

The presence of AKI with or without concomitant hyperkalemia may also affect patient outcome by
inhibiting the prescription of ACE inhibitors and aldosterone inhibitors (drugs that have been shown in large
randomized controlled trials to increase survival in the
setting of heart failure and myocardial infarction). This is
unfortunate because, provided there is close monitoring
of renal function and potassium levels, the potential benefits of these interventions likely outweigh their risks even
in these patients.
The acute administration of beta-blockers in the setting of type I CRS is generally not advised. Such therapy
should wait until the patient has stabilized physiologically
and concerns about a low cardiac output syndrome have
been resolved. In some patients, stroke volume cannot be
increased and relative or absolute tachycardia sustains the
adequacy of cardiac output. Blockade of such compensatory tachycardia and sympathetic system-dependent inotropic compensation can precipitate cardiogenic shock and
can be lethal. Particular concern applies to beta-blockers
excreted by the kidney such as atenolol or sotalol, especially
if combined with calcium antagonists. These considerations should not inhibit the slow, careful, and titrated
introduction of appropriate treatment with beta-blockers
later on, once patients are hemodynamically stable.
This aspect of treatment is particularly relevant in
patients with the cardiorenal syndrome where evidence
suggests that undertreatment after myocardial infarction
is common. Attention should be paid to preserving renal
function, perhaps as much attention as is paid to preserving myocardial muscle. Worsening renal function during
admission for ST-elevation myocardial infarction is
a powerful and independent predictor of in-hospital and
1-year mortality. In a study involving 1,826 patients who
received percutaneous coronary intervention, even a transient rise in serum creatinine (>25% compared to baseline) was associated with increased hospital stay and
mortality. Similar findings have also been shown among
coronary artery bypass graft cohorts. In this context, creatinine rise is not simply a marker of illness severity but
rather represents a causative factor for cardiovascular
injury acceleration through the activation of hormonal,
immunological, and inflammatory pathways. Given that
the presence of type I CRS defines a population with high
mortality, a prompt, careful, systematic, multidisciplinary
approach involving invasive cardiologists, nephrologists,
critical care physicians, and cardiac surgeons is both
logical and desirable.
Cardiorenal Syndrome Type II
Pharmacotherapies used in the management of HF
have been touted as contributing to WRF. Diuresis-associated hypovolemia, early introduction of renin-angiotensin-aldosterone system blockade, and drug-induced
hypotension have all been suggested as contributing factors. However, their role remains highly speculative. More
recently, there has been increasing interest in the pathogenetic role of relative or absolute erythropoietin (EPO)
deficiency contributing to a more pronounced anemia in
these patients than might be expected for renal failure
alone. EPO receptor activation in the heart may be
protective from apoptosis, fibrosis, and inflammation.
In keeping with such experimental data, preliminary
clinical studies show that EPO administration in patients
with chronic heart failure, chronic renal insufficiency, and
anemia leads to improved cardiac function, reduction in
left ventricular size, and lowering of B-type natriuretic
peptide. Patients with type II CRS are more likely to receive
loop diuretics and vasodilators and also to receive higher
doses of such drugs compared to those with stable renal
function. Treatment with these drugs may participate in
the development of renal injury. However, such therapies
may simply identify patients with severe hemodynamic
compromise and thus a predisposition to renal dysfunction rather than being responsible for worsening renal
dysfunction. Regardless of the cause, reductions in renal
function in the context of heart failure are associated with
increased risk for adverse outcomes.
Cardiorenal Syndrome Type III
The development of AKI, especially in the setting of
chronic renal failure can affect the use of medications
that normally would maintain clinical stability in patients
with chronic heart failure. For example, an increase in
serum creatinine from 1.5 mg/dl (130 μmol/l) to 2 mg/dl (177 μmol/l), with diuretic therapy and ACE inhibitors,
may provoke some clinicians to decrease or even stop
diuretic prescription; they may also decrease or even temporarily stop ACE inhibitors. In some, maybe many cases,
this may not help the patient. An acute decompensation of
CHF may occur because of such changes in medications.
When this happens the patient may be unnecessarily
exposed to an increased risk of acute pulmonary edema
or other serious complications of undertreatment.
Finally, if AKI is severe and renal replacement therapy
is necessary, cardiovascular instability generated by rapid
fluid and electrolyte shifts secondary to conventional dialysis can induce hypotension, arrhythmias, and myocardial
ischemia. Continuous techniques of renal replacement,
which minimize such cardiovascular instability, appear
physiologically safer and more logical in this setting.
Cardiorenal Syndrome Type IV
The logical practical implication of the plethora of data
linking CKD with CV disease is that more attention needs
to be paid to reducing risk factors and optimizing medications in these patients, and that undertreatment due to
concerns about pharmacodynamics in this setting may
have lethal consequences at individual level and huge
potential adverse consequences at public health level.
Nonetheless, it is also equally important to acknowledge
that clinicians looking after these patients are often faced
with competing therapeutic choices and that, with the
exception of MERIT-HF, large randomized controlled trials that have shaped the treatment of chronic heart failure
in the last two decades have consistently excluded patients
with significant renal disease. Such lack of CKD population-specific treatment effect data makes therapeutic
choices particularly challenging. In particular, in patients
with advanced CKD, the initiation or increased dosage of
ACE inhibitors can precipitate clinically significant worsening of renal function or marked hyperkalemia. The
latter may be dangerously exacerbated by the use of aldosterone antagonists. Such patients, if aggressively treated,
become exposed to a significant risk of developing dialysis
dependence or life-threatening hyperkalemic arrhythmias.
If too cautiously treated, they may develop equally life-threatening cardiovascular complications. In these
patients, the judicious use of all options while taking
into account patient preferences, social circumstances,
other comorbidities, and applying a multidisciplinary
approach to care seems to be the best approach.
Cardiorenal Syndrome Type V
Treatment is directed at the prompt identification, eradication, and management of the source of infection while
supporting organ function with invasively guided fluid
resuscitation and inotropic and vasopressor drug support.
In this setting, all the principles discussed for type I and
type III CRS apply. In these septic patients, preliminary
data using more intensive renal replacement technology
suggest that blood purification may have a role in improving myocardial performance while providing optimal
small solute clearance. Despite the development of consensus definitions and many studies, no therapies have yet
emerged to prevent or attenuate AKI in critically ill
patients. On the other hand, clear evidence of the injurious effects of pentastarch fluid resuscitation in septic AKI
has recently emerged. Such therapy should, therefore, be
avoided in septic patients.
After-care
The proportion of individuals with CKD receiving appropriate risk factor modification and/or interventional strategies is lower than in the general population, a concept
termed “therapeutic nihilism.” Many databases and
registries have repeatedly shown that these therapeutic
choices seem to parallel worsening renal function. In
patients with CKD stage V, who are known to be at
extreme risk, less than 50% are on the combination of
aspirin, β-blocker, ACE inhibitors, and statins. In a cohort involving over 140,000 patients, 1,025 with documented ESKD were less likely to receive aspirin, β-blockade, or ACE inhibition post MI. Yet those ESKD patients who did receive the aspirin, β-blocker, and ACE inhibitor combination had similar risk reductions in 30-day mortality
when compared to non-ESKD patients who had received
conventional therapy. This failure to treat is not just limited to ESKD patients. Patients with less severe forms of
CKD are also less likely to receive risk modifying medications following myocardial infarction compared to their
normal renal function counterparts.
Potential reasons for this therapeutic failure include
concerns about worsening existing renal function, and/or
therapy-related toxic effects due to low clearance rates.
Bleeding concerns with the use of platelet inhibitors and
anticoagulants are especially important with reduced renal
function and appear to contribute to the decreased likelihood of patients with severe CKD receiving aspirin and/or
clopidogrel despite the fact that such bleeding is typically
minor and the benefits sustained in these patients. However, several studies have shown that when appropriately
titrated and monitored, cardiovascular medications used
in the general population can be safely administered to
those with renal impairment and with similar benefits.
Newer approaches to the treatment of cardiac failure
such as cardiac resynchronization therapy (CRT) have not
yet been studied in terms of their renal functional effects,
although preserved renal function after CRT may predict
a more favorable outcome. Vasopressin V2-receptor
blockers have been reported to decrease body weight and
edema in patients with chronic heart failure, but their
effects in patients with the cardiorenal syndrome have
not been systematically studied and a recent large randomized controlled trial showed no evidence of a survival
benefit with these agents.
Prognosis
Considering that the presence of any type of CRS defines
a population with high mortality, a multidisciplinary
approach involving cardiologists, nephrologists, critical
care physicians, and cardiac surgeons is recommended.
In both chronic and acute situations, an appreciation of
the interaction between heart and kidney during dysfunction of each or both organs has practical clinical implications. The depth of knowledge and complexity of care
necessary to offer best therapy to these patients demands
a multidisciplinary approach. In addition, by using an
agreed definition of each type of cardiorenal syndrome,
physicians can describe treatments and interventions that are focused and pathophysiologically logical. They
can also conduct and compare epidemiological studies in
different countries and more easily identify aspects of each
syndrome that carry a priority for improvement and
further research. Randomized controlled trials can then be
designed to target interventions aimed at decreasing morbidity and mortality in these increasingly common conditions. Increasing awareness, ability to identify and define,
and physiological understanding will help improve the
outcome of these complex patients.
Acknowledgments
We thank Drs. Alexandre Mebazaa, Alan Cass, and Martin
Gallagher for their useful advice in the development of this
manuscript.
References
1. Ronco C (2008) Cardiorenal and reno-cardiac syndromes: clinical disorders in search of a systematic definition. Int J Artif Organs 31:1–2
2. Liang KV, Williams AW, Greene EL, Redfield MM (2008) Acute decompensated heart failure and the cardio-renal syndrome. Crit Care Med 36(Suppl):S75–S88
3. Ronco C, House AA, Haapio M (2008) Cardio-renal syndrome: refining the definition of a complex symbiosis gone wrong. Intensive Care Med 34(5):957–962
4. Ronco C, Haapio M, House AA, Anavekar N, Bellomo R (2008) Cardiorenal syndrome. J Am Coll Cardiol 52(19):1527–1539
5. Go AS, Chertow GM, Fan D et al (2004) Chronic kidney disease and the risks of death, cardiovascular events, and hospitalization. N Engl J Med 351:1296–1305
Carukia barnesi
▶ Jellyfish Envenomation
Carybdeid Jellyfish
▶ Jellyfish Envenomation
Catabolism
▶ Metabolic Disorders, Other
Catheter and Line/Tubing/
Administration Sets Change
▶ Change
Catheter Port Allocation
▶ Port Designation
Catheter-Associated Bloodstream
Infection
▶ Catheter-Related Bloodstream Infection
Catheter-Associated Urinary Tract
Infection
ANDREW M. MORRIS
Mount Sinai Hospital and University Health Network,
University of Toronto, Toronto, ON, Canada
Synonyms
Foley-catheter infection; Pyelonephritis; Urinary catheter
sepsis; Urosepsis
Definition
Catheter-associated urinary tract infection (CAUTI) is
generally defined in the medical literature as bacteriuria
or funguria (of at least 10³ cfu/mL) in association with a
urinary catheter. The definition has remained problematic, as it ignores a central tenet in the management of
infections: differentiating colonization from infection.
In patients without urinary catheters, pyuria is strongly
associated with urinary tract infection, but some have
contested using such a criterion for CAUTI. A preferred
definition would be the symptoms and signs of urinary
tract infection accompanied by pyuria and greater than
10³ cfu/mL microorganisms in association with a urinary
catheter.
Epidemiology
The epidemiology of CAUTI is poorly understood, owing
to the problematic definition used in the literature, but
C
CAUTI appears to affect approximately 9% of all patients
in the ICU, with a rate of 12.0 per 1,000 catheter-days. Risk
factors include female sex, duration of catheterization,
and duration of ICU stay. Outside of the ICU, failure to
use a closed collection system has also been associated
with CAUTI. Antibiotic use appears protective, but this
may be because of confounding. Gram-negative bacilli
and enterococci are the most common isolates, although
candida species are frequently isolated in patients with
prolonged ICU stay (usually in patients receiving prolonged and/or repeated courses of broad-spectrum
antimicrobials).
Prevention
Avoiding urinary catheters and removing them when
unnecessary are the best means of preventing CAUTI.
Condom catheters for men have been shown to reduce
CAUTI with acceptable tolerability; in-and-out catheterization is also well tolerated. Nevertheless, neither of these
methods has been widely adopted in ICUs to prevent
CAUTI. Use of antimicrobial catheters may reduce bacteriuria but has not been shown to prevent CAUTI or
other meaningful outcomes [1].
Treatment
There are few randomized trials evaluating management
of CAUTI. A small trial of catheter-associated bacteriuria
in women (not in the ICU) demonstrated that asymptomatic bacteriuria frequently progressed to symptomatic
CAUTI and that single-dose antibiotic treatment was equivalent to a 10-day course of therapy. Another recent trial
compared short-course (3 days) antibiotics and catheter
change with standard care (i.e., no change, no antibiotics)
for patients with asymptomatic catheter-associated bacteriuria and found no difference in meaningful outcomes,
including development of pyelonephritis. Similarly, treatment of candiduria with fluconazole in immunocompetent patients temporarily eradicated the candiduria, but
failed to offer any long-term benefit.
Evaluation
As mentioned above, evaluation is problematic. At present, routine urinalyses cannot be advocated. In
catheterized patients, pyuria (greater than 10 white
blood cells/mL) is specific but insensitive for the presence
of bacteriuria. Because it is unclear if treatment of asymptomatic patients with bacteriuria is warranted, routine
cultures are also not warranted. Investigation of fever of
unknown origin should include, however, urinalysis and
urine culture.
Prognosis
When adjusted for confounding factors, CAUTI does not
appear to be associated with increased mortality in critically ill patients.
Economics
The attributable patient cost of CAUTI in the USA
ranges from $862 to $1,007, costing US hospitals $0.39
to $0.45 billion annually [2].
References
1. Lo E, Nicolle L, Classen D, Arias KM, Podgorny K, Anderson DJ et al (2008) Strategies to prevent catheter-associated urinary tract infections in acute care hospitals. Infect Control Hosp Epidemiol 29(Suppl 1):S41–S50
2. Scott RD II (2009) The direct medical costs of healthcare-associated infections in U.S. hospitals and the benefits of prevention. Department of Health and Human Services, Centers for Disease Control and Prevention
Catheter-Related Bloodstream
Infection
ANDREW M. MORRIS
Mount Sinai Hospital and University Health Network,
University of Toronto, Toronto, ON, Canada
Synonyms
Catheter-associated bloodstream infection; Catheter-related infection; Central line infection; Central venous
catheter infection; Line sepsis; Vascular catheter infection
Definition
Catheter-related bloodstream infection (CRBI) is
bacteraemia or fungaemia that originates from an intravascular catheter. For the purpose of this chapter, CRBI will be
limited to catheters that are usually inserted and removed in
intensive care units and will not include tunneled catheters
or other long-term catheters. CRBI most commonly originates from the skin-insertion site, with microorganisms
traveling along the course of the vascular catheter into the
bloodstream. Less often, organisms contaminate the catheter hub and travel intraluminally. Some organisms, primarily coagulase-negative staphylococci, elaborate a protective
multilayered biofilm matrix preventing immune system
effectors and antimicrobials from reaching the organisms.
Although localized infection may occur at the site of insertion (often termed “exit site infection”), such infections are
easy to diagnose, do not generally cause systemic illness,
and are beyond the scope of discussion here. The study of
CRBI has been complicated by the lack of a definition that is
both sensitive and specific. Fever and other clinical criteria
are sensitive but nonspecific, whereas repeatedly positive
blood cultures drawn from the periphery and vascular
catheter with identical organisms in the presence of clinical
signs of infection without other primary foci are specific but
insensitive. For this reason, catheter-associated bloodstream infection is often measured, which identifies
bacteraemia in the presence of a vascular catheter, but
may not be caused by the catheter.
Epidemiology
The epidemiology of CRBI is not well known, although
the reported rate of CRBI in the province of Ontario,
Canada (population 13 million), is 1.4 per 1,000 catheter-days. The National Nosocomial Infection Surveillance
system of the CDC estimates the rate to be 1.8–5.2
per 1,000 catheter-days. There are an estimated 92,011
cases of CRBI annually in the USA [1]. Coagulase-negative
staphylococci are the most common organisms responsible
for CRBI, followed by (in decreasing order) Staphylococcus
aureus, Candida species, and gram-negative bacilli.
Prevention
The Centers for Disease Control recommend five procedures that are likely to have the greatest impact on reducing CRBI with the lowest barriers to implementation:
hand washing, using full-barrier precautions during the
insertion of central venous catheters, cleaning the skin
with chlorhexidine, avoiding the femoral site if possible,
and removing unnecessary catheters. Using these very
same procedures resulted in dramatic reductions across
103 ICUs in Michigan, reducing median rates of CRBI
from 2.7 per 1,000 catheter-days to zero [2].
Hand Washing
The evidence supporting hand-washing in preventing
CRBI is not strong, but it is low-cost, theoretically appealing, and, when bundled with full-barrier precautions, is
proven to be effective in reducing CRBI.
Full-Barrier Precautions During Insertion
Sterile gloves, long-sleeved sterile gown, mask, cap, and
large sterile sheet drape during insertion have been shown
to dramatically reduce CRBI.
Cutaneous Antisepsis
Although povidone–iodine remains the most widely used
skin antiseptic in hospitals, there is strong evidence that
Catheter-Related Bloodstream Infection
chlorhexidine is superior to povidone–iodine. Tincture of
iodine also appears to be superior to povidone–iodine, but
is less well studied.
Site of Insertion
The femoral site is unequivocally inferior to the subclavian
site vis-à-vis infection risk. However, preference between
internal jugular and subclavian veins is less clear, with
colonization being greater for internal jugular venous
catheters compared with subclavian venous catheters,
but there is no evidence showing lower rates of CRBI
with the subclavian site.
Routine Changing of Lines
Although most teaching (including that of the CDC) states
that routine changing of lines is not advised, it is based on
little evidence. One study from 1981 looked at routine
changing of haemodialysis catheters in 90 patients, and
showed no difference between routine changes over a wire at 7 days and routine changes at a new site. Another study from
1990 compared no routine changes with routine changes
over a wire and routine changes at a new puncture site, and
showed no difference.
Treatment
Treatment of CRBI begins with removal of the vascular
catheter when infection is suspected. In many cases, this
proves curative, with fever abating and leukocytosis
resolving without the need for antimicrobials. Clearly,
however, this requires further study. Optimal treatment
of documented CRBI requires (a) removal of the catheter
(where feasible) and (b) antimicrobial therapy [3].
Catheter Removal
Catheter removal for CRBI is always preferred; however,
situations do occur when this is not feasible or desired. In
such situations, an option includes antibiotic lock therapy,
whereby an aliquot of antibiotic is left in the catheter hub
and tubing continuously. This is only likely to be beneficial
for patients whose CRBI is due to an intraluminal infection, and is not supported by high-quality trials. Some
experts recommend retaining the vascular catheter for
CRBI due to coagulase-negative staphylococci, but the
recurrence risk is high.
Antimicrobial Therapy
Empiric Therapy
Treatment, as with all nosocomial infections, should be
based on the likely organism coupled with severity of
illness. For many such infections, patients will be
C
haemodynamically stable, and treatment of the most likely
pathogens (usually staphylococci) will suffice. In centers
with a high prevalence of methicillin-resistant S. aureus,
vancomycin is likely an appropriate empiric therapy.
However, it may be reasonable to consider a methicillin-like penicillin (e.g., cloxacillin) or first-generation cephalosporin in stable patients.
Pathogen-Specific Therapy
Coagulase-negative staphylococci: Removal of the catheter
is often sufficient, but many authorities recommend
5–7 days therapy, unless there is no other medical hardware
in situ, the vascular catheter has been removed, the patient is
haemodynamically stable, and repeated blood cultures are
negative. No approach has been formally evaluated with
randomized controlled trials. S. lugdunensis is a coagulase-negative staphylococcus that should be treated as S. aureus.
S. aureus: Treatment should be based on susceptibilities. The most effective therapy for methicillin-susceptible
S. aureus is a β-lactam. However, in cases of severe allergy
or resistance, vancomycin is a preferred agent. Recently,
concerns have been raised regarding the effectiveness and
safety of vancomycin, especially with the emergence of
strains that are either resistant to or have reduced susceptibility to vancomycin. However, a recent open-label noninferiority trial comparing linezolid with vancomycin for
CRBI showed a trend favoring vancomycin in intention-to-treat analysis. Optimal duration of therapy for S. aureus
CRBI is unclear. Although teaching for many years has
maintained the axiom “treat for 2 weeks if a removable focus
of infection, and it has been removed,” recent studies
have questioned this wisdom with the recognition that
(a) infective endocarditis may complicate up to 13% of
catheter-associated bacteraemia and (b) infective endocarditis and other complications may be seen in approximately 6% of cases of S. aureus CRBI treated with 2 weeks
therapy (compared with 4 weeks). I prefer 4 weeks of therapy unless a trans-esophageal echocardiogram is performed
and is negative (making endocarditis highly unlikely), which
is largely consistent with recent recommendations [3].
Enterococci: The optimal treatment of enterococci
is ampicillin; if unable to use ampicillin because of resistance or allergy, then vancomycin is the preferred
agent. Linezolid or daptomycin are options when ampicillin
or vancomycin cannot be used, although there is limited
experience with these agents. The duration of treatment for
enterococcal bacteraemia is unclear, although 7–14 days is
usually sufficient. The risk of subsequent infective endocarditis is quite low, estimated at around 1%.
Gram-negative bacilli: The optimal treatment of
Gram-negative bacilli (GNB) is dependent on local
507
C
508
C
Catheter-Related Infection
susceptibilities. Empiric choices prior to speciation should
cover the majority of possibilities, and may include combination therapy (especially if the patient is neutropaenic,
severely ill, or known to be colonized with multidrug-resistant organisms). However, there is weak evidence
supporting combination therapy once susceptibility is
known, including therapy for non-lactose-fermenting organisms such as Pseudomonas aeruginosa. The optimal duration of therapy is also unknown, although 7–14 days is
usually sufficient.
Candida species: Candidaemia is a frequent cause of
CRBI in patients who have been receiving prolonged broad-spectrum antibacterial agents, as well as patients receiving
total parenteral nutrition, or who have received solid organ
or stem cell transplantation. Empiric therapy should be
based on local data, but may include amphotericin B,
fluconazole, or an echinocandin. These appear to be equally
efficacious, although azole resistance has been rising in
centers with high azole use. For this reason, many have
recommended echinocandin therapy to be first-line treatment. Candidaemia is generally treated with 2 weeks of
effective therapy, with the first negative blood culture
being considered day 1.
Evaluation
Diagnosis is primarily a microbiological one following
clinical suspicion. Where possible, cultures should come
from both peripheral blood and the catheter lumen prior
to antimicrobial therapy. Catheter tip cultures (using
a 5 cm segment and either a roll-plate method or sonication) are also advised; however, positive tip cultures reflect colonization, not CRBI.
CRBI can be confidently diagnosed when:
(a) Peripheral and catheter-drawn blood cultures are positive with the same isolate, and the catheter-drawn
culture grew more quickly (i.e., with a differential
time-to-positivity, or DTP, of at least 2 h)
(b) A catheter-drawn blood culture and a catheter-tip
culture are positive with the same isolate
(c) Both peripheral and catheter-drawn blood cultures are positive with the same isolate, and the colony count is at least threefold higher in the culture drawn from the venous catheter.
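For illustration only, the three criteria above can be expressed as a simple decision rule. The sketch below is not part of the cited guidelines; the function and field names are hypothetical, and real diagnosis still requires microbiological and clinical judgment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PairedCultures:
    same_isolate: bool                     # identical organism in the paired cultures
    peripheral_positive: bool
    catheter_positive: bool
    dtp_hours: Optional[float] = None      # catheter culture positive this many hours before the peripheral one
    colony_ratio: Optional[float] = None   # catheter-drawn colony count / peripheral colony count
    tip_culture_positive: bool = False     # catheter-tip culture positive with the same isolate

def meets_crbi_criteria(c: PairedCultures) -> bool:
    """Illustrative check of the three CRBI criteria described in the text."""
    if not c.same_isolate:
        return False
    # (a) differential time-to-positivity (DTP) of at least 2 h
    if c.peripheral_positive and c.catheter_positive and c.dtp_hours is not None and c.dtp_hours >= 2:
        return True
    # (b) catheter-drawn blood culture plus catheter-tip culture with the same isolate
    if c.catheter_positive and c.tip_culture_positive:
        return True
    # (c) colony count at least threefold higher in the catheter-drawn culture
    if c.peripheral_positive and c.catheter_positive and c.colony_ratio is not None and c.colony_ratio >= 3:
        return True
    return False
```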
Prognosis
CRBI has an attributable mortality of approximately
12–25%.
Economics
The attributable patient cost of catheter-associated bloodstream infection (not CRBI) in the USA ranges from
$7,288 to $29,156, costing US hospitals $0.67–2.68 billion
annually [1].
References
1. Scott II RD (2009) The direct medical costs of healthcare-associated infections in U.S. hospitals and the benefits of prevention. Department of Health and Human Services, Centers for Disease Control and Prevention
2. Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, Cosgrove S et al (2006) An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 355(26):2725–2732
3. Mermel LA, Allon M, Bouza E, Craven DE, Flynn P, O’Grady NP et al (2009) Clinical practice guidelines for the diagnosis and management of intravascular catheter-related infection: 2009 update by the Infectious Diseases Society of America. Clin Infect Dis 49(1):1–45
Catheter-Related Infection
▶ Catheter-Related Bloodstream Infection
Cauda Equina Syndrome
SCOTT E. BELL1, KATHRYN M. BEAUCHAMP2
1
Department of Neurosurgery, School of Medicine,
University of Colorado Health Sciences Center,
Denver, CO, USA
2
Department of Neurosurgery, Denver Health Medical
Center, University of Colorado School of Medicine,
Denver, CO, USA
Definition
Cauda equina syndrome is a clinical condition arising
from acute, subacute, or chronic dysfunction of nerve
roots that comprise the structure known as the “cauda
equina.” It is considered a spine emergency during the
acute stages of neurologic deterioration from compressive
lesions. Anatomically, the spinal cord ends at approximately the first to second lumbar vertebrae in normal
adults. The dural sac continues as a fluid-filled structure
to approximately the second sacral vertebra. Within this
sac, between levels L2 to S2, are contained the nerve roots
that have emanated from the spinal cord. These nerve
roots are collectively referred to as the “cauda equina,”
as they exist prior to exiting the dural sac and spinal
canal in pairs, through the neural foramina at each level.
A variety of lesions can serve as the etiology for cauda
equina syndrome, including herniated intervertebral
disks, intradural or extradural tumors, traumatic fractures, hematoma, abscess, and non-compressive causes
such as neuropathy or ankylosing spondylitis [1].
Cauda equina syndrome (CES) as a true clinicopathologic entity is extremely rare; however, it is frequently over-diagnosed on initial evaluation of patients with signs and symptoms of spine or nerve dysfunction in the lower extremities. This is likely for two reasons: (1) the highly generalized symptoms found at presentation, and (2) the consequences of permanent functional impairment if the condition is under-diagnosed. Because of its rarity, epidemiologic data are sparse, but historic reports place its prevalence at 1–3 per 100,000 population. In patients with low back pain, the occurrence of CES is 4 per 10,000. The most common etiology is herniated nucleus pulposus (HNP), although CES occurs in only 1–2% of HNP cases requiring surgery [1].
Clinical Presentation
The classic symptomatology of cauda equina syndrome
includes perineal anesthesia, urinary or fecal retention
and/or incontinence, low back and/or radicular pain,
numbness in the lower extremities, weak rectal tone, or weakness of the lower extremities with diminished associated reflexes [1]. While none of these symptoms is specific for CES
individually, their presence in various combinations, frequently accompanied by certain anatomic hallmarks, can
be highly sensitive for cauda equina dysfunction. The
symptom with the greatest sensitivity for CES is urinary
retention, found to be 90% sensitive in multiple series [2].
Without this finding, only 1 in 1,000 cases of suspected CES will prove to be true [1]. Likewise, “saddle anesthesia,” which is absent
sensation in the perineal region, has a sensitivity of 75%
for CES. In the astute patient who presents early, rapid
progression of clinical findings can also be an indication of
CES. However, patients frequently fail to recognize the
severity of their symptoms until they are more advanced.
Shi et al. created a classification system to categorize
severity of CES [2]. Patients fell into preclinical, early,
middle, or late categorization, with no determination as
to temporal progression of symptoms. The preclinical
patient was considered to show only electrophysiologic
changes in pudendal reflexes with imaging signs of compression; the early CES patient was considered to show
slight saddle sensory disturbances and sciatica; the middle
CES patients were considered to show severe saddle sensory
disturbances, some bowel or bladder dysfunction, and
lower extremity weakness; and late CES patients were
considered to show no saddle sensation, severe bladder and bowel dysfunction, and severe sexual dysfunction. It is an
important concept to attempt a categorization of severity
for CES patients, as a differential response to surgery based
on severity is well recognized throughout the literature [2].
The severity at presentation is as important for prognostication as timing of intervention for many patients [3].
This confounder is difficult to account for, and sometimes
the distinction is minimized or ignored in level II and III
analyses focusing on timing of intervention.
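As a rough illustration of how the Shi et al. categories map onto the clinical findings listed above, the following sketch encodes those descriptions; the argument encodings and function name are assumptions made for illustration and are not part of the published classification.

```python
def shi_ces_category(saddle_sensation: str,
                     bowel_bladder: str,
                     leg_weakness: bool,
                     electrophysiologic_changes_only: bool) -> str:
    """Rough mapping onto the preclinical/early/middle/late categories of Shi et al. [2].

    Hypothetical encodings:
      saddle_sensation: "normal", "slight_disturbance", "severe_disturbance", or "absent"
      bowel_bladder:    "normal", "some_dysfunction", or "severe_dysfunction"
    """
    if electrophysiologic_changes_only:
        # only electrophysiologic changes in pudendal reflexes plus imaging signs of compression
        return "preclinical"
    if saddle_sensation == "absent" and bowel_bladder == "severe_dysfunction":
        return "late"
    if saddle_sensation == "severe_disturbance" or bowel_bladder == "some_dysfunction" or leg_weakness:
        return "middle"
    # slight saddle sensory disturbance and sciatica
    return "early"
```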
Recognizing the difference between cauda equina dysfunction and spinal cord/conus medullaris dysfunction is
important for diagnosis of CES. Asymmetry of sensory or
motor disturbances can be an important finding that
distinguishes cauda equina dysfunction from higher
lesions affecting the spinal cord or conus medullaris [2].
For example, saddle anesthesia or lower extremity weakness can be a unilateral process in CES, but is frequently
bilateral and symmetric in conus medullaris lesions.
Another distinguishing finding of the saddle anesthesia
resulting from CES is its lack of sensory dissociation,
which is often found in spinal cord pathology. Pain can
sometimes distinguish a cauda equina lesion from a spinal
cord lesion. While not as sensitive for CES, if present, pain
is frequently the symptom that will bring the patient to
seek medical attention early. Pain from CES will be in the
lumbar region or radicular in nature, radiating to lower
extremities or the perineal region. The pain from CES may
be quite prominent, while spinal cord lesions will usually
cause only local pain (above L1 level), or little to no pain in
the case of conus medullaris lesions [1]. Because of the
anatomic relationship between the autonomic nervous
system (ANS) and the cauda equina, symptoms relating
to the function of the ANS are frequently encountered late
in the course of CES, while they may be encountered
relatively early with spinal cord or conus lesions due to
their second-order neurons’ intramedullary location [2].
These ANS symptoms include bladder dysfunction, impotence, and sphincter disturbances.
Cauda equina syndrome is considered a clinical
diagnosis that is usually, but not always, accompanied by
imaging findings suggesting compressive pathology.
When the above-mentioned symptomatology is linked
with compressive anatomic findings, urgent to emergent
surgical remediation may be warranted. However, caution
in diagnosis should be applied as there are many instances
when imaging studies convey a compressive structural
abnormality in the lumbar spine while the patient experiences few or no symptoms. Without symptomatology,
there is no indication for a diagnosis of CES. This is an
important distinction when deciding if emergent surgery
is needed for treatment.
Evaluation
Treatment is dictated by accompanying findings on full
clinical evaluation. A thorough history and physical examination are paramount as a guide for proper decision-making in diagnosis and treatment. An appropriate
physical exam will consist of the standardized format for
a full neurologic examination, which consists of (1) mental
status and executive function evaluation, (2) cranial nerve
evaluation, (3) sensory exam, (4) motor exam, (5) central
and peripheral reflex exam, (6) coordination evaluation,
(7) gait evaluation.
For the purpose of this discussion, focus will be placed
on examination of peripheral function, i.e., sensory,
motor, reflex, coordination, and gait examination, of the
lower extremities. The elements of a thorough sensory
exam include the dermatomal distribution of light
touch, sharp-dull distinction, pain, and temperature, as
well as non-segmental proprioception. Figure 1 shows the
generally accepted distribution of segmental nerve root
innervation for cutaneous sensation, described by Foerster
in 1933 [4]. With specific nerve root impingement, one
expects to find derangement in those dermatomes served
by its respective lumbosacral segmental level. Frequently
in CES, there is impingement of multiple roots producing
a regional derangement of sensory function, which usually
includes the lower sacral dermatomes, producing the saddle anesthesia in addition to more distal sensory changes.
Motor function, likewise, has been well characterized
with respect to the segmental innervation of the lower
extremity musculature, termed myotomes. It follows
a similar pattern as the innervation of cutaneous sensation. It is most valuable to describe any motor derangements by the function that is impaired or absent.
The myotome served by the L1 nerve roots mediates hip flexion; L2–3, knee extension; L4, hip adduction; L5, foot inversion, eversion, and dorsiflexion; and the sacral nerves (S1–2), plantar flexion and knee flexion.
Strength is described as a gradient from 0 (no movement
or muscle contraction) to 5 (full strength).
Cauda Equina Syndrome. Figure 1 Lower extremity dermatomes (© 1999 Scott Bodell; adapted from aafp.org)
Cauda equina
syndrome may cause weakness along any point of the
strength spectrum with lower motor neuron findings,
which include atrophy from trophic influences, decreased
tone, and reflex arc interruption. The degree of weakness along the strength spectrum reflects the severity of the cauda equina lesion. Coordination and gait disturbances will occur insofar as the patient has weakness of the lower extremities. The degree of weakness
will dictate the success or failure of the measures of multiple muscle coordination, such as gait. Lower sacral nerve
impairment will cause weakness of rectal tone; this is
always an important test to perform when evaluating for
spinal cord or spinal nerve injury.
Deep tendon reflexes are another element of the physical examination that will inform the examiner of the
extent of injury. Reflexes become diminished in CES due
to interruption of the reflex arc at the level of the lower
motor neuron. It is usually the knee jerk and ankle jerk
that are affected. The neurons serve an arc that communicates tendon stretch directly with alpha motor neurons
and inhibitory interneurons. Interruption of this arc will
cause a diminution or absence of the reflex for its respective myotome. This may be noticeable at multiple levels,
reflecting the common multi-neuronal dysfunction within
the cauda equina during the process of CES.
In the acute stages of the disease, imaging is an important modality to help guide the clinician toward surgical
treatment if appropriate pathology is seen. If trauma is
suspected, initial imaging should include lumbar roentgenography or computed tomography (CT) scans, if available. The sensitivity and specificity of lumbar CT scan have
been shown to be 97% and 95%, respectively, compared
with 86% and 58%, respectively, for lumbar roentgenograms (9). A CT scan has better resolution for fractures,
and their relationships with the spinal canal and neural
foramina can be viewed in sagittal, coronal, and axial
planes. While harder to interpret due to their 2-dimensional, monoplanar depiction, roentgenograms are frequently
used as a screening tool to guide decision algorithms for
subsequent diagnostic maneuvers, especially in those
patients in whom CT scan is contraindicated or impractical due to habitus, availability, etc.
For better representation of soft tissue structures
including neural elements, a magnetic resonance image
(MRI) of the lumbar spine may be important in determining the nature and anatomic location of compressive
pathology if surgical considerations are being made.
Figure 2 shows a representation of lumbar stenosis causing
neural compression. No contrast is needed in any imaging
modality in the evaluation of an acute process, as the diagnostic value is not improved. However, if tumor or infection is suspected, either iodinated contrast for CT scans, or
gadolinium for MRI scans, is an important addition for
diagnostic considerations.
Another important factor in consideration of the traumatic etiology of CES is to recognize an unstable lumbar
spine. With or without ongoing neural compression,
treatment considerations will shift to a multimodal
approach. If ongoing compressive pathology exists, surgical plans may include decompression as well as stabilization procedures. On the other hand, if CES is the result of
a transient compression of neural elements that
underwent closed reduction, then conservative treatment
of the CES symptoms, in conjunction with a surgical
stabilization procedure, may be appropriate.
Cauda Equina Syndrome. Figure 2 (a and b) MRI in sagittal (a) and axial (b) planes showing a herniated disc causing cauda equina syndrome (Adapted from Chou et al. Orthopedics 200813)
Treatment
The timing of when to address a surgical lesion in CES is
the most controversial aspect of this syndrome. It is commonly accepted as a surgical emergency. Once recognized,
if appropriate compressive signs are found on imaging,
surgery should be performed within 48 h from the onset of
CES symptoms [3]. Some surgeons argue that evidence
supports a time frame within 24 h from symptom onset.
Shapiro [3] analyzed 39 cases of documented CES and
described that cases operated within 24 h showed better
functional recovery of lower extremity strength than cases
operated within 48 h, which showed better functional
recovery than cases operated after delay (two groups
with mean delays of 3.4 and 9 days). Likewise, for pudendal
symptoms, 24 h proved better than 48 h for urinary, bowel,
and sexual function recovery, and delayed surgery showed
no return of function. However, the differences in post-surgical recovery between surgery performed at <24 h versus <48 h were based on only two patients (n = 2). In patients presenting
after 48 h from onset of symptoms, especially if symptoms
include urinary retention or incontinence, and saddle
sensory changes, functional recovery is poor with or without surgery.
The Shapiro study only grouped patients by timing of
surgery, and did not address analysis by severity of symptoms at presentation. In a meta-analysis, Ahn et al. found
that there was multivariate significance in improvement
among patients presenting with CES – that based on
surgery <48 h and that based on a lower degree of symptom severity at presentation. Those with worse symptom
severity, including chronic low back pain, urinary symptoms, saddle anesthesia, and rectal dysfunction on presentation, tended to show worse prognosis for improvement
postoperatively, even in patients operated within 48 h.
Those patients with less severe symptoms, for instance,
lower extremity weakness and saddle hypoesthesia, had
greater chance of improvement with surgery <48 h from
onset compared with surgery >48 h from onset.
Other factors found to contribute to results from
surgery include acuity of symptoms. Those with more
acute symptom onset tend to have a better chance at
improvement after surgery compared with those showing
a more insidious, chronic onset. Time to recovery of
symptoms has been shown to vary in small level II and
III analyses [5]. Usually the full extent of recovery is found
within 2 years, but gradual recovery of some function has
been observed for up to 5 years.
The type of surgery performed also varies widely
in the literature. It is a consensus that minimally invasive lesionectomy, such as semi-hemilaminotomy and
microdiscectomy, is inadequate to decompress the nerve
roots from the offending lesion once CES has developed.
Among the procedures that produce adequate decompression, there is high variability in approach. Hemilaminectomy,
bilateral foraminal decompression with wide laminectomy,
as well as one study that purported the need for transdural
disk exploration in up to 18% of cases, have all been shown
to be effective in treating CES. Rationale for the latter
procedure is that it reduces the traction on injured nerves
during surgery, thus improving the chances for recovery of
function.
When ankylosing spondylitis (AS) is found to be the
etiology of CES, the pathophysiologic mechanism is not
entirely clear. It is thought to be a progressive ectasia of the
dura. The effectiveness of either conservative or surgical
treatments has been called into question. Some studies advocate that medical management is superior, while others endorse surgical treatment as the most effective. This problem is classically treated conservatively,
but in the last decade, a variety of surgical approaches have
been employed including lumbar decompression and
durotomy, and even cerebrospinal fluid shunting, to
treat the ectatic lumbar dura.
When there is clear lack of compressive pathology
in CES, conservative management focuses on the presumed inflammatory process causing the nerve injury,
similar to CES in ankylosing spondylitis. The treatment
of choice for acute peripheral nerve injuries is high-dose
intravenous steroids, and pain control [1]. Physical therapy early in the process of recovery, whether after conservative or operative treatment, is an important aspect of
convalescence.
Summary
Cauda equina syndrome is a dangerous, but uncommon
entity in spine pathology. If acute onset and progression
are confirmed clinically, surgery should be performed
without delay, within 24–48 h. While compression is
the most common etiology, some inflammatory processes are found to be the cause, warranting conservative
management. The prognosis for functional recovery is
poor when the onset is insidious, or the presentation
severe. But, under the correct circumstances, with acute
onset and early symptoms, prognosis for recovery is
good when treated emergently. Surgical approaches
vary, but it is generally accepted that wide decompression
and removal of the offending lesion are the best treatment for compressive causes of CES. Further studies,
accounting for both severity on presentation and timing
of treatment, are warranted to establish the best management for maximizing the patients’ ability to overcome
this illness.
References
1. Greenberg MS (ed) (2010) Handbook of neurosurgery, 7th edn. Thieme Medical Publications, New York
2. Shi J, Jia L, Yuan W, Shi GD, Ma B, Wang B, JianFeng W (2010) Clinical classification of cauda equina syndrome for proper treatment: a retrospective analysis of 39 patients. Acta Orthopaedica 81(3):391–395
3. Shapiro S (2000) Medical realities of cauda equina syndrome secondary to lumbar disc herniation. Spine 25(3):348–352
4. Foerster O (1933) The dermatomes in man. Brain 56:1
5. Ahn UM, Ahn NU, Buchowski JM, Garrett ES, Seiber AN, Kostuik JP (2000) Cauda equina syndrome secondary to lumbar disc herniation: a meta-analysis of surgical outcomes. Spine 25(12):1515–1522
C-Collar
This is the cervical collar that is used for stabilizing the neck in the neutral position.
CCS
▶ Central Spinal Cord Syndrome
Celiotomy
▶ Laparotomy
Central Cord Injury Syndrome
▶ Central Spinal Cord Syndrome
Central Cord Syndrome
▶ Spinal Cord Injury Syndromes
Central Line Infection
▶ Catheter-Related Bloodstream Infection
Central Spinal Cord Syndrome
SARAH E. PINSKI1, ARIANNE BOYLAN2, JENS-PETER WITT3, TODD F. VANDERHEIDEN4, PHILIP F. STAHEL5
1 Department of Orthopaedic Surgery, Denver, CO, USA
2 Department of Neurosurgery, University of Colorado Denver, School of Medicine, Denver, CO, USA
3 Neuro Spine Program, Department of Neurosurgery, University of Colorado Hospital, Colorado, CO, USA
4 Department of Orthopaedic Surgery, Center for Complex Fractures and Limb Restoration, Denver Health Medical Center, University of Colorado School of Medicine, Denver, CO, USA
5 Department of Orthopaedic Surgery and Department of Neurosurgery, Denver Health Medical Center, University of Colorado School of Medicine, Denver, CO, USA
Synonyms
“Burning Hands” syndrome; CCS; Central cord injury syndrome
Definition
A syndrome associated with ischemia, hemorrhage, or necrosis involving the central portions of the spinal cord due to traumatic injury sustained in the cervical or upper thoracic regions of the spine, characterized by weakness in the arms with relative sparing of the leg strength associated with variable sensory loss.
Central Spinal Cord Syndrome. Figure 1 Illustration of the affected area (red color) of central cord syndrome (CCS) in a schematic axial drawing through the spinal cord (dorsal columns, lateral corticospinal tract, and lateral spinothalamic tract are labeled). Note that the sacral structures are more peripheral in the dorsal columns and the lateral corticospinal tract. These structures are therefore preferentially spared in patients with CCS
Anatomy
The main descending motor pathway is the lateral
corticospinal tract. The tract is arranged with the cervical
(cranial) nerve paths more centrally located and the sacral
(caudal) nerve paths more peripherally located. The major
ascending sensory pathway is the dorsal column (fasciculus gracilis, fasciculus cuneatus). Similar to the lateral
corticospinal tract, the dorsal columns are arranged
such that cervical structures are centrally located and
sacral structures are more peripherally located (Fig. 1)
[1]. Central cord syndrome (CCS) originates from
a vascular compromise in the distribution area of the
anterior spinal artery, which supplies the central portions
of the spinal cord.
Epidemiology
CCS is the most common type of incomplete spinal cord
injury (SCI), comprising 15–25% of all cases [1]. The
“classic” mechanism leading to CCS is represented in
elderly patients with underlying degenerative spinal
changes, who sustain a hyperextension injury of the cervical spine (C-spine), with or without evidence of acute
spinal injury on plain X-rays. Susceptibility to CCS is
represented by a preexisting narrowing of the cervical
spinal canal due to spondylosis, osteophyte formation,
stenosis and ossification of the posterior longitudinal
ligament. The cervical cord may be injured by direct compression from buckling of the ligamentum flavum into
a narrowed, stenotic spinal canal [1]. CCS may also occur
in younger individuals sustaining high-energy trauma that
results in unstable spinal fractures, ligamentous instability,
or fracture-dislocations [2]. Young patients with congenital cervical stenosis are also at particular risk for sustaining a CCS after trauma. This entity presents with a wide
spectrum of neurological symptoms, ranging from preserved sensation with burning dysesthesia and allodynia in
the hands, to motor weakness of the upper extremities, to
a complete quadriparesis with sacral sparing. As a general
rule, the upper extremities are more affected than the lower
extremities. The classic paradigm is represented by a patient
who walks around but can’t move the arms. Return of
motor function follows a characteristic pattern, with the
lower extremities recovering first, bladder function next,
and the proximal upper extremities and hands last [3].
Application
Diagnosis is made based upon clinical and radiographic
examination. Initial radiographic evaluation consists of
anteroposterior, lateral, and open-mouth odontoid
X-rays. CT scans may also be obtained to gain a better
understanding of fractures and dislocations. The MRI
represents the “gold standard” for evaluating injuries to
the soft tissues (discs, ligaments), to quantify the extent of
spinal stenosis and cord compression, and to assess for
presence of epidural hematoma, spinal edema, and spinal
contusions. At the time of initial evaluation, sacral sparing
may be the only neurologic function present to differentiate incomplete from complete SCI. Most cases of CCS
are successfully managed non-operatively, with the likelihood of considerable neurologic recovery [1, 3]. Medical
management of CCS consists of admission to intensive
care for close monitoring of neurologic status and hemodynamics. Maintenance of blood pressure (mean arterial
pressure of >85 mmHg) by volume resuscitation
supplemented by vasopressors, if needed, has been shown
to improve neurologic outcome by presumably maximizing spinal cord perfusion and limiting secondary injury
[1, 3]. Intravenous methylprednisolone is the most
commonly used pharmacologic treatment for SCI. The
established standard dosing is 30 mg/kg bolus followed
by 5.4 mg/kg/h for 24 h if the infusion is started within
3 h of injury, and for 48 h if the infusion is started between
3 and 8 h from the time of injury. This treatment is controversial, however, and recent literature reviews have found no evidence to support the use of corticosteroids as a neuroprotective
agent. Additionally, corticosteroids may adversely affect
patient outcome due to the side effects related to immunosuppression, including pulmonary infections [4].
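Purely as an arithmetical illustration of the dosing protocol described above (and not an endorsement, given the controversy just noted), the following sketch computes the bolus and infusion for a given body weight and time from injury; the function name and return format are illustrative assumptions.

```python
def methylprednisolone_protocol(weight_kg: float, hours_since_injury: float):
    """Illustrative calculation of the bolus and infusion described in the text.

    Returns (bolus_mg, infusion_rate_mg_per_h, infusion_duration_h),
    or None if started more than 8 h after injury (protocol not applicable).
    """
    if hours_since_injury > 8:
        return None
    bolus_mg = 30.0 * weight_kg        # 30 mg/kg bolus
    rate_mg_per_h = 5.4 * weight_kg    # 5.4 mg/kg/h infusion
    duration_h = 24 if hours_since_injury <= 3 else 48
    return bolus_mg, rate_mg_per_h, duration_h

# Example: a 70-kg patient treated 2 h after injury
# -> bolus 2,100 mg, then 378 mg/h for 24 h
```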
Any patient with suspected CCS should be
immobilized in a hard cervical orthosis to prevent further
motion and potential injury. The cervical collar is typically
used for an additional 6 weeks or until neck pain has
resolved and neurologic improvement is noted. Once
the patient is medically stable, early mobilization and
rehabilitation with physical and occupational therapy is
essential. Gait and hand function training are the main
goals. Surgery is indicated in those cases with spinal instability [2, 5], as outlined in the case example shown in
Fig. 2. Surgical intervention for CCS without spinal instability is controversial. However, in the setting of persistent
cord compression, failure of motor recovery, or neurologic
decline, surgical intervention may be warranted. These
symptoms may be due to a herniated disk, an epidural
hematoma, or bony fragments in the spinal canal. In such
cases, the early spinal decompression may prevent the
progression of neurologic impairment and may lead to
improved recovery and function [1, 5].
Central Spinal Cord Syndrome. Figure 2 Case example of a 21-year-old man who sustained a fall while snowboarding. He presented with bilateral upper extremity motor weakness and subjectively “burning” hands. Imaging with plain X-rays, CT scan, and MRI reveals an unstable C5/C6 flexion/distraction injury with a three-column fracture at C5, and a spinal cord contusion on MRI (arrows in panels a and b). This patient was managed surgically by posterior fusion due to the inherent instability of the injury (panel c). No decompression was performed. The patient recovered well within 3 months of surgery, with full resolution of dysesthesia and allodynia and improved upper extremity function. The patient was able to return to work without restrictions as a pizza delivery courier
References
1. Nowak DD, Lee JK, Gelb DE, Poelstra KA, Ludwig SC (2009) Central cord syndrome. J Am Acad Orthop Surg 17:756–765
2. Stahel PF, Flierl MA, Matava B (2011) Traumatic spondylolisthesis. In: Vincent JL, Hall J (eds) Encyclopedia of intensive care medicine. Springer, Heidelberg
3. Aarabi B, Alexander M, Mirvis SE, Shanmuganathan K, Chesler D, Maulucci C, Iguchi M, Aresco C, Blacklock T (2011) Predictors of outcome in acute traumatic central cord syndrome due to spinal stenosis. J Neurosurg Spine 14:122–130
4. Hurlbert RJ, Hamilton MG (2008) Methylprednisolone for acute spinal cord injury: 5-year practice reversal. Can J Neurol Sci 35:41–45
5. Fehlings MG, Rabin D, Sears W, Cadotte DW, Aarabi B (2010) Current practice in the timing of surgical intervention in spinal cord injury. Spine 35(suppl 21):S166–S173
Central Venous Access Catheter
(CVC)
▶ Vascular Access for RRT
Central Venous Catheter Infection
▶ Catheter-Related Bloodstream Infection
Central Venous Pressure
GORAZD VOGA
Medical ICU, General Hospital Celje, Celje, Slovenia
Synonyms
Right atrial pressure
Definition
Central venous pressure (CVP) is the pressure of blood in
the thoracic vena cava at the point where the superior vena
cava meets the inferior vena cava prior to entry into the
right atrium (RA) of the heart.
Characteristics
Normal values of CVP in spontaneously breathing
patients are 5–10 cm of water and can be up to 5 cm of
water higher in patients mechanically ventilated with
positive inspiratory pressure. The normal CVP waveform
consists of three upward deflections (“a”, “c”, “v” waves)
and two downward deflections (“x” and “y” descents)
(Fig. 1). The “a” wave reflects right atrial contraction and
occurs just after the “P” wave on the ECG. It is followed by
the “c” wave, which is the result of the tricuspid valve bulging into the RA during isovolumic ventricular contraction. The third positive deflection is the “v” wave and represents the filling of
the RA during late ventricular systole. The “x” descent
occurs during right ventricular ejection when the tricuspid
valve is pulled away from the atrium and the “y” descent
represents rapid blood flow from the RA into right ventricle (RV) during early diastole.
Central Venous Pressure. Figure 1 Simultaneous ECG and CVP tracing
Clinical Estimation of CVP
Physical assessment of jugular venous distension and pressure in patients sitting up at a 45–60° angle allows CVP
estimation. The level of internal jugular veins filling can be
determined and pulsations can be clearly seen. The vertical
distance from the filling level and sternal angle is measured. Five centimeters (the approximate distance from
sternal angle and RA) is added to the measured distance in
order to get the CVP estimation. The external jugular
veins are observed with the upper part of the body at a 20° angle from the horizontal. In patients with normal
CVP values, the veins are filled to one third of the distance
between the clavicle and the mandible. Unfortunately, considerable disagreement and inaccuracy exist in the clinical assessment of CVP in critically ill patients, and therefore measurement is mandatory.
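The bedside estimate described above amounts to a simple sum, with an optional conversion to mmHg (10 cm H2O is about 7.5 mm Hg, as noted below for transduced measurements). A minimal illustrative sketch, with hypothetical function and key names:

```python
def estimate_cvp_from_jvp(vertical_distance_cm: float) -> dict:
    """Bedside estimate described in the text: vertical distance (cm) from the
    level of jugular venous filling to the sternal angle, plus the ~5 cm from
    the sternal angle to the right atrium."""
    cvp_cmh2o = vertical_distance_cm + 5.0
    cvp_mmhg = cvp_cmh2o * 7.5 / 10.0   # 10 cm H2O is approximately 7.5 mm Hg
    return {"cvp_cmH2O": cvp_cmh2o, "cvp_mmHg": round(cvp_mmhg, 1)}

# Example: filling level 4 cm above the sternal angle -> ~9 cm H2O (~6.8 mm Hg)
```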
Invasive CVP Measurement
The CVP is usually measured by placing a catheter in one
of the veins and then threading it to the superior vena
cava. Internal jugular and subclavian veins are most suitable for cannulation, since the catheter is easily advanced
to the proper position. Antecubital veins can be also used,
if catheter is long enough to reach the superior vena cava.
The CVP is measured using a manometer filled with
intravenous fluid and attached to the central venous catheter. The zero point, approximately at the mid-axillary line in the fourth intercostal space with the patient supine, must be determined. The catheter should not be blocked or kinked, to allow free flow of the fluid. The manometer is filled with fluid and then the three-way stopcock is opened to the catheter. The fluid
level steadily drops to the level of the CVP, which is
measured in centimeters of water. Fluid level should fluctuate slightly with breathing and may slightly pulsate. On
the other hand, prominent pulsations are due to significant tricuspid regurgitation or improper position of the
catheter tip in the right ventricle, which usually requires
reposition of the catheter. In the ICU setting, catheters are
usually connected to transducers, and the CVP waveform is continuously displayed on the monitor. Transducers also have to be zeroed and set at the standard
reference level for hemodynamic measurements, which is
usually 5 cm below the sternal angle. The electronically
measured values are displayed on the monitor and
expressed in mmHg (10 cm H2O is 7.5 mm Hg).
Besides proper levelling and zeroing, changes of
intrathoracic pressure should be considered in the interpretation of CVP values. Increased intrathoracic pressure
is commonly seen in patients with high levels of PEEP
or forced expiration; on the other hand, highly negative
intrathoracic pressure frequently results from vigorous
inspiratory efforts. Both conditions can markedly change
CVP. Therefore, the CVP waveform should always be observed in order to assess proper CVP values. Factors
that affect CVP measurement are summarized in the
table (Table 1).
Noninvasive CVP Estimation
Noninvasive estimation of CVP is possible by transthoracic
echocardiography. In the subcostal view, inferior vena
cava (IVC) is visualized and the diameter during inspiration and expiration is measured. The IVC collapsibility
index (IVCCI) is defined as the difference between the maximum and minimum IVC diameters divided by the maximum diameter, expressed in percent. The estimation of CVP from IVC measurements is shown in Table 2 and is reliable in spontaneously breathing patients. An IVC size of 2 cm and an IVC collapsibility of 40% discriminate a CVP below or above 10 mm Hg with 73% sensitivity and 85% specificity [1].
Central Venous Pressure. Table 1 Factors affecting CVP measurement
Zeroing and reference level of the transducer
Intrathoracic pressure
Blood volume
Central venous blood volume
Venous return
Vascular tone
Right ventricular compliance
Tricuspid regurgitation and stenosis
Central Venous Pressure. Table 2 Estimation of CVP from measurement and respiratory variation of IVC diameter
IVC diameter (cm)   Inspiratory decrease   Estimated CVP (mm Hg)
<1.5                Collapse               <5
1.5–2.5             >50%                   5–10
>2.5                <50%                   10–15
>2.5                No                     >20
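The IVC-based estimate can be expressed as a small calculation following Table 2. In the sketch below, the numerical cutoffs used to represent “collapse” and “no inspiratory decrease” are assumptions chosen for illustration, since the table states them only qualitatively.

```python
def ivc_collapsibility_index(d_max_cm: float, d_min_cm: float) -> float:
    """IVCCI = (maximum - minimum IVC diameter) / maximum diameter, in percent."""
    return 100.0 * (d_max_cm - d_min_cm) / d_max_cm

def estimate_cvp_from_ivc(d_max_cm: float, d_min_cm: float) -> str:
    """Rough CVP category following Table 2 (spontaneously breathing patients)."""
    ivcci = ivc_collapsibility_index(d_max_cm, d_min_cm)
    if d_max_cm < 1.5 and ivcci > 90:        # near-complete inspiratory collapse (assumed cutoff)
        return "<5 mm Hg"
    if 1.5 <= d_max_cm <= 2.5 and ivcci > 50:
        return "5-10 mm Hg"
    if d_max_cm > 2.5 and ivcci < 10:        # essentially no inspiratory decrease (assumed cutoff)
        return ">20 mm Hg"
    if d_max_cm > 2.5 and ivcci < 50:
        return "10-15 mm Hg"
    return "indeterminate by this table"
```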
Clinical Value of CVP
CVP is a static pressure variable, which is frequently used
for preload assessment. CVP measurement is the essential
part of hemodynamic assessment in critically ill patients
and is frequently performed during surgery to estimate
cardiac preload and circulating blood volume. CVP
reflects the amount of blood returning to the heart and the
ability of the heart to pump the blood into the arterial
system. Measurement of the “c” wave value at end expiration reflects end-diastolic pressure in the right ventricle and can be used as an index of RV preload. At the same
time CVP represents the back pressure for venous return
and gives an estimate of the intravascular volume status. It
predominantly depends on circulating blood volume,
venous tone, and right ventricular function. In patients
with normal cardiac function increased venous return is
associated with increased cardiac output, without major
change in CVP. On the other hand, CVP is elevated in
518
C
Central Venous, Arterial, and PA Catheters
patients with poor right ventricular contractility and/or
obstruction to the inflow in right atrium (tamponade,
tension pneumothorax) or to the outflow in pulmonary
circulation (pulmonary embolism).
Unfortunately, CVP poorly reflects left ventricular
preload and is of little value for hemodynamic assessment
in patients with heart failure and cardiogenic shock. A very poor relationship between CVP and blood volume, together with poor prediction of fluid responsiveness by changes in CVP,
was found [2]. CVP values lower than 5 mm Hg have only
47% positive predictive value for fluid responsiveness in
mechanically ventilated septic patients [3]. Nevertheless,
CVP values in septic shock are significantly different in
survivors and nonsurvivors 6–48 h after admission and
CVP values of 8–12 mm Hg are proposed as an early resuscitation goal of the initial hemodynamic stabilization in
patients with septic shock [4].
CVP is only a part of hemodynamic assessment and
must be interpreted together with other hemodynamic
variables and the clinical state of the patient. It is clear that very
high and low CVP values must be considered as abnormal,
but they are not conclusive for any specific hemodynamic
situation. Therefore, such findings require further diagnostic workup. Normal CVP values also can be associated
with different hemodynamic disturbances in critically ill
patients.
Examination of the CVP waveforms gives some additional information regarding tricuspid regurgitation, cardiac tamponade, cardiac restriction, decreased thoracic
compliance, and arrhythmias. Patients with tricuspid
regurgitation have prominent “v” waves; on the other hand, restrictive RV filling is associated with a large and
deep “y” descent. In patients with cardiac tamponade
“x” and “y” descent usually disappear. Large inspiratory
rise in the CVP during mechanical ventilation suggests
decreased thoracic wall compliance. In patients with atrial fibrillation the “a” wave is absent, and in the presence of atrioventricular dissociation tall (cannon) “a” waves can be seen due to atrial contraction against a closed tricuspid
valve [5].
References
1. Brennan JM, Blair JE, Goonewardena S, Ronan A, Shah D, Vasaiwala S, Kirkpatrick JN, Spencer KT (2007) Reappraisal of the use of inferior vena cava for estimating right atrial pressure. J Am Soc Echocardiogr 20:857–861
2. Marik PE, Baram M, Vahid B (2008) Does central venous pressure predict fluid responsiveness? A systematic review of the literature and the tale of seven mares. Chest 134:172–178
3. Osman D, Ridel C, Ray P, Monnet X, Anguel N, Richard C, Teboul JL (2007) Cardiac filling pressures are not appropriate to predict hemodynamic response to volume challenge. Crit Care Med 35:64–68
4. Dellinger RP, Levy MM, Carlet JM et al (2008) Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock. Crit Care Med 36:296–327
5. Magder S (2006) Central venous pressure monitoring. Curr Opin Crit Care 12:219–227
Central Venous, Arterial, and
PA Catheters
JOSÉ RODOLFO ROCCO
Clementino Fraga Filho University Hospital, Federal
University of Rio de Janeiro, Rio de Janeiro, Brazil
Introduction
Vascular cannulation is an essential tool for fluid and drug
administration, accurate monitoring of hemodynamic
parameters, and blood sampling in critically ill patients.
Preparation, indications, contraindications, clinical utility, and techniques for vascular cannulation are reviewed
in this chapter. The sites of catheterization and complications of arterial, central venous, and pulmonary artery
catheterization are also presented.
Central Venous Catheterization
The main indications for central venous catheterization
(CVC) are: (1) monitoring of hemodynamics and tissue perfusion, and (2) therapeutic indications (Table 1).
Central Venous, Arterial, and PA Catheters. Table 1 Indications for central venous catheterization
1 – Monitoring of hemodynamics and tissue perfusion
1.1 – Measurement of central venous pressure
1.2 – Placement of pulmonary artery (Swan-Ganz) catheter and PreSep® catheter
1.3 – Placement of jugular bulb catheter
2 – Therapeutic
2.1 – Fluid therapy in general
2.2 – Infusion of irritant solutions (concentrated potassium chloride, parenteral nutrition, hypertonic saline) and vasopressor amines
2.3 – Hemodialysis and plasmapheresis
2.4 – Placement of transvenous pacemaker
2.5 – When peripheral venous access is impossible
Volume resuscitation by itself is not an indication for CVC. However, in a hypovolemic patient, if peripheral vein cannulation is difficult, it may be necessary to access a central vein.
During cardiac arrest, there is an urgency to access
a vein (peripheral or central) for drug administration. In
this case, the femoral vein is the first option, since cannulation of the femoral vein can be done without interrupting cardiac massage.
Sites of Catheterization
In general, the site of catheterization is selected based on the operator’s experience. However, for some procedures, there are preferential sites (Table 2).
If the patient has a pleural (chest) drain, venous cannulation should be performed on the same side as the thoracic drain.
Preparation
Patient and operator preparation is a crucial component
of the vascular cannulation procedure.
If possible, it is advisable to obtain informed consent
from the patient or surrogate whenever an invasive procedure is to be performed.
Hand washing is mandatory (and often overlooked)
before the insertion of vascular devices. Scrubbing with
antimicrobial cleansing solutions does not reduce the incidence of catheter-related sepsis, so a simple soap-and-water scrub is sufficient.
CVC insertion is a sterile procedure. If contamination occurs,
the procedure must be interrupted and the contaminated
material must be replaced. If a patient is cannulated in an
emergency situation (e.g., during cardiac arrest), the venous
catheter must be replaced as soon as possible.
The insertion site is prepped with povidone-iodine or an alcoholic solution of chlorhexidine, the two agents most commonly used, although chlorhexidine appears to be more effective. After skin preparation, the insertion site should be draped with a sterile field. The sterile field must be large enough to cover the head and the body of the patient (maximum barrier). This practice has been reported to reduce the incidence of catheter-related sepsis sixfold compared with the use of sterile gloves and a small drape (Fig. 1).
To avoid patient discomfort, local anesthesia, analgesia, and/or sedation should be provided. Most vascular cannulations are done percutaneously because of the ease of catheter insertion and the reduced risk of infection. Cannulation under direct vision through a surgical cutdown may be performed in very difficult situations.
Most central venous and arterial catheters are inserted by passing a guidewire through the needle (the
Seldinger technique) (Fig. 2).
In Fig. 3, steps of the right internal jugular vein cannulation are depicted.
Central Venous, Arterial, and PA Catheters. Table 2 Sites of catheterization in diverse clinical conditions
Indication                           First choice   Second choice   Third choice
Venous access in general             SCV            IJV or EJV      FV
Placement of (Swan-Ganz) catheter    RIJV           LSCV            LIJV or RSCV
Coagulopathy                         EJV            IJV             FV
Pulmonary disease or elevated PEEP   RIJV           LIJV            EJV
Total parenteral nutrition           SCV            IJV             –
Hemodialysis/plasmapheresis          IJV            FV              SCV
Cardiac arrest                       SCV            IJV             FV
Transvenous pacemaker                RIJV           SCV             –
Hypovolemic patient                  SCV or FV      IJV             –
Urgent access to airway              FV             SCV             IJV
Monitoring of venous saturation      SCV            IJV             EJV
CVP monitoring                       IJV            EJV             SCV
SCV – subclavian vein; IJV – internal jugular vein; EJV – external jugular vein; FV – femoral vein; RIJV – right internal jugular vein; LSCV – left subclavian vein; LIJV – left internal jugular vein; RSCV – right subclavian vein; PEEP – positive end expiratory pressure; CVP – central venous pressure
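For teaching or decision-support prototypes, the preferences in Table 2 can be encoded as a simple lookup. The sketch below mirrors the table; the dictionary keys and function name are illustrative choices, not part of the source.

```python
# Ordered site preferences from Table 2 (first, second, third choice).
SITE_PREFERENCES = {
    "venous access in general":        ["SCV", "IJV or EJV", "FV"],
    "swan-ganz catheter":              ["RIJV", "LSCV", "LIJV or RSCV"],
    "coagulopathy":                    ["EJV", "IJV", "FV"],
    "pulmonary disease/elevated PEEP": ["RIJV", "LIJV", "EJV"],
    "total parenteral nutrition":      ["SCV", "IJV"],
    "hemodialysis/plasmapheresis":     ["IJV", "FV", "SCV"],
    "cardiac arrest":                  ["SCV", "IJV", "FV"],
    "transvenous pacemaker":           ["RIJV", "SCV"],
    "hypovolemic patient":             ["SCV or FV", "IJV"],
    "urgent access to airway":         ["FV", "SCV", "IJV"],
    "monitoring of venous saturation": ["SCV", "IJV", "EJV"],
    "CVP monitoring":                  ["IJV", "EJV", "SCV"],
}

def preferred_sites(indication: str) -> list:
    """Return the ordered site preferences for an indication (case-insensitive key match)."""
    for key, sites in SITE_PREFERENCES.items():
        if key.lower() == indication.lower():
            return sites
    raise KeyError(f"No preference listed for indication: {indication!r}")
```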
Central Venous, Arterial, and PA Catheters. Figure 1 Vascular cannulation of right internal jugular vein in an intensive care unit
setting. Note the use of sterile field covering the face and body of patient (maximum barrier)
The use of Doppler or ultrasound guidance for vascular access increases the success of cannulation and reduces the risk of complications related to insertion. In the future, wider availability of ultrasound equipment together with trained teams will further improve this technique. Figure 4 demonstrates the collapse sign, which is useful to differentiate arterial from venous vessels.
Catheter Tip Position
After cannulation, catheter placement in the jugular or subclavian veins must be checked with a chest radiograph. Ideally, the tip of the catheter needs to be positioned 3–5 cm above the junction of the superior vena cava and right atrium, or 1 cm below the right tracheobronchial angle (never below the main carina), or outside the cardiac silhouette (Fig. 5). The catheter insertion length should be 16–18 cm for right-sided cannulation and 19–21 cm for left-sided cannulation, independent of the gender or body habitus of the patient.
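The insertion-length rule just stated can be captured in a few lines; a minimal sketch, assuming only the right/left distinction given in the text (the function name is hypothetical):

```python
def recommended_insertion_length_cm(side: str) -> tuple:
    """Insertion length (cm) for jugular/subclavian catheters as stated in the text:
    16-18 cm on the right, 19-21 cm on the left, regardless of gender or habitus."""
    side = side.lower()
    if side == "right":
        return (16, 18)
    if side == "left":
        return (19, 21)
    raise ValueError("side must be 'right' or 'left'")

# Example: a right internal jugular catheter secured at 17 cm falls within (16, 18).
```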
Subclavian Vein Catheterization
The access to the subclavian vein may be gained by the
supraclavicular or infraclavicular approach. The angle of
insertion for all infraclavicular approaches is parallel to
the coronal plane. Initially, the patient is positioned in Trendelenburg (15–30°) in order to increase venous return (by about 37%), with a small rolled towel between the scapulae to increase the distance between the clavicle and the first rib. However, the Trendelenburg position is not well tolerated in cardiac patients. In these cases, the legs are elevated to increase venous return and fill the subclavian vein,
facilitating the cannulation. The head is slightly rotated
to the opposite side and the arms are located along the
body. After skin preparation and local anesthesia with
lidocaine (1% or 2%), the needle is advanced 2–3 cm caudal to the clavicle in the deltopectoral angle. The insertion can be performed lateral to the midclavicular line at
the junction of the lateral and middle thirds of the clavicle,
in the mid-clavicle, or at the junction of the middle and
medial thirds of the clavicle (Fig. 6).
When blood comes into the syringe (because slight negative pressure is applied), the needle is fixed with the fingers, the syringe is removed, and the guidewire is introduced about 15 cm with the J-tip directed downward. If resistance is met when advancing the guidewire, both the guidewire and needle should be withdrawn simultaneously. It is always important to keep control of the guidewire. The tip of the guidewire is flexible (the J-tip) to avoid vascular injury during introduction; never try to introduce the other (stiff) end, because of the risk of vascular injury. The catheter should be introduced over the guidewire without resistance.
This approach has a success rate of 70–99%, the site is easier to maintain, and it is preferred when airway control is necessary. Disadvantages include difficulty in
controlling bleeding, higher risk of pneumothorax, and
interference with chest compressions during cardiopulmonary resuscitation.
Central Venous, Arterial, and PA Catheters. Figure 2 Vascular cannulation with a guidewire (the Seldinger technique). A small-bore needle is used to probe for the target vessel; a thin wire with a flexible tip (called a J-tip because of its shape) is passed through the needle and into the vessel lumen; the needle is then removed, leaving the wire in place to serve as a guide for cannulation of the vessel; the vascular catheter is passed over the wire guide (in deep vessels a rigid dilator is first threaded and removed); finally the wire is removed and the catheter is advanced
One study shows that the strongest predictor of
a complication is a failed catheterization attempt. Many
clinicians feel that three attempts are enough, and then it is
time to ask another clinician to attempt catheterization
from another site.
The use of Doppler guidance to reduce the complications related to subclavian vein catheterization needs to be
better elucidated.
Central Venous, Arterial, and PA Catheters. Figure 3 Steps for right internal jugular vein cannulation: (a) preparation of the skin with chlorhexidine and placement of sterile drapes, (b) local anesthesia, (c) venous puncture with the needle and advancement of the guidewire through the needle, (d) skin incision with a blade (to facilitate introduction of the catheter), (e) the guidewire in place and compression of the site to avoid bleeding, (f) introduction of the dilator, (g) the vascular catheter is passed over the guidewire, (h) the guidewire is removed, and (i) the catheter is fixed
Internal Jugular Vein Catheterization
The internal jugular vein has been cannulated with success
rate similar to that of the subclavian vein (58–99% success
rate). Three different approaches have been described (Fig. 7): (a) anterior to the sternocleidomastoid (SCM); (b) central, between the two heads of the SCM; and (c) posterior to the SCM.
The carotid artery lies posterior and medial to the
vein. The operator must maintain minimal pressure on the internal carotid artery with the left hand and, using the central (b) approach previously described (Fig. 7), the vessel is cannulated at an angle of 45°, pointing the needle toward the ipsilateral nipple. The puncture is achieved after introducing the needle about 1–5 cm. The rest of the procedure is the same as for cannulation of the subclavian vein.
Internal jugular vein catheterization has a lower risk of
pneumothorax, and it is easier to compress the insertion site if bleeding occurs. However, the vein may be more difficult to cannulate in patients with volume depletion or shock. Dressing and maintenance of the site are also more difficult.
External Jugular Vein Catheterization
The cannulation of the external jugular vein has reduced
incidence of complications, but a higher incidence of
failure (60–90% success rate). The patient is placed in
the Trendelenburg position with the head turned away
from the insertion site. If necessary, the vein can be
occluded just above the clavicle (with forefinger of the
nondominant hand) to engorge the entry site.
The recommended insertion point is midway between
the angle of the jaw and the clavicle.
The external jugular vein has little support from the
surrounding structures, thus the vein should be anchored
between the thumb and forefinger when the needle is
inserted. Sometimes it is difficult to advance the guidewire
or the catheter. If the catheter does not advance easily, do
not force it, as this may result in vascular perforation.
However, as many as 15% of patients do not have an
identifiable external jugular vein.
It is ideal for patients with coagulopathy, because any significant bleeding can be easily recognized and treated with
local pressure. The risk of pneumothorax is also avoided.
Because catheters inserted through the neck are more difficult to dress and maintain than those in other sites, this approach is not suitable for prolonged venous access.
Central Venous, Arterial, and PA Catheters. Figure 4 Transversal axis ultrasound view of cervical region. In (a) carotid artery is located at the left side (smaller circle) and internal jugular vein is at the right side (larger circle). In (b) there is a collapse of internal jugular vein with transducer compression
Femoral Vein Catheterization
The femoral vein is the easiest large vein to cannulate and cannot lead to pneumothorax. The vein is located just medial to the femoral artery, 2 cm below the inguinal ligament. The needle is directed cephalad at a 45° angle. The distal tip of the needle should not traverse the inguinal ligament, to minimize the risk of retroperitoneal hematoma. The risks of infection and thrombosis limit its general acceptance for long-term use in critically ill patients.
Other disadvantages associated with this route are the
femoral artery puncture (5%) and limited ability to flex
the hip (which can be bothersome for awake patients).
Figure 8 shows the anatomy of the femoral sheath.
Table 3 shows the advantages, disadvantages, and main
contraindications of central vein cannulation.
Complications Related to Vein
Catheterization
Complications occurring during catheter placement
include catheter malposition, arrhythmias, embolization, and vascular, cardiac, pleural, mediastinal, and neurologic injuries. Pneumothorax is the most frequently
reported immediate complication of subclavian vein
catheterization, and arterial (carotid) puncture is the
most common immediate complication of internal jugular vein cannulation.
When the carotid artery is inadvertently punctured (2–10%
of attempted cannulations), the needle should be removed
and pressure should be applied to the site for at least 5 min
(10 min if the patient has coagulopathy). If the carotid
artery has been inadvertently cannulated, the catheter
should not be removed, as this could provoke serious
hemorrhage. In this situation, a vascular surgeon should
be consulted immediately.
Pneumothorax can be detected in the postinsertion
chest films in upright position and during expiration (if
possible). Expiratory films facilitate the detection of small
pneumothoraxes because expiration decreases the volume
of air in the lungs, but not the volume of air in the pleural
space. Pneumothorax can be life threatening in ventilated
patients. In minutes, the patient can develop a tension pneumothorax and progress to cardiopulmonary arrest.
Sometimes the physical examination (hyperresonance on thoracic percussion) provides the clue to the diagnosis.
Pneumothorax may not be radiographically evident
until 24–48 h after central venous cannulation
(delayed pneumothorax). Therefore, the absence of a
pneumothorax on an immediate postinsertion chest
film does not absolutely exclude the possibility of
a catheter-induced pneumothorax. This is an important
consideration in patients who develop dyspnea or other
signs of pneumothorax in the first few days after central venous cannulation.
Central Venous, Arterial, and PA Catheters. Figure 5 Thoracic radiography showing the correct placement of the catheter tip (arrow)
Central Venous, Arterial, and PA Catheters. Figure 6 Three approaches for infraclavicular access to the subclavian vein. a – junction of the lateral and middle thirds of the clavicle; b – mid-clavicle; c – junction of the middle and medial thirds of the clavicle
Central Venous, Arterial, and PA Catheters. Figure 7 Three approaches to access the internal jugular vein. a – anterior to the sternocleidomastoid; b – central, between the clavicular and sternal heads of the sternocleidomastoid; and c – posterior to the sternocleidomastoid
Venous air embolism is one of the most feared complications of central venous cannulation. Prevention is the
hallmark of reducing the morbidity and mortality of
venous air embolism. Placing the patient in the Trendelenburg position with the head 15° below the horizontal plane
can facilitate the elevation of venous pressure above the
atmospheric pressure. Special care must be employed when changing connections in a central venous line.
Long-term complications related to the length of time
that the catheter remains in place include infection and
thrombosis. Surface-modified central venous catheters
have been developed to reduce catheter-related infection
(e.g., the minocycline- and rifampin-impregnated Cook Spectrum Glide® central venous catheter).
The complications observed in a study of over 4,000
cannulations of central veins are shown in Table 4.
Central Venous, Arterial, and PA Catheters. Table 3 Advantages, disadvantages, and main contraindications of central vein cannulation
EJV – Advantages: secure. Disadvantages: difficult access in obese or short-neck patients. Contraindications: previous surgery, difficult view
IJV – Advantages: low risk of pneumothorax. Disadvantages: difficult view in obese patients and with skin flaccidity; high risk of infection. Contraindications: coagulopathy, previous surgery, short neck, or obese patients
SCV – Advantages: constant anatomy. Disadvantages: more risk of pneumothorax, difficult to compress. Contraindications: coagulopathy, clavicle deformity, low functional respiratory reserve, kyphoscoliosis
FV – Advantages: no interference of thoracic masses. Disadvantages: difficult to advance the guidewire in patients with ascites, difficult hygiene, more risk of infection. Contraindications: obesity, urinary incontinence, infection, or local venous thrombosis
EJV – external jugular vein; IJV – internal jugular vein; SCV – subclavian vein; FV – femoral vein
Central Venous, Arterial, and PA Catheters. Figure 8 The anatomy of the femoral sheath (labels: femoral nerve, femoral artery, femoral vein, inguinal ligament, sartorius muscle)
Central Venous, Arterial, and PA Catheters. Table 4 Comparison between the incidence of complications in SCV and IJV
Complication                    SCV (%)   IJV (%)
Risk of arterial puncture       0.5       3.0
Catheter malposition            9.3       5.0
Hemo- or pneumothorax           1.3       1.5
Bloodstream infection           4.0       8.6
Vessel occlusion/thrombosis     1.2       0
SCV – subclavian vein; IJV – internal jugular vein
Swan-Ganz Catheter
The use of the Swan-Ganz (pulmonary artery) catheter is not just important for the specialty of critical care, but is also partly responsible for the existence of the specialty. This catheter is
so much a part of patient care that it is impossible to
function properly in the ICU without a clear understanding of this catheter and the information it provides. It is
indicated whenever the data obtained improves therapeutic decision making. Although no carefully designed study
has definitely established the benefit of hemodynamic
monitoring to the individual patient, it is reasonable to
assume that more precise bedside knowledge of cardiovascular parameters would allow earlier diagnosis and
guide therapy. Table 5 shows the indications for pulmonary artery catheterization most often noted in the
literature.
The Swan-Ganz catheter is a multilumen catheter
110 cm long and has an outside diameter of 2.3 mm
(7 French gauge). There are two internal channels: proximal (right atrium) and distal (pulmonary artery). The tip
of the catheter is equipped with a balloon with 1.5 mL
capacity. Finally, there is a thermistor (i.e., a transducer device that senses changes in temperature) located on the outer surface of the catheter 4 cm from the catheter tip. The thermistor detects the transient change in blood temperature produced by a bolus of cold fluid injected through the proximal port of the catheter; the resulting thermodilution curve is used to calculate the cardiac output. An example of this catheter is illustrated in Fig. 9.
Central Venous, Arterial, and PA Catheters. Table 5 Recommendations for pulmonary artery catheterization
I. Surgical
Perioperative management of high-risk patients undergoing extensive surgical procedures
Postoperative cardiovascular complications
Multisystem trauma
Severe burns
Shock despite perceived adequate fluid therapy
Oliguria despite perceived adequate fluid therapy
II. Cardiac
Myocardial infarction complicated by pump failure
Congestive heart failure unresponsive to conventional therapy
Pulmonary hypertension (for diagnosis and monitoring during acute drug therapy)
III. Pulmonary
To differentiate noncardiogenic (acute respiratory distress syndrome) from cardiogenic pulmonary edema
To evaluate the effects of high levels of ventilatory support on cardiovascular status
Other accessories are available on specially designed Swan-Ganz catheters: (1) an extra channel that can be used as an infusion channel or for passing temporary pacemaker leads into the right ventricle; (2) a fiberoptic system that allows continuous monitoring of mixed venous oxygen saturation; (3) a rapid-response thermistor that can measure the ejection fraction of the right ventricle; and (4) a thermal filament that generates low-energy heat pulses and allows continuous thermodilution measurement of the cardiac output.
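The bolus thermodilution computation itself is not spelled out in this entry; it is conventionally based on the Stewart–Hamilton relation, in which cardiac output is inversely proportional to the area under the temperature–time curve recorded by the thermistor. The sketch below is illustrative only: the function name, the synthetic curve, and the lumped computation constant are assumptions for demonstration, not values from any specific catheter.

import numpy as np

def thermodilution_cardiac_output(times_s, delta_temp_c, injectate_volume_ml=10.0,
                                  blood_temp_c=37.0, injectate_temp_c=0.0,
                                  computation_constant=0.825):
    """Estimate cardiac output (L/min) from a bolus thermodilution curve.

    Stewart-Hamilton relation: cardiac output is proportional to the injectate
    volume times the blood-injectate temperature difference, divided by the
    area under the temperature-change curve recorded by the thermistor.
    The computation constant lumps catheter- and fluid-specific correction
    factors and is illustrative only (real values come from the manufacturer).
    """
    area = np.trapz(delta_temp_c, times_s)          # degC * s
    if area <= 0:
        raise ValueError("Thermodilution curve has no measurable area")
    v_liters = injectate_volume_ml / 1000.0
    co_l_per_s = v_liters * (blood_temp_c - injectate_temp_c) * computation_constant / area
    return co_l_per_s * 60.0                        # L/min

# Example: a synthetic 20-s curve sampled every 0.5 s (peak fall of ~0.6 degC)
t = np.arange(0, 20, 0.5)
curve = 0.6 * np.exp(-((t - 6.0) ** 2) / 8.0)
print(round(thermodilution_cardiac_output(t, curve), 1), "L/min")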
It is essential to prepare the electronic equipment and
test the catheter components before insertion. The access to the central venous circulation for insertion of a Swan-Ganz catheter is the same as that for placement of a central venous catheter in the subclavian or internal jugular position.
The procedure has been facilitated by the use of
introducer assemblies. Once an introducer sheath is in
place, the pulmonary catheter is inserted and advanced
until the tip reaches an intrathoracic vein (as evidenced
by respiratory variations on the pressure tracing). The
balloon is then inflated with 1.5 mL of air and the
catheter is advanced while the pressure waveform and
the electrocardiogram tracing are monitored. The catheter is advanced through the right atrium and into the right ventricle, where a sudden increase in the systolic pressure appears on the tracing. The catheter is subsequently advanced through the pulmonic valve and into the pulmonary artery, where a sudden increase in the diastolic pressure is recorded. The catheter is gently advanced until a pulmonary artery occlusion or "wedge" tracing is obtained (Fig. 10). The balloon is deflated, a pulmonary artery tracing is confirmed, the catheter is secured, and a chest radiograph is obtained (Fig. 11).
Central Venous, Arterial, and PA Catheters. Figure 9 The Swan-Ganz catheter. PA – pulmonary artery; RA – right atrium
Central Venous, Arterial, and PA Catheters. Figure 10 Pressure tracing recordings with corresponding locations as the pulmonary catheter is passed into the "wedge" position: (1) right atrium, (2) right ventricle, (3) pulmonary artery, (4) pulmonary artery branch ("wedge")
Central Venous, Arterial, and PA Catheters. Figure 11 Normal course of a Swan-Ganz catheter. A Swan-Ganz catheter inserted on the right goes into the subclavian vein (Sc), the superior vena cava (SVC), right atrium (RA), right ventricle (RV), main pulmonary artery (MPA), and, in this case, the right lower lobe pulmonary artery (RLL PA)
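The tracing progression just described (and shown in Fig. 10) amounts to a simple set of pattern rules: a systolic step-up marks entry into the right ventricle, a diastolic step-up marks entry into the pulmonary artery, and a low, damped tracing marks the atrium or the wedge position. A minimal, purely illustrative sketch follows; the numeric thresholds are hypothetical teaching values, not validated cut-offs, and real waveform interpretation also relies on the shape of the trace.

def classify_pa_catheter_tip(systolic, diastolic):
    """Rough location heuristic from one systolic/diastolic pressure pair (mmHg).

    Follows the progression described in the text: low, damped pressures in the
    right atrium; a sudden rise in systolic pressure on entering the right
    ventricle; a sudden rise in diastolic pressure on entering the pulmonary
    artery; and a low, damped tracing on balloon occlusion ("wedge").
    All thresholds are illustrative, not validated clinical cut-offs.
    """
    pulse_pressure = systolic - diastolic
    if systolic < 15 and pulse_pressure < 8:
        return "right atrium or wedge (low, damped tracing)"
    if systolic >= 15 and diastolic < 8:
        return "right ventricle (systolic step-up, low diastolic)"
    if systolic >= 15 and diastolic >= 8:
        return "pulmonary artery (diastolic step-up)"
    return "indeterminate"

for reading in [(5, 2), (25, 4), (25, 12), (10, 8)]:
    print(reading, "->", classify_pa_catheter_tip(*reading))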
Data Collected for Swan-Ganz Catheter
The Swan-Ganz catheter provides a significant amount of physiologic information that can guide therapy in critically ill patients. This information includes central venous pressure; pulmonary artery diastolic, systolic, and mean pressures; pulmonary artery occlusion ("wedge") pressure; cardiac output by bolus or continuous thermodilution techniques; mixed venous blood gases by intermittent sampling; and continuous mixed venous oximetry. A multitude of derived parameters can also be obtained.
Hemodynamic variables are often expressed in relation to body size. A simple equation can replace the use of nomograms: BSA (m²) = [Ht (cm) + Wt (kg) − 60]/100.
The parameters of cardiovascular performance directly measured (and normal values) are shown below:
Central venous pressure (CVP) = 1–6 mmHg. CVP is equal to the pressure in the right atrium. The right atrial pressure (RAP) should be equivalent to the right-ventricular end-diastolic pressure (RVEDP); thus CVP = RAP = RVEDP.
Pulmonary capillary wedge pressure (PCWP) = 6–12 mmHg. PCWP should be the same as the left-atrial pressure (LAP). The LAP should also be equivalent to the left-ventricular end-diastolic pressure (LVEDP) when there is no obstruction between the left atrium and ventricle: PCWP = LAP = LVEDP.
Cardiac index (CI) = cardiac output/BSA (N = 2.4–4.0 L/min/m²)
Stroke volume index (SVI) = CI/heart rate (HR) (N = 40–70 mL/beat/m²)
Right ventricular ejection fraction (RVEF) = SV/RVEDV (N = 46–50%)
Right ventricular end-diastolic volume (RVEDV) = SV/RVEF (N = 80–150 mL/m²)
Left ventricular stroke work index (LVSWI) = (MAP − PCWP) × SVI × 0.0136 (N = 40–60 g·m/m²), where MAP = mean arterial pressure and 0.0136 is the factor that converts pressure and volume units to units of work
Right ventricular stroke work index (RVSWI) = (PAP − CVP) × SVI × 0.0136 (N = 4–8 g·m/m²), where PAP = mean pulmonary arterial pressure
Systemic vascular resistance index (SVRI) = (MAP − RAP) × 80/CI (N = 1,600–2,400 dyne·s·m²/cm⁵)
Pulmonary vascular resistance index (PVRI) = (PAP − PCWP) × 80/CI (N = 200–400 dyne·s·m²/cm⁵)
The parameters of systemic oxygen transport are shown below (Hb = hemoglobin):
Mixed venous oxygen saturation (SvO2) = 70–75%
Oxygen delivery (DO2) = CI × 13.4 × Hb × SaO2 (N = 520–570 mL/min/m²)
Oxygen uptake (VO2) = CI × 13.4 × Hb × (SaO2 − SvO2) (N = 110–160 mL/min/m²)
Oxygen extraction ratio (O2ER) = (VO2/DO2) × 100 (N = 20–30%)
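All of the derived parameters above are simple arithmetic on the directly measured values. The sketch below merely collects those formulas in one place; the function name and the sample inputs are illustrative, and the SVI line includes the implicit liter-to-milliliter conversion.

def derived_hemodynamics(ht_cm, wt_kg, co_l_min, hr, map_mmhg, rap_mmhg,
                         pap_mmhg, pcwp_mmhg, hb_g_dl, sao2, svo2):
    """Compute the indexed parameters listed above (saturations as fractions)."""
    bsa = (ht_cm + wt_kg - 60) / 100.0             # m2, nomogram substitute
    ci = co_l_min / bsa                            # L/min/m2
    svi = ci / hr * 1000.0                         # mL/beat/m2
    lvswi = (map_mmhg - pcwp_mmhg) * svi * 0.0136  # g*m/m2
    rvswi = (pap_mmhg - rap_mmhg) * svi * 0.0136   # g*m/m2 (CVP taken as RAP)
    svri = (map_mmhg - rap_mmhg) * 80.0 / ci       # dyne*s*m2/cm5
    pvri = (pap_mmhg - pcwp_mmhg) * 80.0 / ci      # dyne*s*m2/cm5
    do2 = ci * 13.4 * hb_g_dl * sao2               # mL/min/m2
    vo2 = ci * 13.4 * hb_g_dl * (sao2 - svo2)      # mL/min/m2
    o2er = vo2 / do2 * 100.0                       # %
    return dict(BSA=bsa, CI=ci, SVI=svi, LVSWI=lvswi, RVSWI=rvswi,
                SVRI=svri, PVRI=pvri, DO2=do2, VO2=vo2, O2ER=o2er)

# Illustrative inputs: 170 cm, 70 kg, CO 5.5 L/min, HR 72/min, MAP 90, RAP 5,
# mean PAP 16, PCWP 8 mmHg, Hb 14 g/dL, SaO2 0.98, SvO2 0.72
for name, value in derived_hemodynamics(170, 70, 5.5, 72, 90, 5, 16, 8,
                                         14, 0.98, 0.72).items():
    print(f"{name}: {value:.1f}")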
Complications of Swan-Ganz Catheter
The most common complication during passage of the pulmonary artery catheter is the development of arrhythmias. If an arrhythmia is noted, withdraw the catheter into the vena cava, and the arrhythmia should disappear. Treatment of arrhythmias is rarely necessary, the exceptions being complete heart block (which should be treated with a temporary transvenous pacemaker) and sustained ventricular tachycardia (which should be treated with lidocaine or another suitable antiarrhythmic agent).
Coiling, looping, or knotting in the right ventricle may
occur during catheter insertion. This can be avoided if no
more than 10 cm of catheter is inserted after a ventricular
tracing is visualized and before a pulmonary artery tracing
appears. Aberrant catheter locations, such as pleural, pericardial, peritoneal, aortic, vertebral artery, renal vein, and inferior vena cava positions, have also been reported.
After catheter insertion, the complications include
infection, thromboembolism, pulmonary infarction, pulmonary artery rupture, hemorrhage, pseudoaneurysm
formation, thrombocytopenia, cardiac valve injuries,
catheter fracture, and balloon rupture.
Finally, complications can result from delay in treatment because of time-consuming insertion problems and
from inappropriate treatment based on erroneous information or erroneous data interpretation.
Arterial Catheterization
Arterial catheterization is indicated whenever continuous
monitoring of blood pressure or frequent sampling of
arterial blood is required. Patients with shock, hypertensive crisis, major surgical interventions, and high levels of
respiratory support require precise and continuous blood
pressure monitoring, particularly when vasoactive or inotropic drugs are being administered. In shock patients, the
difference between direct blood pressure and cuff blood
pressure could be more than 30 mmHg in 50% of patients.
The radial, ulnar, axillary, brachial, femoral, dorsalis
pedis, and superficial temporal arteries have been used to
access the arterial circulation for continuous monitoring.
The radial artery of the nondominant hand should be
attempted first. The dual blood supply to the hand and
the superficial location of the vessel make the radial artery
the most commonly used site for arterial catheterization.
The Allen test is frequently used to test the adequacy of
collateral circulation before cannulation (Fig. 12). Ultrasonic Doppler technique, plethysmography, and pulse
oximetry have also been used to assess the adequacy of
the collateral arterial supply.
The puncture site is slightly proximal (2 cm) to the
flexion skin fold, with a small catheter at a 30–45° angle to
the skin. For cannulation in the direct threading technique, the anterior wall of the artery is penetrated (A).
When blood return is noted (B), the catheter is advanced
farther up the arterial lumen as the needle is withdrawn
(C) (Fig. 13). The cannula is then connected to a pressure
monitoring system.
The axillary artery has been recommended for longterm direct arterial pressure monitoring because of its
larger size, freedom for the patient’s hand, and close proximity to the central circulation. Pulsation and pressure
are maintained even in the presence of shock with
marked vasoconstriction. Thrombosis does not result in
compromised flow in the distal arm because of the extensive collateral circulation. The major disadvantages are its relatively poor accessibility and visibility and its location within the neurovascular sheath, which may increase the risk of neurologic compromise if a hematoma develops.
The major advantages of using femoral artery are its
superficial location and large size, which allow easier
localization and cannulation when the pulses are absent
over more distal vessels. The major disadvantages are the
decreased mobility of the patient, contamination from
ostomies or draining abdominal wounds, and the possibility of occult bleeding into the abdomen or thigh.
Both axillary and femoral arteries are cannulated by
using the modified Seldinger technique.
The dorsalis pedis artery may be absent in up to 12%
of feet. Assessment of collateral flow to the remainder of
the foot through the posterior tibial artery should precede
cannulation. This can be done by occluding the dorsalis
pedis artery, blanching the great toe by compressing the
toenail for several seconds, and then releasing the toenail
while observing the return of color.
The major disadvantages of using the dorsalis pedis
artery are its relatively small size and overestimation of
systolic pressure (5–20 mmHg higher than the radial artery).
Central Venous, Arterial, and PA Catheters. Figure 12 Allen test: In (a) occlusion of both ulnar and radial arteries while
patient makes a fist; (b) radial and ulnar arteries occluded after hand is opened; and (c) release of pressure on ulnar artery and
observation for color return to hand within 5–10 s. This is a demonstration of patency of ulnar artery
Central Venous, Arterial, and PA Catheters. Figure 13 Direct approach to cannulation of the radial artery
The superficial temporal artery has been extensively
used in infants and in some adults for continuous
pressure monitoring. Because of its small size and
tortuosity, surgical exposure is required for cannulation. A small incidence of neurologic complications
resulting from cerebral embolization has been reported
in infants.
The brachial artery is not used often because of the high
complication rate associated with arteriography. Although
this artery has been successfully used for short-term
monitoring, there are few data to support prolonged
brachial artery monitoring, and its use has been discouraged. Disadvantages include difficulty in maintaining the
site and the possibility of hematoma formation in
anticoagulated patients. The latter may lead to median
nerve compression neuropathy and Volkmann’s contracture. Compartment syndrome of the forearm and hand
has also been reported.
Complications of Arterial Catheterization
Major complications for all sites of arterial line insertion
include: bleeding, ischemia, distal embolization, sepsis,
neuropathy, arteriovenous fistula, and pseudoaneurysm
formation. Inadvertent injection of vasoactive drugs or
other agents into an artery can cause severe pain, distal
ischemia, and tissue necrosis. Minor complications are:
thrombosis, skin ischemia and local inflammation, infection, and hematoma. Infections are more frequent (a) after 4 days of catheter placement, (b) when insertion is made by surgical cutdown, and (c) in the presence of local inflammation.
References
1. Irwin RS, Rippe JM (eds) (2007) Intensive care medicine, 6th edn. Wolters Kluwer/Lippincott Williams & Wilkins, Philadelphia, PA
2. Marino PL (1998) The ICU book, 2nd edn. Williams & Wilkins, Baltimore, MD
3. O'Donnell JM, Nácul FE (eds) (2001) Surgical intensive care medicine. Kluwer, Boston, MA/Dordrecht/London
Cerebral Abscess
▶ Post-neurosurgical Brain Abscess and Subdural Empyema
Cerebral Concussion
DANIEL B. CRAIG1, KATHRYN M. BEAUCHAMP2
1 Denver, CO, USA
2 Department of Neurosurgery, Denver Health Medical Center, University of Colorado School of Medicine, Denver, CO, USA
Synonyms
The term “cerebral concussion” is often used interchangeably with “minor traumatic brain injury” or MTBI. Other
less common synonyms are: mild head injury, minor head
trauma, mild brain injury.
Definition
Cerebral concussion can be defined as a post-traumatic,
immediate, and transient change in neural function. The
roots of “concussion” come from two Latin words –
concutere (to shake violently) and concussus (the act of
striking together). It is the most common type of head
injury and has been recognized as a group of symptoms for
centuries. The diagnosis of concussion is almost entirely
clinical, and the range of symptoms is broad. The specific
clinical definition of concussion has been contested over
the years. The Vienna conference in 2001 set out to offer
a comprehensive current definition resulting in a broad
definition with highlights including: “an impulsive force
transmitted to the head. . .rapid onset of short-lived
impairment. . .resolves spontaneously. . .symptoms largely
reflect a functional disturbance rather than structural
injury. . .may or may not involve loss of consciousness. . .sequential resolution of symptoms. . .associated with
grossly normal structural imaging studies.”
The Debate
Concussion amongst athletes is increasingly common and
therefore it is important to have a common definition
amongst practitioners from which to guide treatment.
Many consider loss of consciousness at the time of injury
or some minimal period of peri-traumatic amnesia as necessary symptoms. Others have searched for a clear structural
brain injury – especially as imaging modalities improved to
complement the physiological definition. In general, neurological and cognitive concussion symptoms are immediate, transient, caused by blunt-force trauma, and generally
exist without a clear structural anatomic lesion.
Historically, several evaluation systems evolved to categorize injuries based on initial symptoms, and many
C
recovery guidelines still use this grading system to determine return to activity protocol. At the Prague International Conference on Concussion in Sport (2004) [1], the
authors supported the trend toward abandoning the classic concussion grading scale as true severity of injury has
shown limited correlation to the number and duration of
acute concussion signs/symptoms. Instead, they argued
for a division based on management needs into simple
or complex concussion. Progressive resolution of symptoms within 7–10 days defines simple concussion and
represents the vast majority of injuries without the need
for formal intervention or extensive neuropsychological
screening. Complex concussion indicates persistent symptoms, prolonged loss of consciousness (>1 min), or
prolonged cognitive impairment and requires more formal medical management with consideration of imaging,
and multi-disciplinary follow-up.
One final area of current debate is that of concussion
being a linear spectrum of severity or a group of distinct
subtypes. The spectrum ideology has been the classic
model, but the variations in clinical outcome with the
same impact force suggest discrete differences in pathology.
This complements the progress in understanding the mechanism of injury, and may help explain why a clear common
pathway in concussion remains somewhat elusive.
Mechanism
The blunt forward or oblique force with impact causes a
rapid acceleration/deceleration of the head and a resulting
anterior/posterior movement of the brain within the cranial vault. Several theories about the resulting neuronal
dysfunction involve ionic shifts, altered metabolism,
impaired connectivity, and changes in neurotransmission.
Reticular theory suggests a temporary paralysis of the
brainstem reticular formation. Centripetal hypothesis
involves a mechanical disruption of neuronal tracts. The
pontine cholinergic scheme describes an activation of
cholinergic neurons causing a suppressed behavioral
response. The convulsive response theory is based on
induction of generalized neuronal firing. No conclusive
human studies have confirmed a specific mechanism, and
the underlying pathophysiology is likely multifactorial.
Giza and Hovda [2] describe in an animal model
a train of biochemical activity in response to blunt trauma.
The initial insult causes a neurochemical cascade and
membrane/axon dysfunction, leading to: increased extracellular potassium → depolarization → excitatory neurotransmitter release → neurotransmitter storm → generalized post-storm suppression. This series ultimately
causes increased glucose use and lactate production,
decreased cerebral blood flow, NMDA receptor activation,
calcium influx, and impaired oxidative metabolism. This
theory is limited with respect to neuro/cognitive evaluation in an animal model, but it does offer a hypothesis on
the biochemical basis of the generalized post-concussion
syndrome.
Overall, concussion can be viewed as a combination of
mechanical changes from shearing or torsional forces in
addition to a related cascade of neurochemical events.
This is characterized in an animal model by massive initial
depolarization and ultimate decrease in cerebral perfusion
leading to metabolic depression. The real-time progression of physiologic events helps explain the later onset of
some symptoms of concussion, and the cascade has been
shown to render brain tissue more vulnerable to further
injury. Recent studies have shown persistent metabolic
alterations long after initial injury, and further studies
may one day explain or even predict long-term postconcussion syndrome.
Presentation
The injured patient will present after a non-penetrating
blunt impact to the head with a range of neurologic and
cognitive symptoms. Common presentations include:
brief loss of consciousness, a period of retrograde and anterograde amnesia, visual disturbances, and disorientation. He
or she may also experience dizziness, nausea and vomiting,
balance problems, emotional lability, sleep disturbances,
sensitivity to light or sound, fatigue, numbness/tingling,
and loss of concentration. Autonomic signs include pallor,
bradycardia, mild hypotension, and sluggish pupillary
reaction. Less common are brief convulsions and specific
neurologic deficits.
Assessment of these symptoms may depend on witnesses to the event (loss of consciousness may be very brief
and often missed), and availability of immediate evaluation (symptoms may be transient). This need for rapid
assessment of mental status and neurologic changes
encourages the use and standardization of the initial survey by the first responder (coach, trainer, doctor, EMT).
Treatment
In the majority of cases, the symptoms of concussion
resolve spontaneously – usually in 7–10 days. Thorough
serial neurologic examinations are crucial to monitor resolution of symptoms and to rule out more serious injury.
At a molecular and anatomic level, the pathophysiology of
the brain injury and its progression remains loosely
defined and thus resists development of medical/surgical
intervention to hasten recovery. As our ability to evaluate
concussion has grown, the need for and length of hospitalization has decreased. Several overlapping and competing theories for the proper rehabilitative steps to return to
play will be discussed in more detail later.
A handful of studies have examined the importance of
intentional supervised rehabilitation post concussion.
Original trials in the late 1970s showed the benefit of
early ambulation, activity, and education [3]. More
recently, there is focus on outpatient rehab as hospital
stays decrease. A randomized controlled trial in 2007
showed that early active rehabilitation did not change
the outcomes of post-concussive symptom resolution
and life satisfaction after 1 year between intervention
and control groups [4]. Treatment is primarily tailored
to the individual while determining the extent of injury.
One of the most important factors in recovery remains the
time from the initial injury.
The pharmacologic management of the specific symptoms of concussion lacks high-level evidence. Antidepressants are the most commonly prescribed treatment for
post-concussion syndrome, specifically SSRIs and newer
heterocyclics. Trazodone can be an effective choice for
insomnia, although its anticholinergic side effect profile
may limit its use. Acetylcholinesterase inhibitors (physostigmine, donepezil), and choline precursors (lecithin,
CDP-choline) have been shown to improve neuropsychological test performance, but are limited by short half-life,
side effects, and route of administration.
Evaluation
At the Scene
Initial survey at the scene of the head injury is crucial.
Primary trauma survey should be the first step (airway,
breathing, circulation, disability, exposure). After vital
signs have been stabilized, a more detailed examination
may be performed. A neurologic exam assessing cranial
nerves, coordination, motor function, and cognitive function should be performed based on the severity of injury.
Questionnaires such as the mini-mental-status exam and
the Maddocks questions can be quite useful for quick
evaluation of a patient’s cognition and orientation and
can be learned and used effectively by nonmedical
personnel.
At the Hospital
Patients arriving at the hospital after concussion should be evaluated similarly to the initial screen, but with a more thorough neurologic exam. A non-contrast head CT may
be considered. Indications of a more serious injury
Cerebral Concussion
include: focal neurologic deficit; seizures; prolonged
altered level of consciousness; oto-rhinorrhea; diplopia;
anisocoria; and progressive symptoms. Concussion is classically characterized as immediate onset of symptoms, but
this is not always the case. Symptoms may arise up to
hours after the initial injury. However, progressive worsening of an established symptom is a red flag for possible
structural injury and indicates the need for further
workup.
Patients presenting with a mild concussion generally
do not require hospital admission, but it is important to
verify that they go home with another adult capable of
following their symptoms. For outpatient treatment of
pain, acetaminophen is the drug of choice; narcotics and NSAIDs should be avoided due to the risks of increased intracranial pressure and hemorrhage, respectively.
If the presence of a more serious injury is suspected
based on previously mentioned criteria, a more thorough
evaluation is warranted. These patients should be admitted for close observation. In these instances, CT or MRI
imaging is useful to rule out epidural/subdural hematoma,
cerebral contusion, or possible skull fracture. Length of
inpatient stay depends on individual recovery and diagnosis of more serious injury.
The Acute Concussion Evaluation (ACE) provides
a thorough initial medical evaluation of the patient
presenting with concussion. This questionnaire evaluates
the specifics of the injury and assessment of symptoms
and risk factors. The Sport Concussion Assessment Tool
(SCAT) developed by the Prague International Conference
in 2004 is another tool for thorough graded assessment
combining elements of the sideline tests with more
exhaustive neuro-physical exam. The use of
a standardized form is especially helpful to provide
a baseline for future evaluations. It can also be an effective
means of standardizing communication between various
levels of medical personnel (PA, EMT, nurse, trainer, MD).
Role of Imaging
Indications for CT and MRI to rule out more serious injury have classically been controversial and subjective, and there have been efforts to standardize these practices (Canadian Head CT Rule, New Orleans Criteria). In addition to ruling out anatomic injury, the role of imaging in the evaluation of concussion has progressed significantly with the advent of more sophisticated techniques. Thus far, studies have demonstrated an inability to correlate postconcussive MRI findings with symptoms or long-term outcome. Positron emission tomography (PET) scans consistently demonstrate frontal and/or temporal hypometabolism following concussion, both at rest and during tasks, but have had limited clinical applicability. Several newer modalities show promise in both diagnosis and assessment of recovery, specifically functional MRI (fMRI), event-related potentials (ERPs), and magnetic source imaging (MSI) [5].
fMRI studies demonstrate increased overall brain activation during memory and sensorimotor tasks in postconcussion patients, and note a discernible difference in
prefrontal cortex usage between injured and non-injured
subjects with an inverse relationship between prefrontal
working memory area (mid-dorsolateral) usage and
symptom severity. The symptomatic subjects included in
these studies had no abnormalities on T2 MRI, thus functional impairment can be observed in the absence of
clinical imaging abnormalities. ERPs represent the averaged electroencephalogram (EEG) signal in response to a stimulus, and both response time and amplitude have proven to vary consistently with symptom severity, whereas EEG and evoked potential (EP) testing results have been largely mixed and inconclusive. MSI integrates MRI anatomic data with magnetoencephalography, which detects the magnetic fields generated by electrical currents flowing parallel to the skull surface, capturing real-time brain activity without distortion from conductivity differences between brain, bone, and skin.
MSI sensitivity surpasses that of MRI or EEG alone. The
clinical application of these more complex imaging techniques is in its infancy, but they show potential for a quantifiable
method of diagnosis, assessment of recovery, and further
explanation of underlying pathophysiology [5].
Effectiveness
The effectiveness of mTBI evaluation is best measured by
the ability to rule out more serious injury and ensure safe
return to normal activity. As discussed in the previous
section, consistent clinical exams, proper use of imaging,
and patience ensure high sensitivity and specificity of
diagnosis.
Tolerance
The cautious approach to returning to high risk activity
comes in large part from the vulnerability of the postconcussion patient to a second injury more severe than the
first. The Second Impact Syndrome (SIS) is a controversial
term introduced in the 1980s describing repeat head
injury within a few weeks of a concussion causing diffuse
cerebral swelling, brain herniation, and death. Controversy stems from the paucity of case data with most examples coming from disputed cases primarily in children [6].
Despite the low incidence, the extreme morbidity of SIS
makes it hard to ignore and encourages lengthening recovery time especially in children.
While the SIS debate continues, the increased risk for
subsequent traumatic brain injury (TBI) in patients who
have sustained at least one previous TBI is much less
contested. The significance of this increased risk encompasses more than just multiple mTBI recoveries as
a history of repeated concussions over an extended period
of months to years can result in cumulative neurologic and
cognitive deficits. This has been studied especially in
boxers and football players and is a major concern regarding long-term management. This cumulative risk as well
as the fear of SIS explains the previous concussion as
a factor in return to play guidelines.
Pharmacoeconomics
The CDC estimates the incidence of mTBI at 1.1 million
per year including 300,000 sports-related concussions.
This translates to roughly $17 billion spent on concussion
evaluation and treatment each year. These figures likely
underestimate actual disease burden due to variance in
assessment and reporting. Costs include both direct evaluation and treatment and time lost at work or on the field.
After-care
At Home
Patients sent home with a mild concussion require observation and frequent reassessment, noting any continuation and/or deterioration of symptoms. Mental status
changes and potential amnesia in the patient make it
important to clearly explain red flag symptoms and serial
evaluation instructions to the accompanying adult as well
as the patient. There has been no documented evidence
regarding the common practice of waking the patient
every 3–4 h to assess, but in patients experiencing loss
of consciousness, prolonged amnesia, or other persistent
significant symptoms this is still recommended. Decisions regarding day to day activities such as driving and
return to work or school should reflect the individual
patient's rate of recovery. A follow-up visit to an outpatient provider is appropriate to confirm symptom
resolution.
Return to Play
Many sets of guidelines offer rational and clinical approaches for return to activity, and it is important to clearly communicate these recommendations amongst patients, families, and providers. In 1991, the Colorado Medical Society Guidelines (CMSG) emerged in response to several deaths due to head injury, and, along with algorithms by Cantu (1986), the American Academy of
Neurology (AAN) and others, their structure reflects the
initial symptom milieu of the injury (specifically presence
and duration of loss of consciousness and amnesia) and
number of previous concussions. These guidelines express
the need for longer asymptomatic waiting periods based
on injury severity.
The Vienna Summary and Agreement Statement from
2001 emphasizes a medically supervised, stepwise process
for return to play, moving from no activity to light aerobic
exercise, sport-specific training, non-contact drills, full-contact training, and finally game play, with a minimum
stage duration of 24 h. This system depends first on the
complete lack of symptoms before starting any activity.
These two schools of thought lend themselves to a combination approach with the guidelines of the CMSG,
Cantu, AAN et al. determining when to begin activity
and the Vienna statement clarifying a stepwise method
of return to full-speed, full-contact participation.
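The stepwise Vienna progression lends itself to a schematic illustration. The sketch below only encodes the two rules stated above (complete freedom from symptoms before any activity, and a minimum of 24 h per stage); the handling of recurrent symptoms is simplified to a return to rest, and the code is an illustration of the logic, not a clinical decision tool.

from datetime import timedelta

# Stages of the Vienna (2001) graded return-to-play progression, in order
STAGES = [
    "no activity",
    "light aerobic exercise",
    "sport-specific training",
    "non-contact drills",
    "full-contact training",
    "game play",
]

MIN_STAGE_DURATION = timedelta(hours=24)

def next_step(current_stage, time_in_stage, symptom_free):
    """Advance one stage only if the athlete is symptom-free and has spent at
    least 24 h in the current stage; any symptoms restart from rest in this
    simplified sketch."""
    if not symptom_free:
        return STAGES[0]
    if time_in_stage < MIN_STAGE_DURATION:
        return current_stage
    i = STAGES.index(current_stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]

print(next_step("light aerobic exercise", timedelta(hours=30), symptom_free=True))
# -> sport-specific training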
Prognosis
Most concussions fall into the “simple” category as
outlined earlier. In these cases the prognosis is very
good, with symptoms resolving completely in 7–10 days
in 90% of cases. In more complex injuries, the duration
and severity of symptoms increase, but a full recovery is
common. Up to one-third of patients report increased
headaches 1 year after trauma. Multiple concussions over
time increase the risk of permanent neurologic damage,
and up to 15% of patients may experience long-term
symptoms after a single event.
References
1. McCrory P, Johnston K, Meeuwisse W et al (2005) Summary and agreement statement of the 2nd international conference on concussion in sport. Clin J Sport Med 15(2):48–55
2. Giza C, Hovda D (2001) The neurometabolic cascade of concussion. J Athletic Train 36(3):228–235
3. Relander M, Troupp H, Bjorkesten G (1972) Controlled trial of treatment of cerebral concussion. Br Med J 4:777–779
4. Andersson E, Emanuelson I, Bjorklund R et al (2007) Mild traumatic brain injuries: the impact of early intervention on late sequelae, a randomized control trial. Acta Neurochir 149(2):151–160
5. Mendez C, Hurley R, Lassonde M et al (2005) Mild traumatic brain injury: neuroimaging of sports-related concussion. J Neuropsychiatry Clin Neurosci 17:297–303
6. McCrory P (2001) Does second impact syndrome exist? Clin J Sport Med 11(3):144–149
7. Aubry M, Cantu R, Dvorak J et al (2002) Summary and agreement statement of the first international conference on concussion in sport, Vienna 2001. Br J Sports Med 36:6–7
Cerebral Malaria
SUZANNE M. SHEPHERD, WILLIAM H. SHOFF
Department of Emergency Medicine, Hospital of the
University of Pennsylvania, Philadelphia, PA, USA
Synonyms
Paludism
Definition
Cerebral malaria has been strictly defined by the World Health Organization (WHO, 2000) as confirmed plasmodium infection, usually P. falciparum, in a patient who is unarousable (Glasgow Coma Scale score ≤9) and in whom other potential causes of coma have been excluded. Many metabolic and infectious processes may cause the types of neurologic signs and symptoms associated with malaria, and the presence of malaria parasites may be incidental in endemic areas. This rather strict definition was developed for research; many individuals with cerebral malaria have less severe impairment of consciousness. In practice, the diagnosis of cerebral malaria is difficult, with a high sensitivity but a low specificity. The diagnosis of falciparum malaria should be
considered in any patient with a febrile illness including
neurological symptoms, who has visited or lived in
a malaria-endemic area in the past 3 months. Cerebral
malaria is an acute, widespread infection of the brain
with features of diffuse encephalopathy. The neurological
manifestations of malaria develop rapidly and include
acute severe headache, irritability, agitation, delirium,
psychosis, seizures, and the hallmarks of impaired consciousness and coma. Cerebral malaria is the most serious complication of falciparum infection and the most
common cause of death. Cerebral malaria is also reported
in individuals infected with P. vivax and, more recently, P. knowlesi.
Treatment
If left untreated, cerebral malaria is fatal within days of
infection. Severe malaria is considered a medical emergency and institution of immediate treatment is crucial.
Patients should be managed at the highest level of care at
the best available health care facility. ICU facilities are
limited in malarial areas; therefore, patient triage to these
scarce resources must identify those at greatest risk of
complications. Misra et al. developed a simple but
specific triage tool for adults, the malaria severity
assessment (MSA) score; however, this tool requires
information that is not available at the time of hospitalization. Hanson et al., using logistic regression, developed a five-point scoring system which was validated in
patient series from Vietnam and Bangladesh. The level of
acidosis (base deficit) and the Glasgow Coma Scale were
the two main independent predictors of outcome and the
coma acidosis malaria (CAM) score was derived from
these variables. Mortality was found to increase with
increasing score. A CAM score <2 predicted survival
(PPV 95.8%, CI 93–97.7%) and safe treatment on
a general ward if renal function could be carefully monitored [1].
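The published cut-offs used to convert base deficit and the Glasgow Coma Scale into the CAM score are not reproduced in this entry, so the sketch below takes the score itself as an input and simply encodes the triage rule quoted above; it is illustrative only and not a clinical decision aid.

def triage_severe_malaria(cam_score, renal_function_can_be_monitored):
    """Illustrative triage from the rule quoted in the text: a coma acidosis
    malaria (CAM) score <2 predicted survival and safe treatment on a general
    ward provided renal function could be carefully monitored; otherwise the
    patient should be managed at the highest available level of care."""
    if cam_score < 2 and renal_function_can_be_monitored:
        return "general ward with careful monitoring of renal function"
    return "highest available level of care"

print(triage_severe_malaria(1, True))   # general ward with careful monitoring...
print(triage_severe_malaria(3, True))   # highest available level of care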
Treatment involves administration of parenteral
antimalarials, close patient monitoring to ensure early
recognition and management of common complications, and the use of adjunctive treatment measures.
Common complications include hypoglycemia, seizures,
fluid and electrolyte imbalances, anemia, coagulation
disorders, acidosis, respiratory distress, and renal
dysfunction. Serum glucose, sodium, lactate, urine output, and renal function should be monitored frequently.
Hypoglycemia may occur with minimal to no clinical
signs; therefore, serum glucose should be monitored at
frequent intervals.
Pharmacologic Management
Use of antimalarials is the only treatment that clearly
reduces mortality. Antimalarials are administered intravenously for 48 h and then orally if the patient is able to take
oral medications. Even fast acting antimalarials often
require 12–18 h to kill plasmodia. Treatment response
is assessed by daily parasite count until clearance of all
P.falciparum trophozoites is achieved from the blood.
Parasitemia may increase during the initial 12–24 h
because available antimalarials do not inhibit schizont
rupture with release of merozoites. Rising parasitemia
beyond 36–48 h after the initiation of antimalarials indicates treatment failure, usually because of high-level drug
resistance. Because nonimmune hosts may have a high
pretreatment total parasite burden (1,000 parasites), it
may take up to 6 days to achieve complete elimination.
Treatment duration depends on the sensitivity of the parasite and parasite burden, but usually lasts 7 days. Parenteral quinine has been the traditional treatment of choice
for cerebral malaria, as patients with severe malaria are
assumed to have chloroquine resistance. Artemisinin
derivatives are now recommended by the World Health
Organization (WHO) as the drugs of choice for severe
malaria. Both drugs are used in combination with other
535
C
536
C
Cerebral Malaria
antimalarial drugs, such as doxycycline (100 mg bid PO or
IV for 7 days) with quinine, to shorten therapy duration and
prevent the emergence of resistance.
Quinine is one of four main alkaloids derived from the
bark of the Cinchona tree. Quinine kills plasmodia in the
late stages of their erythrocyte cycle via inhibition of
hemozoin biocrystallization, thereby facilitating the aggregation of cytotoxic heme products. A loading dose of quinine is recommended to rapidly achieve antiparasitic levels: 20 mg/kg body weight (salt), in normal saline or dextrose saline solution, is infused over 4 h, preferably via an infusion pump. Maintenance dosing is 10 mg/kg body weight (salt) infused every 8 h until the patient is able to take oral medication. Quinine has a narrow therapeutic window. Quinine can produce hypoglycemia via promotion of insulin secretion. Quinine causes hypotension with rapid intravenous infusion. It slows ventricular repolarization, with resultant QT prolongation. Quinine also produces cinchonism and dizziness. Quinidine has been used preferentially in the USA, given as a loading dose of 6.25 mg/kg base (=10 mg/kg salt) infused intravenously over 1–2 h, and then as a continuous infusion of 0.0125 mg/kg/min base (=0.02 mg/kg/min salt). If continuous infusion is not feasible, a 15 mg/kg base (=24 mg/kg salt) loading dose is infused intravenously over 4 h, then 7.5 mg/kg base (=12 mg/kg salt) is infused over 4 h every 8 h, beginning 8 h after the loading dose. Cinchonism is
a symptom complex characterized by tinnitus, hearing
impairment, postural hypotension and vertigo or dizziness that occurs in a high percentage of individuals treated
with quinine for malaria.
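Because every dose in the quinine and quinidine regimens above is weight-based, the arithmetic can be restated programmatically. The sketch below simply multiplies the per-kilogram doses quoted in the text by body weight; it is an illustration of the arithmetic only, not a prescribing tool, and infusion rates, dilution, and monitoring remain as described above.

def quinine_quinidine_doses(weight_kg):
    """Weight-based doses from the regimens quoted above (mg of salt or base as
    labeled). Illustrative arithmetic only, not a prescribing aid."""
    return {
        "quinine loading (salt, over 4 h)": 20 * weight_kg,
        "quinine maintenance (salt, every 8 h)": 10 * weight_kg,
        "quinidine loading (base, over 1-2 h)": 6.25 * weight_kg,
        "quinidine continuous infusion (base, mg/min)": 0.0125 * weight_kg,
        "quinidine intermittent loading (base, over 4 h)": 15 * weight_kg,
        "quinidine intermittent maintenance (base, every 8 h)": 7.5 * weight_kg,
    }

for label, dose in quinine_quinidine_doses(60).items():
    print(f"{label}: {dose:g} mg")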
Newer studies demonstrate artemisinin derivative
superiority in both rapidity of parasite clearance and
fever defervescence, but they have not demonstrated
improved effect on mortality rates [2]. Currently, two
derivatives, artesunate and artemether, are the most
widely used due to efficacy and low cost. Artemisinin (qinghaosu) originated as a traditional treatment for fever and malaria in China. Artemisinin is a sesquiterpene lactone derived from the sweet wormwood, Artemisia annua. Artemisinin derivatives kill all
stages of the parasite within the erythrocyte and also
kill gametocytes. Artemisinin derivatives can also be
administered intramuscularly or rectally and have few
local or systemic adverse effects. Artesunate is given as
two 2.4 mg/kg doses intravenously 12 h apart on day 1,
and then is administered as 2.4 mg/kg daily for 6 days or
given orally if the patient is awake and able to swallow.
Artesunate is used in combination with Amodiaquine
10 mg/kg once a day for 3 days. These derivatives are not
fully licensed in many countries; however, intravenous
artesunate is available as an investigational new drug in
the USA for management of severe malaria.
A number of adjunctive treatments have been studied
in cerebral malaria. Fluid balance is critical. Children are
often hypovolemic and require fluid administration; however, there is no consensus with regard to the optimal type
and amount of fluid replacement. In patients with cerebral
malaria who have elevated intracranial pressure, maintenance of cerebral perfusion pressure is necessary, and it
has been suggested that fluid management be optimized;
however, no guidelines have been developed. Adults with
severe malaria may develop pulmonary edema and severe
renal impairment, in these instances fluid may need to be
restricted. Albumin has been studied, as it is suggested to
improve microcirculatory flow and treat hypovolemia.
Initial data from Phase II Clinical trials in children with
malaria suggest that 4% albumin reduces mortality compared with saline, particularly in those children with
coma. Albumin is currently under investigation in a large
study of children with sepsis and malaria. Other colloidal
agents, including hetastarch and dextran 70, are also currently under investigation.
Acetaminophen (paracetamol) may be used to reduce
fever. It remains unclear if reduction in core temperature
benefits cerebral consequences.
Phenobarbital sodium, phenytoin, or benzodiazepines
are utilized for seizure management. Benzodiazepines may
have reduced efficacy as malarial infection appears to
downregulate g-aminobutyric acid (GABA) receptors.
Both phenytoin and phenobarbital have been used successfully to terminate prolonged seizures. Prophylactic
anticonvulsant use has also been studied. Prophylactic
administration of a single dose of phenobarbital reduced
seizure frequency in studies of both Indian and Thai adults
and Kenyan children with cerebral malaria; however, it was
associated with an increased rate of death, probably due to
depression of hyperpnea that compensated for metabolic
acidosis in these unventilated patients.
Standard treatment regimens have been used to manage hypoglycemia. Both the administration of glucose
solutions and the administration of longer acting somatostatin analogs have been utilized successfully to manage
hypoglycemia, the latter in patients receiving quinine therapy. Theoretical concern has been raised as to whether the
correction of hypoglycemia in the presence of tissue hypoxia may worsen brain tissue acidosis.
A number of other adjunctive treatments have
been studied. None have shown clear-cut improvement
in clinical trials. Most were studied in conjunction with
Cerebral Malaria
quinine treatment; as such their efficacy in combination
with artemisinin derivatives remains undetermined.
None are recommended as standard management at
this time.
Corticosteroids were the initial agents studied in randomized controlled trials, as they were felt to have promise
in reducing intracranial pressure and inflammatory
response. Two randomized trials in Southeast Asian adults
and a smaller study in Indonesian children did not demonstrate any benefit. In fact, in one trial dexamethasone
demonstrated an increased rate of significant complications, including sepsis, gastrointestinal bleeding and
prolonged recovery time from coma.
Anti-inflammatory drugs, mannitol, urea, iron chelators (deferoxamine), low-molecular-weight dextran, heparin, pentoxifylline (reduces cytokine secretion, prevents rosetting, and reduces cytoadherence), hyperimmune globulin, dichloroacetate, and hyperbaric oxygen have shown either mixed results or no value. Monoclonal antibodies against TNF-α have been found to shorten fever duration but have shown no impact on mortality, and may actually increase morbidity from neurologic sequelae.
N-acetylcysteine, another antioxidant which improves
erythrocyte deformability and reduces TNF release, is currently under trial. Erythropoietin is also currently being
investigated, based on human and animal studies that
suggest a neuroprotective effect with reduction in inflammatory response and apoptosis in the brain [3, 4].
Blood transfusions are indicated for severe anemia,
with the amount and rapidity of transfusion different
for children and adults. In children, fresh whole blood
(10–20 ml/kg) is often transfused. In adults, small quantities of blood are administered over a more prolonged
period due to concerns about fluid overload. The role of
exchange transfusion in severe malaria is controversial.
Exchange transfusion has been utilized when the level of
parasitemia exceeds 10–20% of circulating erythrocytes.
The WHO currently recommends that individuals
with severe malaria receive exchange transfusion with
a parasitemia >20%. Exchange transfusion also allows
correction of severe anemia without the risk of fluid
overload. Exchange transfusion is expensive. No data
from well-conducted clinical trials demonstrate improved
outcomes with exchange transfusion.
Epidemiology
Malaria is felt to be the most deadly vector-borne disease
globally. An estimated 2.4 billion individuals live in
malaria-endemic areas worldwide, with 300–500 million
clinical episodes and approximately two million deaths
C
reported annually. Ten percent of admissions and 80% of
deaths are due to central nervous system involvement.
Plasmodium falciparum causes most cases of severe
malaria and approximately 35–43% of all cases of malaria
globally. More than 70% of falciparum malaria infections
occur in children living in sub-Saharan Africa, although
individuals of any age may become infected. Males and
females are equally affected, but malaria, especially
falciparum, can be devastating in pregnancy to both the
mother and fetus. Malaria is seen increasingly in nonendemic countries due to individuals traveling to endemic
areas for business and pleasure and infected individuals
emigrating or traveling from endemic areas. Occasionally,
malaria is seen in individuals who have traveled to an
endemic area more than one year previously, during
relapse in those previously infected, or in individuals in
non-endemic areas who have been bitten by local mosquito populations which have become infected after biting
a parasitemic individual, or in individuals who live near
airports. Malaria is infrequently transmitted congenitally,
in those who needle share, or in those who have received
blood transfusions or organ transplant. More than 1,500
cases are diagnosed in the USA each year, most of which
were acquired internationally.
Malaria is transmitted via the bite of an infected
female Anopheles spp mosquito obtaining a human blood
meal. Anopheles mosquitoes predominantly bite between
dusk and dawn. Malaria usually occurs below elevations of
1,000 m (3,282 ft). At risk areas include more than 100
countries, including portions of Central America, South
America, the Caribbean, the Middle East, sub-Saharan
Africa, Southeast Asia, the Indian Subcontinent and Oceania. Malaria, due to intense vector control efforts, ceased
to be endemic in the USA in 1947.
After malarial sporozoites are injected into the bloodstream by an infected mosquito, parasites develop in
an asymptomatic hepatic stage. Infected hepatocytes
burst, releasing merozoites which enter erythrocytes.
P. falciparum is able to infect a red cell during any
stage of its development. As such, P. falciparum causes asynchronous cycles of schizont lysis. This asynchronous release of merozoites, hemozoin, and other toxic
metabolites does not necessarily produce the classically
described cyclical paroxysms of fever, chills, and rigors.
In late stages of infection, infected erythrocytes adhere to
the capillary and venule endothelial cells, becoming
sequestered in many areas of the body. The brain appears
to be preferentially targeted. Parasites are metabolically
active, consuming glucose and producing increased
amounts of lactate via anaerobic glycolysis.
Current estimates suggest that nearly half of all
children admitted to the hospital with falciparum malaria
exhibit neurological signs and symptoms. In endemic
areas, adults and children develop cerebral disease in similar proportions. The incidence of cerebral malaria in
adults is higher in low and moderate transmission areas
and areas of varying endemicity than in hyper- and
holoendemic areas. In travelers, cerebral malaria occurs
in approximately 2.4% of those documented with
falciparum malaria infection.
The pathophysiology of cerebral malaria is not
completely understood, but appears to be multifactorial.
First, sequestration of parasitized red blood cells produces
mechanical clogging of the cerebral microvasculature.
Infected red blood cells develop parasite-mediated
changes in cytoadherent properties due to specific interaction between P. falciparum erythrocyte membrane
protein (PfEMP-1) and ligands on endothelial cells,
such as ICAM-1 or E-selectin. Parasitized cells and
nonparasitized red blood cells selectively adhere to each
other and to venule and capillary endothelium, termed
rosetting. Decreased deformability of infected cells
increases obstruction. Platelet microparticles also mediate
clumping. Obstruction of the microcirculation leads to a
critical reduction in oxygen supply and increase in
lactic acidosis locally. Hemozoin is found in cerebral
blood vessels on autopsy, suggesting that rupture of
sequestered infected erythrocytes may produce local
inflammation.
Parasite and host immune response also contribute
significantly to the pathophysiology of cerebral malaria.
P.falciparum infection and RBC lysis releases both parasite
toxins and host intracellular molecules. These are recognized by pattern recognition receptors on immune surveillance cells that promote the activation and release of
both pro-inflammatory and anti-inflammatory cytokines
from monocytes and neutrophils and upregulation of the
expression of adhesion molecules and metabolic changes
in endothelial cells. It is thought that this inflammatory
response is initially beneficial to the host by reducing
parasite growth and activating pathways to eliminate parasites and parasite and host toxins. At later stages,
uncontrolled, this inflammatory response causes host
damage directly and elimination pathways are inadequate
to remove generated toxins. Increased amounts of macrophage-released TNF-a, IL-1, IL-6, IL-10, and other proinflammatory cytokines have been documented in murine
models and patients with cerebral malaria. Several pediatric studies suggest an association between elevated levels
of IL-1 receptor antagonist and severe malaria, while high
levels of vascular endothelial growth factor have been
found to be protective against death in patients with
cerebral malaria. Nitric oxide (NO) has been suggested
as a key effector for TNF in malaria pathogenesis. Cytokines upregulate nitric oxide synthase in leukocytes, vascular smooth muscle, microglia, and brain endothelial
cells. One theory suggests that uncontrolled amounts of
nitric oxide diffuse easily through the injured blood brain
barrier. NO may change blood flow and decrease glutamate uptake, producing neuro-excitation. As a potent
inhibitor of synaptic neurotransmission, NO also reduces
the level of consciousness rapidly and reversibly, similar to
that caused by general anesthetics and alcohol. This would
explain reversible coma without residual neurological deficits. Apoptosis has been documented in the brainstems of
adults who died from cerebral malaria; however, the level
of caspase staining was not significantly higher than that
in control individuals without malaria.
The blood brain barrier is impaired in patients with
cerebral malaria and vascular permeability is increased.
T cells have been shown in murine models of cerebral
malaria to impair endothelial cell function by perforinmediated mechanisms leading to blood brain barrier leakage. Postmortem analysis of individuals with cerebral
malaria shows widespread disruption of vascular cell junctional proteins (occludin and vinculin). Diffuse brain swelling is demonstrated on imaging studies and autopsy
materials. This swelling is not associated with vasogenic
edema. Brain swelling is probably attributable to increased
blood volume that occurs secondary to sequestration and
increased cerebral blood flow. Greater than 80% of children with cerebral malaria develop elevated ICP and some
develop severe intracranial hypertension, and herniation
is more common in children. Intracranial hypertension is
not seen as frequently in adults.
Risk Factors
A number of factors are associated with poor outcome in
cerebral malaria. Historical factors include pretreatment
at home with antimalarials and chronic malnutrition.
Clinical factors include an abnormal respiratory pattern,
hyperpyrexia, hypoperfusion with cool extremities, tachycardia, jaundice, prolonged seizures, and the absence of
corneal reflexes or a coma score of 0 or 1. Laboratory
factors include hyperparasitemia (>500,000/µL), leukocytosis (>10,000/µL), hypoglycemia, abnormal AST, and
elevated lactate and urea levels.
Mortality risk is very high in children less than 5 years
of age. Young women during their first pregnancy are
at increased risk. Malaria complications in pregnancy
are thought to be mediated by placental sequestration
of plasmodia and pregnancy-associated anemia and
Cerebral Malaria
decreases in immune function. Fetal complications
include premature birth and low birth weight, severe
anemia and death. Nonimmune individuals are also at
increased risk. Individuals who live in malaria-endemic
areas develop partial immunity to infection after
repeated exposure; as such they experience less severe
infections.
Individuals with HIV coinfection are at increased risk
for worsened clinical outcomes in both infections. Malaria
and intestinal helminths often coexist in the same poor
populations globally; as such increasing attention is being
paid to the interaction between these organisms in
coinfected individuals. Data from some recent field studies suggest that helminth coinfection may play a protective
role in cerebral malaria, via Th2 response and the interaction between nitric oxide and the low affinity immunoglobulin E binding receptor CD23 [5].
Individuals with sickle cell trait (Hemoglobin S),
and, less so, with Hemoglobin C, thalassemias, and glucose-6-phosphate dehydrogenase (G6PD) deficiency are protected against infection and death from falciparum
malaria. Individuals with Hemoglobin E may be protected
against vivax malaria. Individuals of West African ancestry
lacking RBC Duffy antigen are completely protected
against P. vivax infection. Several TNF gene-promoter
polymorphisms have been shown to be associated with
an increased risk of cerebral malaria, neurological sequelae
and death. Plasma levels of inducible TNF receptor proteins have been suggested as potential biomarkers of cerebral malaria severity and mortality risk.
Evaluation and Assessment
Almost all patients will have fever, rigors, and chills.
Altered sensorium may be present initially or may develop
over the course of 24–72 h. Coma usually develops rapidly
in children, often after seizure activity. If seizure activity
has occurred, unresponsiveness should persist for more than 30 min to 1 h after the seizure ends to suggest the diagnosis of cerebral malaria rather than a postictal state. Approximately 15–20% of adults demonstrate
seizure activity. Most seizures appear generalized, but on
EEG many are documented to have a focal origin. In
adults, coma tends to develop more slowly and may not
be associated with seizure activity. Mild neck stiffness may
be present, but true meningismus is usually absent. Photophobia is rare. Malarial retinopathy has been demonstrated to be more specific than any other clinical or
laboratory feature in distinguishing coma due to malaria
from other etiologies. Malarial retinopathy consists of
vessel changes, retinal pallor, hemorrhages, and less commonly papilledema. Retinal hemorrhages occur in
approximately 15% of cases and may have a white center.
Pupils are normally reactive. Transient dysconjugate gaze
may be seen. Motor examination usually demonstrates
symmetrical upper motor neuron dysfunction, although
muscle tone may be decreased. Bilateral extensor plantar
reflexes may be seen in comatose patients. Pout reflex,
bruxism, jaw spasm, opisthotonos, and decorticate and
decerebrate posturing may be present, more commonly in
children. Corneal reflexes are preserved except in deep
coma. Patients often exhibit a change in diurnal rhythm,
with excessive sleepiness during the day and difficulty
sleeping at night. Patients may exhibit somnambulism.
The criterion standard diagnostic test for malaria is the
microscopic examination of Giemsa-stained blood smears
by an appropriately trained individual, including a thick smear to determine the level of parasitemia and a thin
smear to speciate the organism(s) present. Three negative
sets of smears at 8–12 h intervals are required to rule out
malaria. More than one species of malarial organism may
be present. Microhematocrit centrifugation and Fluorescent dye Quantitative Buffy Coat staining may also be
utilized; however, these do not allow speciation.
Rapid diagnostic testing (RDT) for P. falciparum and P. vivax has
assumed a more prominent role in the last several years.
RDTs are based on antibody recognition of histidine-rich
protein 2 (HRP-2) parasite antigens. In most cases, they
have been found to be as specific as microscopy, although
they are not as reliable when parasite levels are below 100 parasites/μL of blood. A false positive may occur up to 2 weeks post treatment because of the persistence of circulating antigen after parasite death. A list of currently available RDTs, and technical information, can be found at the WHO/WPRO site Malaria Rapid Diagnostic Tests
(http://www.wpro.who.int/sites/rdt).
PCR for parasite mRNA or DNA is specific, and more
sensitive than microscopy; as such it will detect organisms
at very low levels of parasitemia. It is more expensive,
requires specific equipment, and does not provide an
estimate of the parasite load.
Globally, a number of factors contribute to both misdiagnosis and overdiagnosis of cerebral malaria: a lack of diagnostic materials and of trained technicians to perform blood smears and rapid testing, the low sensitivity of rapid tests in patients with low-level parasitemia, the inability of some treatment centers and hospitals to exclude other diagnoses, the lack of severe symptoms in some individuals in endemic areas, the low specificity of the clinical features of the disease, and the concomitant occurrence of other organ dysfunctions and metabolic changes present in severe malaria. Overdiagnosis leads to unnecessary treatment with potentially dangerous drugs,
insufficient investigation of other potentially deadly
causes, high mortality rates, and the development of
resistance.
Patients will have variable degrees of anemia and thrombocytopenia, and may have jaundice, hepatosplenomegaly,
and renal dysfunction. It is important to determine if the
patient is pregnant.
A number of factors may contribute to neurological
symptoms and signs in malaria. Coinfection may be present, and other causes of fever, such as bacterial meningitis/meningoencephalitis and viral encephalitis, must be
considered and ruled out with lumbar puncture and CSF
analysis. In malaria, CSF opening pressure is normal to
elevated, the fluid is clear, protein and lactate levels are elevated to varying degrees, and a mild pleocytosis with a white blood cell count of less than 10/μL may be present. Fever alone may cause impairment of consciousness, delirium, and
febrile seizures. Hypoglycemia due to cerebral infection
or the use of antimalarials such as quinine may also
produce altered mental status, neurological deficits or
seizure activity. Hypoglycemia is most common in very
young children and pregnant patients. Antimalarial drugs,
including chloroquine, quinine, mefloquine, and halofantrine, may cause neuropsychiatric symptoms, including altered behavior, hallucinations, psychosis, delirium, and seizures. Hyponatremia, whether due to repeated vomiting or to injudicious fluid administration and seen particularly in elderly patients, may lead to altered mental status
and seizures. Severe anemia and hypoxemia may lead to
altered mental status. Focal neurological deficits are rare in
falciparum malaria and should suggest another cause.
An EEG may be helpful in delineating ongoing seizure
activity in the comatose individual. EEG may show
a number of nonspecific abnormalities.
Neuroimaging may demonstrate edema, cortical
infarcts, hemorrhage, and white matter changes; however,
these changes are non-diagnostic for cerebral malaria.
MRI may show hemorrhagic lesions and infarction.
After-care
Thin and thick smears should be obtained weekly for at
least one month after the patient is discharged to ensure
resolution of parasitemia. Individuals with residual neurological deficits should be followed to document resolution or to provide additional rehabilitation in those
with permanent disability.
Prognosis
Cerebral malaria carries a mortality of approximately 15%
in children and 20% in adults. A common cause of death is
acute respiratory arrest, which may be due to brain stem
herniation. Prolonged duration and deeper level of coma,
recurrent episodes of hypoglycemia, severe anemia, renal
dysfunction, repeated seizures, and higher cerebrospinal fluid
lactate levels are predictors of a worsened prognosis.
Cerebellar ataxia may occur without impaired consciousness, appearing up to 3–4 weeks after an attack of malaria. It usually recovers completely after
1–2 weeks. Other late neurological complications include
the post-malaria neurological syndrome (PMNS), acute
inflammatory demyelinating polyneuropathy (AIDP) and
acute disseminated encephalomyelitis (ADEM). Malaria,
and certain antimalarials such as mefloquine, can exacerbate preexisting psychiatric illness. Depression, paranoia,
delusions, and personality changes also may develop during convalescence from cases of otherwise uncomplicated
malaria. The prevalence of neuropsychiatric deficits ranges
between 6% and 29% at the time of discharge from the
hospital. Residual deficits are unusual in adults (<3%).
Neurologic defects may improve rapidly over weeks to
months or may occasionally persist following cerebral
malaria, especially in children (10%). Individuals may experience long-term cognitive impairments in speech, language, memory, and attention, as well as ataxias, palsies, deafness, and blindness. In one prospective
study of Ugandan children aged 5–12 years, cognitive
impairment, most prominently in attention, was present
in 26.3% of children with cerebral malaria at 2-year
follow-up [6].
Economics
Cerebral malaria is one of the most life-threatening complications of malaria, with an annual incidence of 1.12
cases per 1,000 children and a 7–18.6% mortality rate, with death often occurring in the initial 24 h despite rapid treatment. It accounts for 10% of pediatric admissions in some sub-Saharan hospitals. In 2004, the Disease Control Priorities
in Developing Countries project estimated the global burden of malaria, expressed in disability adjusted life years
(DALYs), as 42,280,000.
References
1. Hanson J, Lee SJ, Mohanty S et al (2010) A simple score to predict the outcome of severe malaria in adults. Clin Infect Dis 50(1):679–685
2. Dondorp A, Nosten F, Stepniewska K et al (2005) Artesunate versus quinine for treatment of severe falciparum malaria: a randomized trial. Lancet 366:717–725
3. Enwere GA (2005) A review of the quality of randomized clinical trials of adjunctive therapy for the treatment of cerebral malaria. Trop Med Int Health 10:1171–1175
4. Mishra SJ, Newton CRJC (2009) Diagnosis and management of the neurological complications of falciparum malaria. Nat Rev Neurol 5:189–198
5. Basavaraju SV, Schantz P (2006) Soil-transmitted helminths and Plasmodium falciparum malaria: epidemiology, clinical manifestations, and the role of nitric oxide in malaria and geo-helminth coinfection. Do worms have a protective role in P. falciparum infection? Mt Sinai J Med 73(8):1098–1105
6. John CC, Bangirana P, Byarugaba J et al (2008) Cerebral malaria in children is associated with long-term cognitive impairment. Pediatrics 122(1):e92–e99
Cerebral Perfusion Pressure
SAMUEL WALLER1, KATHRYN M. BEAUCHAMP2
1Department of Neurological Surgery, University of Colorado School of Medicine, Denver, CO, USA
2Department of Neurosurgery, Denver Health Medical Center, University of Colorado School of Medicine, Denver, CO, USA
Synonyms
CPP
Definition
Cerebral perfusion pressure (CPP) is the pressure at which the brain receives blood flow. Conceptually, CPP is the pressure with which blood can force its way into the closed box of the cranial vault and overcome the vault's intrinsic pressure. Clinically, CPP is derived as the difference between the mean arterial pressure and the intracranial pressure (CPP = MAP – ICP). CPP cannot be considered apart from cerebral blood flow (CBF), since the purpose of the perfusion pressure is to deliver the blood flow that tissue requires to maintain normal physiologic function. Because CBF is difficult to measure or derive clinically without specialized equipment, CPP is often used in its place.
Clinically, CBF is the CPP divided by the cerebral vascular resistance (CVR): CBF = CPP/CVR. It is well known that CBF rates of less than 20 mL per 100 g tissue/min lead to ischemia and, if prolonged, to cell death [1]. To conceptualize cerebral blood flow, we must therefore also consider cerebral vascular resistance.
Cerebral vascular resistance is affected by the patient's PaCO2 and CPP. In the PaCO2 range of 20–80 mmHg, there is a linear increase in CBF for an increase in PaCO2. Similarly, there is a linear relationship between CVR and CPP over a CPP range of 50–150 mmHg. Together these responses are called cerebral autoregulation and exist in order to maintain a near-constant cerebral blood flow.
Normal CPP is >50 mmHg. The critical threshold below which CBF diminishes and ischemia is produced varies among individuals but is normally in the range of 50–60 mmHg; hence, it is a typical goal to maintain CPP >60 mmHg [2].
Clinical Relevance
Typically, parameters such as cerebral blood flow and cerebral perfusion pressure matter in settings where an injury such as a stroke, hematoma, intracranial mass lesion, or hydrocephalus has occurred. In these settings, neurologic deterioration may be impending, and interventions must be implemented to preserve brain tissue through preservation of cerebral blood flow. Because CBF remains difficult to assess at the bedside to guide therapy, CPP, which can be derived relatively easily by monitoring the patient's intracranial pressure, is used instead.
Intracranial pressure monitoring is generally indicated in any patient whose Glasgow Coma Scale (GCS) score is <9 and who has an abnormal brain imaging study or who has risk factors for intracranial hypertension (age >40 years, SBP <90 mmHg, or decerebrate/decorticate posturing
on motor examination). Other indications for intracranial pressure monitoring include patients with multiple
system injuries requiring therapies that may be deleterious to cerebral blood flow and intracranial pressure
such as high levels of positive end-expiratory pressure
ventilator settings, high volumes of fluid required for
resuscitation, or the need for heavy sedation. Relative
contraindications to intracranial pressure monitoring
include: (1) the awake patient, as they have a neurologic
exam to follow, (2) coagulopathic patients in whom the
risk of placing a monitor and causing an acute intracranial mass lesion (hemorrhage) is high, and (3) patients
with an exam consistent with brain death who do not
respond quickly to empiric therapies to lower intracranial pressure.
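As an illustration only, the indication criteria just described can be restated as a small decision helper; the function and its thresholds below simply paraphrase the text above and are a hypothetical sketch, not an implementation of any published guideline.

```python
def icp_monitoring_indicated(gcs: int,
                             abnormal_ct: bool,
                             age_years: int,
                             sbp_mmhg: int,
                             abnormal_posturing: bool) -> bool:
    """Sketch of the ICP-monitoring indication described above.

    Monitoring is generally indicated when GCS < 9 and either the brain
    imaging study is abnormal or risk factors for intracranial hypertension
    are present (age > 40 years, SBP < 90 mmHg, or decerebrate/decorticate
    posturing on motor examination).
    """
    if gcs >= 9:
        return False
    risk_factors = age_years > 40 or sbp_mmhg < 90 or abnormal_posturing
    return abnormal_ct or risk_factors


# Example: comatose patient (GCS 6) with a normal CT but hypotension.
print(icp_monitoring_indicated(gcs=6, abnormal_ct=False,
                               age_years=35, sbp_mmhg=85,
                               abnormal_posturing=False))  # True
```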
Types of intracranial pressure monitoring include:
intraventricular catheters, intraparenchymal monitors,
subarachnoid screws, subdural monitors, epidural monitors, or, in infants, fontanometry. Of these, intraventricular
catheters and intraparenchymal monitors are the most
common.
Means of treating intracranial pressure elevations in order to preserve cerebral perfusion pressure include the following [2]:
1. Elevate the head of the bed to 30–45° in order to increase venous drainage of the brain
2. Keep the neck in line in order to prevent restriction of jugular venous outflow
3. Avoid tight tracheostomy or endotracheal tube taping in order to prevent restriction of jugular venous outflow
4. Avoid hypotension (SBP <90 mmHg) by ensuring intravascular volume is normalized, using pressors if needed; this ensures cerebral perfusion is not compromised
5. Control hypertension in order to prevent cerebrovascular constriction
6. Avoid hypoxemia (pO2 <60 mmHg) in order to prevent further ischemic injury and cerebrovascular vasodilation, which increases intracranial pressure
7. Ventilate the patient to normocarbia; hyperventilation is useful as an adjunct for short-term control of intracranial hypertension but long term can worsen ischemic injuries
8. Light sedation
More aggressive measures to control elevations in intracranial pressure include [2]:
1. Heavy sedation and/or paralysis, which reduces sympathetic tone and the hypertension caused by movement and tensing of the abdominal vasculature
2. Drainage of 3–5 mL of cerebrospinal fluid (if an intraventricular catheter is present), which reduces intracranial volume and therefore the related pressure
3. Mannitol or similar osmotic therapy in order to draw fluid out of the brain parenchyma and possibly improve blood rheology
4. Hypertonic saline, given as a bolus of 10–20 mL of 23.4% saline; when the serum osmolarity is less than 320, some patients refractory to osmotic diuretics will respond to hypertonic saline
5. Hyperventilation to a pCO2 near 30 mmHg, which decreases cerebral blood flow and the related intracranial pressure
6. Continued refractory intracranial pressure may require more aggressive therapy and should prompt one to consider:
a. A noncontrast head CT in order to ensure there is not a new surgical intracranial lesion
b. Barbiturate coma (thiopental or pentobarbital), which sedates, treats seizures, and reduces cerebral metabolism and thereby cerebral blood flow without risking further ischemic injury; one must be cautious of the myocardial depressant effect of barbiturates
c. Decompressive craniectomy, which opens the intracranial vault and physically creates more volume for the brain to expand into
d. Hypothermia, which again reduces cerebral metabolism but has multiple side effects including increased risk for infections, decline in cardiac index, pancreatitis, elevated creatinine clearance, and shivering, which can cause elevations in intracranial pressure [3]
Goals of treatment of intracranial pressure include the
following:
1. Intracranial pressure <20 mmHg [4]
2. Cerebral perfusion pressure >60 mmHg
The critical cerebral perfusion pressure threshold varies by individual but lies in the range of 50–60 mmHg; below this, ischemic injury is encountered. Therefore, the goal of treatment should be to maintain a CPP of 60 mmHg or greater, thereby ensuring that dips in CPP to levels where ischemia occurs are avoided.
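As a purely illustrative sketch of the bedside arithmetic defined earlier (CPP = MAP – ICP) and of the treatment goals just listed, one might write the following; the helper names are hypothetical and not part of the source.

```python
def cerebral_perfusion_pressure(map_mmhg: float, icp_mmhg: float) -> float:
    """CPP = MAP - ICP, as defined above (all values in mmHg)."""
    return map_mmhg - icp_mmhg


def goals_met(map_mmhg: float, icp_mmhg: float) -> dict:
    """Check the treatment goals quoted above: ICP < 20 mmHg, CPP > 60 mmHg."""
    cpp = cerebral_perfusion_pressure(map_mmhg, icp_mmhg)
    return {"cpp_mmhg": cpp,
            "icp_below_20": icp_mmhg < 20,
            "cpp_above_60": cpp > 60}


# Example: MAP 90 mmHg with ICP 25 mmHg gives CPP 65 mmHg,
# so the CPP goal is met but the ICP goal is not.
print(goals_met(90, 25))
```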
In summary, CPP is the clinical measure by which physicians can ensure that the brain receives the blood flow needed to prevent ischemic injury. This is accomplished by measurement and treatment of intracranial pressure and by monitoring of hemodynamics.
References
1. Astrup J, Siesjo BK, Symon L (1981) Thresholds in cerebral ischemia – the ischemic penumbra. Stroke 12:723–725
2. Greenberg M (2006) Handbook of neurosurgery, 6th edn. Thieme Medical Publishers, New York
3. Bratton SL, Chesnut RM, Ghajar J et al (2007a) Guidelines for the management of severe traumatic brain injury. J Neurotrauma 24(Suppl 1):31–36
4. Bratton SL, Chesnut RM, Ghajar J et al (2007b) Guidelines for the management of severe traumatic brain injury. J Neurotrauma 24(Suppl 1):65–68
Cerebral Perfusion Pressure (CPP)
Defined as the difference between the mean arterial pressure (MAP) and the intracranial pressure (ICP); CPP =
MAP – ICP. The target CPP in the setting of severe TBI is
greater than or equal to 60 mmHg.
Cerebral Trauma
▶ Traumatic Brain Injury, Initial Management
Cerebrospinal Fluid Pressure Monitoring
▶ ICP Monitoring
Cervical Rib Syndrome
▶ Thoracic Outlet
Cervicobrachial Syndrome
▶ Thoracic Outlet
Change
SONIA LABEAU1, DOMINIQUE VANDIJCK1, STIJN BLOT2
1Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
2Department of General Internal Medicine & Infectious Diseases, Ghent University Hospital, Ghent, Belgium
Synonyms
Catheter and line/tubing/administration sets change; Replacement of vascular access devices and line/tubing/administration sets
Definition
Catheter change is defined as the replacement of a catheter in situ and its administration set(s) by a new catheter and administration set(s).
Administration sets are defined as the area from the spike of the tubing entering the fluid container to the hub of the vascular access device. However, a short extension tube might be connected to the catheter and might be considered a portion of the catheter to facilitate aseptic technique when changing administration sets [1].
Rationale
Bloodstream infections associated with the insertion and maintenance of vascular access devices are among the most dangerous complications associated with health care. They have been shown to be associated with increased patient morbidity and mortality and with prolonged intensive care unit and hospital stays. Moreover, catheter-related bloodstream infection is associated with high costs of care [2]. The method and frequency of changing catheters and catheter lines can influence the risk of infection.
Methods of Central Venous Catheter Replacement
Central venous catheters can be replaced by percutaneously inserting a new catheter at another body site or by placing a new catheter over a guide wire at the existing site.
Percutaneous Insertion
A catheter's insertion site directly influences the subsequent risk for infection. The density of skin flora at the insertion site is a major risk factor. After insertion, certain sites are easier to keep clean and dry.
Catheters inserted into an internal jugular vein are associated with a higher risk for infection than those inserted into a subclavian vein. For infection control purposes, the subclavian site is generally recommended. However, this recommendation must be balanced against individual patient-related and noninfectious issues such as patient comfort and mobility and the risk of mechanical complications. In adults, it is strongly recommended to avoid use of the femoral vein for central venous access because of a greater risk of infection and deep venous thrombosis. In children, this increased infection risk has not been demonstrated [3].
Other aspects of catheter insertion, such as the use of maximal sterile barriers, skin antisepsis, use of a checklist and insertion cart, selection of catheter type, and technical insertion issues, are beyond the scope of the current procedure.
Guide Wire Insertion
Guide wire insertion has become an established method to replace a malfunctioning catheter or to exchange a pulmonary artery catheter for a central venous device when invasive monitoring has become superfluous.
The technique offers better patient comfort and causes
a significantly lower rate of mechanical complications as
compared to percutaneous insertion at a new body
site [2].
Guide wire-assisted catheter exchange to replace
a malfunctioning catheter or to exchange an existing catheter is only recommended in the absence of evidence of
infection at the catheter site or proven catheter-related
bloodstream infection. Suspicion of catheter infection
without evidence of infection at the catheter site should
lead to removal of the catheter in situ and insertion of
a new catheter over a guide wire; if subsequent tests demonstrate catheter-related infection, the newly inserted
catheter should be removed and, if still required, another
new catheter inserted at a different body site.
In patients with catheter-related infection, replacement of catheters over a guide wire is not recommended.
If continued vascular access is required in these patients,
the affected catheter should be removed and replaced
with another catheter at a different insertion site [2].
The above recommendations also pertain to the
administration sets of pulmonary artery catheters [1]. For peripheral arterial catheters, transducers should be replaced
at 96 h intervals. Continuous flush devices and intravenous tubing are to be replaced at the time the transducer is
replaced [1].
Pre-existing Condition
This procedure applies to adult patients with a central
venous, pulmonary arterial or peripheral arterial intravascular device in place.
Application
The application described below only pertains to the
change in administration sets of central venous catheters.
Change in tubing of other vascular access devices can be
extrapolated from this description.
The procedure pertaining to the insertion of a catheter
percutaneously or by guide wire assistance is beyond the
scope of this procedure.
Frequency of Catheter Replacement
For central venous catheters, including peripherally
inserted central catheters and hemodialysis catheters, routine catheter replacement without clinical indication has
been shown not to reduce the rate of catheter colonization,
nor the rate of catheter-related bloodstream infection [2].
The most recent SHEA/IDSA evidence-based recommendations to prevent central line-associated bloodstream
infections in acute care hospitals strongly recommend
catheter replacement on an as-needed basis only; routine catheter replacement, whether percutaneous or over a guide wire, is not recommended [3].
In adults, peripheral artery catheters should not be
replaced routinely to prevent catheter-related infection
[1, 3], while in pediatric patients no recommendation
for the frequency of catheter replacement is currently
available [1]. Similarly, it is recommended not to replace
pulmonary artery catheters to prevent catheter-related
infection [1].
Frequency of Replacing Intravenous Tubing
and Add-On Devices
For central venous catheters, intravenous sets not used for
the administration of blood, blood products, or lipids
should be replaced at intervals not longer than 96 h [3].
All tubing used to administer blood products or lipid
emulsions should be replaced within 24 h of initiating
the infusion [1, 4]. Moreover, all fluid administration
tubing and connectors should be replaced when the central venous access device is replaced [2].
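As an illustration only, the replacement intervals just described (no longer than 96 h for standard administration sets; within 24 h for tubing used for blood, blood products, or lipid emulsions) can be expressed as a small scheduling helper; the function and set-type labels below are hypothetical and merely restate the recommendations above.

```python
from datetime import datetime, timedelta

# Replacement intervals restated from the recommendations above (illustrative only).
REPLACEMENT_INTERVALS = {
    "standard": timedelta(hours=96),        # sets not used for blood, blood products, or lipids
    "blood_or_lipid": timedelta(hours=24),  # sets used for blood, blood products, or lipid emulsions
}


def next_set_change(started: datetime, set_type: str) -> datetime:
    """Return the latest acceptable replacement time for an administration set."""
    return started + REPLACEMENT_INTERVALS[set_type]


# Example: a lipid infusion set hung at 08:00 should be replaced within 24 h.
print(next_set_change(datetime(2011, 5, 1, 8, 0), "blood_or_lipid"))
```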
Data Collection
Collect patient and catheter data.
Inform about the need to continue catheterization.
Preparation of Work Area
Restrict activities around bed.
Assure patient privacy and safety.
Make room at the bedside area.
Ensure good visibility.
Adjust the bed height.
Preparation of Material
Clean catheter cart surface.
Ensure all needed material is present.
Apply appropriate hand hygiene.
Open sterile field and dressing packs and prepare using
aseptic technique.
Hang the new intravenous solution, as ordered by the physician, within reach.
Spike new solution bag with new administration set,
protecting distal end from contamination.
Prime intravenous line with appropriate solution to
remove all air and clamp administration set with roller
clamp.
Preparation of Patient
If possible, explain procedure to patient.
Assist/place the patient into a backrest position that is both
comfortable and convenient for the procedure.
Clear space around the insertion site of clothes and
blankets.
Procedure
Take cart to bedside.
Apply appropriate hand hygiene and put on nonsterile
gloves.
Place bed protection.
Loosen and remove dressing.
Observe dressing.
Remove gloves and discard gloves and dressing.
Apply appropriate hand hygiene.
Observe and inspect catheter site.
Take culture, if appropriate.
Assess which type of dressing to use.
Peel open dressing packet and open sterile field.
Apply aseptic technique by no-touch technique.
In case of dried blood or drainage at the insertion site,
cleanse with sterile NaCl 0.9%.
Disinfect the insertion site with a 2% chlorhexidine-based
solution.
Allow the insertion site to air dry.
On the catheter tubing being changed, stop the infusion
pump if applicable, and clamp the intravenous line
with the roller clamp.
Put on sterile gloves.
Thoroughly swab catheter pigtail and line being changed 5
cm on both sides of port connection with 2% chlorhexidine-based solution and allow to air dry.
According to the type of catheter in situ, clamp off the line
of the catheter pigtail being changed with the blue slide
clamp or close the catheter lock.
Disconnect cleaned IV line from cleaned catheter pigtail.
Connect new line to catheter pigtail.
Remove blue slide clamp from pigtail or open catheter
lock.
Observe for adequate infusion flow.
Observe for leakage or blood back up from catheter pigtail
connection site.
Apply appropriate dressing using aseptic technique and
fix, avoiding traction and pressure, and bearing
patient comfort and mobility in mind.
Place new administration set and solution into infusion
pump.
Apply appropriate hand hygiene.
Post-Procedure Care
Make the patient comfortable.
Label IV administration set with time and date of change.
Adjust the bed height.
Remove trolley from bedside area.
Dispose of waste according to the institutional
instructions.
Clean the cart surfaces and dry well.
Documentation
Document the lines that have been changed and site
assessment in medical record and/or flow sheet.
Record date and time of next line change on flow chart.
Inform physician of signs of infection.
References
1. O'Grady NP, Alexander M, Dellinger EP, Gerberding JL, Heard SO, Maki DG, Masur H, McCormick RD, Mermel LA, Pearson ML, Raad II, Randolph A, Weinstein RA (2002) Guidelines for the prevention of intravascular catheter-related infections. MMWR 51:1–29
2. Pratt RJ, Pellowe CM, Wilson JA, Loveday HP, Harper PJ, Jones S, McDougall C, Wilcox MH (2007) EPIC2: national evidence-based guidelines for preventing healthcare-associated infections in NHS hospitals in England. J Hosp Infect 65:S1–S64
3. Marschall J, Mermel LA, Classen D, Arias KM, Podgorny K, Anderson DJ, Burstin H, Calfee DP, Coffin SE, Dubberke ER, Fraser V, Gerding DN, Griffin FA, Gross P, Kaye KS, Klompas M, Lo E, Nicolle L, Pegues DA, Perl TM, Saint S, Salgado CD, Weinstein RA, Wise R, Yokoe DS (2008) Strategies to prevent central line-associated bloodstream infections in acute care hospitals. Infect Control Hosp Epidemiol 29:S22–S30
4. Labeau S, Vandijck D, Lizy C, Piette A, Verschraegen G, Vogelaers D, Blot S (2009) Replacement of administration sets used to administer blood, blood products, or lipid emulsions for the prevention of central line-associated bloodstream infection. Infect Control Hosp Epidemiol 30:494
Chelator
A chelator is a chemical compound capable of sequestering a substrate atom, often a metal, via two or more
chemical bonds.
Chemical and Physical Forces
These forces are involved in adsorption and include Van der Waals forces generated by atomic and molecular interactions, ionic bonds generated by electrostatic forces, and hydrophobic bonds.
Chest Bleeding
▶ Hemothorax
Chest Compression (CC)
▶ Cardiopulmonary Resuscitation
Chest Discomfort
▶ Chest Pain: Differential Diagnosis
Chest Infection
▶ Mediastinitis, Postoperative
Chest Pain: Differential Diagnosis
JOHN TOBIAS NAGURNEY
Department of Emergency Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
Synonyms
Ache; Chest discomfort; Heaviness
Definition
As the name implies, the term "chest pain" refers to "pain," an uncomfortable or unpleasant body sensation that a patient experiences in the "chest" area. The chest is the area of the body located between the neck and the abdomen, defined more formally as the area below the clavicles but above the inferior borders of the rib cage. It contains the lungs, the heart, and part of the aorta. The walls of the chest are supported by the dorsal vertebrae, the ribs, and the sternum. This definition is a good starting point for thinking about the diseases that cause pain in this area, but it is somewhat misleading for two reasons. First, the "pain" may represent anything from little more than a vague discomfort or sensation of heaviness to the more classic description of an "elephant sitting on my chest." Second, while the chest as an anatomic entity is clearly defined, many of the diseases that cause "chest pain" can present with pain outside the chest as well; an example is the shoulder or epigastric pain presentation of ▶ acute coronary syndrome. Moreover, many of these diseases can present with non-pain symptoms: unexplained shortness of breath is a common presentation of acute coronary syndrome in the elderly and of ▶ pulmonary embolism in patients of all ages. As a practical issue, providers learn the differential diagnosis of all of the diseases that can present with chest pain and then learn the alternate presentations of these diseases.
Pathophysiology
Afferent visceral nerve fibers from the intrathoracic organs traverse sympathetic ganglia en route to the thoracic dorsal nerve roots and dorsal ganglia. Somatic afferent nerve fibers synapse in the same dorsal ganglia. This complex neurologic configuration leads to visceral pain that is often poorly localized, vague, and capable of radiating to other anatomic areas.
Differential Diagnosis
The ability of the clinician to distinguish among the many diseases presenting with chest pain is critically important. Chest pain is the second most common presenting
complaint among emergency department (ED) patients in
the USA. The diagnoses of ▶ acute myocardial infarction
or unstable angina pectoris are missed in EDs in 2–8% of
patients. The missed diagnosis of myocardial infarction
represents an estimated 20% of total dollars spent for
medical malpractice claims. For these and other reasons, most ED providers are relatively conservative in their evaluation and admission practices for
patients who present with chest pain. As a result, it is
estimated that only a small percent of patients admitted
to an observation or in-patient service to rule-out acute
coronary syndromes turn out to have that disease.
Chest pain represents a series of syndromes that are
both common and difficult to diagnose. The difficulty in
diagnosis occurs for a number of reasons. The first is that
over 30 diseases or syndromes are scattered among the six
different organ systems (lungs and pleura, heart and great
vessels, gastroesophageal, nervous system, musculoskeletal system, and others, e.g., psychiatric) that are
represented in the chest (Table 1) [1].
Chest Pain: Differential Diagnosis. Table 1 Diseases presenting with chest pain, by organ system, listed from emergent through urgent (but not critical) to nonemergent diagnoses (Adapted from [1])
Cardiovascular: acute MI, unstable angina, aortic dissection, aortic aneurysm, cardiac tamponade, pericarditis, myocarditis, severe aortic stenosis, cardiomyopathy, mitral valve prolapse, noncritical valvular disease
Pulmonary: pulmonary embolus, tension pneumothorax, pneumothorax, mediastinitis, pneumonia, pleuritis, cancer, pneumomediastinum, bronchitis
Gastrointestinal: esophageal rupture, cholecystitis, acute pancreatitis, esophageal spasm, esophageal reflux, peptic ulcer disease, biliary colic, hiatal hernia
Musculoskeletal: muscle strain, rib fracture, arthritis, tumor, costochondritis
Neurological: spinal root compression, thoracic outlet syndrome
Other: herpes zoster, post-herpetic neuralgia, hyperventilation, panic attack
A second reason is that many of the diseases which present with chest pain are not easily identified by a single highly sensitive and specific diagnostic study or
procedure. For example, while an ▶ aortic dissection can usually be identified or excluded by computed tomography, magnetic resonance imaging, a transesophageal echocardiogram, or an angiogram, pain of musculoskeletal origin is usually
a diagnosis of exclusion. There is no single diagnostic
test that definitively diagnoses an acute coronary syndrome. This diagnosis is made by a combination of
the clinical presentation, electrocardiograms, cardiac
biomarkers, and usually an anatomic or physiologic
risk stratification test. When the data elements conflict,
the definitive final diagnosis often remains in doubt
and the term “non-cardiac chest pain” becomes the final
diagnosis [2].
Given the fact that diagnosing the cause of chest pain in
individual patients can be extremely challenging, the question becomes: what is the most reasonable approach when
caring for such a patient? The establishment of the differential diagnosis is largely achieved through a consideration of
the patient’s demographic data (age and sex), their past
medical history, a consideration of risk factors for specific
diseases, and the nuances of their chest pain story. The
context of the chest pain is important. Chest pain that occurs
after trauma has a different differential diagnosis than
nontraumatic chest pain. Typically, the provider begins by
addressing diseases in the differential diagnosis that, if
undiagnosed and untreated, can potentially lead to death
within minutes. Classically, these diseases include aortic
dissection, massive pulmonary embolism, and acute coronary syndrome. Some authors include tension pneumothorax or pericardial tamponade as well [3]. A second set of
diseases can cause potential mortality and significant morbidity, although usually less acutely than this highly lethal
group. Examples of diseases in this second category
include ▶ pneumonia and multiple rib fractures. The
third set of diseases, far more common than the others,
include diseases that cause pain, anxiety, and morbidity to
patients but usually do not result in loss of life or limb.
Examples of diseases in this category include gastroesophageal reflux disease, musculoskeletal chest pain, viral
pleurodynia, and herpes zoster. Typically, the patient
remains under relatively intensive observation and monitoring until potentially life-threatening diseases are
excluded. Once this has been accomplished, potential
other diagnoses can be pursued during that hospitalization or as an outpatient. In summary, the primary goal in
caring for a patient presenting with chest pain is to perform a brief but accurate risk stratification so that life-threatening diseases can be intervened upon [4].
Most diseases which present with chest pain occur
within relatively characteristic age and sex strata.
For example, acute coronary syndrome becomes more
common with advancing age and is rarely seen in
premenopausal women. Conversely, pulmonary embolism is commonly seen in patients of all ages. Young
women are at risk because many have risk factors such as
pregnancy. After consideration of age and sex, most providers consider the patient’s risk factors for particular
diseases. For acute coronary syndrome, aortic dissection,
and pulmonary embolism these risk factors are relatively
well defined. Unfortunately, none are hard-and-fast. For
example, approximately 10–20% of patients presenting
with acute myocardial infarction lack all five of the classic
risk factors for that disease. For many diseases, a history of
having had that disease previously represents an important risk factor. In the context of a presentation with acute
chest pain, patients with a history of myocardial infarction
or pulmonary embolism are more likely to have these
respective diseases when compared to patients without
such a history. Finally, establishing the differential diagnosis often requires that providers obtain an accurate and
complete history of the patient’s chest pain story. Unfortunately, many elements of the chest pain story lack sensitivity, specificity, or both [5, 6]. Restated, the chest pain
story allows the clinician to establish prior probabilities to
be refined by diagnostic testing but these probabilities are
at best approximate (Table 2).
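To make the notion of prior probabilities refined by testing concrete, the following minimal sketch, which is not from the source and uses purely illustrative numbers, shows how a likelihood ratio converts a pretest probability into a posttest probability.

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability via odds.

    Pretest odds = p / (1 - p); posttest odds = pretest odds * LR;
    posttest probability = posttest odds / (1 + posttest odds).
    """
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)


# Illustrative numbers only: a 10% pretest probability combined with a test
# result carrying a likelihood ratio of 5 yields roughly a 36% posttest probability.
print(round(posttest_probability(0.10, 5.0), 2))
```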
Pain Location
The chest pain story begins with pain location. For practical purposes, pain that is substernal or left-sided is
equivalent. Pain in these locations is consistent with
fatal diseases such as acute coronary syndrome as well as
non life-threatening diseases such as gastroesophageal
reflux disease and ▶ pericarditis. Pain in the periphery of
the chest is more consistent with a disease of pleural,
pulmonary, or musculoskeletal origin. Associated with
location is the concept of radiation, or extension of
the pain into other areas of the body. Again, certain diseases have classic radiations. Examples include the pain of
aortic dissection which typically radiates to the back or
that of acute myocardial infarction, which often radiates to the left shoulder.
Chest Pain: Differential Diagnosis. Table 2 Elements of the chest pain story
Timing: average duration (seconds, minutes, or hours?); frequency (only once or multiple occurrences?); time of onset (first time ever that the pain occurred?); time of most recent episode (within the past 6 h, 24 h, or longer?)
Location: right or left chest, upper or lower, central or peripheral?
Radiation: where?
Quality: best descriptive adjective for the pain?
Precipitating factors: eating, breathing, exertion?
Relieving factors: resting, nitroglycerin, antacids?
Associated symptoms: diaphoresis, nausea, shortness of breath?
Quality and Intensity
The quality and intensity of the pain are the next characteristics that are usually considered. Both of them are, in general, nondiscriminating. The pain of acute myocardial
infarction may be described as “pressure,” “heaviness,”
“burning,” “aching,” or “discomfort.” The intensity of the
pain is usually nondiscriminating as well. For example,
the intensity of pain in patients presenting with acute myocardial infarction or of acute aortic dissection is usually
severe but can be mild or even nonexistent. Conversely, the
pain from gastroesophageal reflux or musculoskeletal origin
is usually mild but may be extremely severe.
Timing of Pain
The timing of the chest pain is probably the most difficult
element of the chest pain story to capture. The concept of
timing includes when the pain originally began, its typical
duration, the frequency with which it occurs, and the
onset of the most recent episode. While no hard-and-fast
rules apply, some general principles are useful. One such
principle is that pain that has been going on for weeks or
months is not usually due to a disease that causes major morbidity or mortality. Conversely, for a patient presenting with an acute
myocardial infarction, the average interval between the
onset of pain and presentation to an emergency department is between 3 and 9 h. The duration of chest pain
from pulmonary, pleural, musculoskeletal, and gastrointestinal sources can be hours without interruption. The
number of times per day or week that the pain occurs can
be helpful as well. In general, pain that occurs frequently is
usually less worrisome than pain that occurs occasionally.
Finally, the time of the most recent occurrence can be used
to determine the value of certain diagnostic tests such as
cardiac biomarkers.
Relieving and Precipitating Factors
Cardiac pain is typically precipitated by exertion and
relieved by rest, for example by stopping a strenuous activity. Chest pain from pleural, pulmonary,
or musculoskeletal sources is often worsened by coughing
or deep breathing. Examples include the pains of viral
pleuritis, pulmonary embolism, pneumonia, or intercostal
muscle strain.
Associated Symptoms
Symptoms that typically accompany the chest pain can
increase the probability of certain diseases. For example,
nausea and diaphoresis are common accompaniments to
chest pain in patients presenting with acute coronary
syndrome. A sour taste in a patient's mouth during episodes of chest pain increases the possibility of gastroesophageal reflux disease. Acute neurologic
symptoms accompanying chest pain increase the probability that the chest pain is caused by an aortic dissection.
Cross-Reference to Disease
▶ Acute Coronary Syndrome
▶ Acute Myocardial Infarction
▶ Aortic Dissection
▶ Aortic Stenosis
▶ Pericarditis
▶ Pneumonia
▶ Pneumothorax, Tension Pneumothorax
▶ Pulmonary Embolism
References
1. Brown JE, Hamilton GC (2010) Chest pain. In: Rosen's emergency medicine: concepts and clinical practice, 7th edn. Mosby Elsevier, Philadelphia, pp 132–141
2. Lenfant C (2010) Chest pain of cardiac and noncardiac origin. Metabolism 59(Suppl 1):S41–S46
3. Jones ID, Slovis CM (2001) Emergency department evaluation of the chest pain patient. Emerg Med Clin North Am 19:269–282
4. Jesse RL, Kontos MC (1997) Evaluation of chest pain in the emergency department. Curr Probl Cardiol 22:149–236
5. Goodacre S, Locker T, Morris F, Campbell S (2002) How useful are clinical features in the diagnosis of acute, undifferentiated chest pain? Acad Emerg Med 9:203–208
6. Swap CJ, Nagurney JT (2005) Value and limitations of chest pain history in the evaluation of patients with suspected acute coronary syndromes. JAMA 294(20):2623–2629
Chest Tube: Chest Drain or
Thoracostomy Tube
▶ Thoracocentesis and Chest Tubes
Chest Wall Stabilization
DONALD D. TRUNKEY1, JOHN C. MAYBERRY2
1Department of Surgery, Oregon Health & Science University, Portland, OR, USA
2Trauma/Critical Care, Oregon Health & Science University, Portland, OR, USA
Synonyms
Fixation or repair; Flail chest stabilization; Rib and/or
sternal fracture operative reduction and internal fixation
(ORIF)
Definition
Chest wall stabilization is a surgical procedure in which rib
and/or sternal fractures are reduced (i.e., the fracture ends
are realigned and brought into proximity) and the fractures are fixated with a plating system.
Pre-existing Condition
Chest wall injury syndromes for which operative intervention may be indicated are listed in Table 1. Category
recommendations are based upon a review of the literature
and upon the authors’ experience.
Chest Wall Stabilization. Table 1 Recommendations for chest wall stabilization for each indication
Flail chest: Category II
Chest wall implosion syndrome: Category II
Chest wall defect/pulmonary herniation: Category I
Intractable acute pain with displaced fractures: Category III
Thoracotomy for other indication ("on the way out"): Category III
Displaced or comminuted acute sternal fracture: Category III
Rib or sternal fracture nonunion (pseudoarthrosis): Category III
Flail chest is defined by three or more ribs fractured in two or more places. Paradoxical motion of the chest wall (i.e., flail motion) may or may not be visible. If the patient has already been endotracheally intubated and mechanically ventilated, the flail segment will not be externally apparent. The diagnosis is established by CT scan. Two
small, single center randomized trials and cohort comparison studies have demonstrated several benefits of early
flail chest ORIF including decreased intensive care length
of stay, less pneumonia, early return to work, and
improved forced vital capacity (FVC) [1, 2].
Chest wall implosion syndrome is characterized by
multiple, displaced rib fractures along the medial edge of
the scapula, a clavicle fracture/dislocation, and often
a scapular fracture. Although this injury does not meet
the anatomic definition of flail chest, these patients are
physiologically similar to patients with anterolateral flail
chest, i.e., nearly all will require mechanical ventilation for
respiratory failure [3].
Chest wall defect or acute pulmonary herniation is
a rare injury where a portion of the chest wall is traumatically missing or the lung herniates through the chest wall,
e.g., through an intercostal muscle tear with associated rib
fractures. Operative repair is indicated to debride severely
damaged tissue and to restore pulmonary mechanical
integrity. A bioprosthesis such as acellular human or porcine dermis may be necessary to cover the tissue defect.
Serial operations with staged repair are recommended for
more severe tissue defects. Operative intervention is the
standard of care based on the lack of an acceptable alternative to surgical repair [4].
An occasional patient with significant displacement, including overriding of the fractured ribs, will complain of intractable pain with attempts at mobilization that defies the usual measures of pain control, including epidural catheter infusion. This indication has not been studied,
but in the authors’ experience ORIF of the displaced rib
fractures can result in a dramatic improvement in pain
and allow the patient to recuperate and return to normal
function more rapidly.
Thoracotomy for other indications or “on the way
out” indicates a patient with rib fractures who requires
a thoracotomy for a traumatic indication such as retained
hemothorax, pulmonary laceration, ruptured diaphragm,
or even aortic injury. As the surgeon is closing the thoracotomy it may be reasonable, depending on the nature of
the rib fractures and the condition of the patient, to take
extra time to include rib fracture ORIF with the intent of
preventing future disability. This indication also applies
to non-traumatic situations where ribs are fractured or
purposely cut during thoracotomy exposure for elective
surgery. Rib fracture ORIF for this indication can be
considered safe in select patients but has not been studied
for efficacy.
Sternal fractures are occasionally acutely repaired
when they are completely displaced or comminuted. The
literature describing the operative techniques and results
consists of case series only and includes no comparison groups [5].
Acute sternal fracture ORIF is therefore an acceptable
option in select patients and can be considered safe, but
warrants a Category III recommendation only.
Rib or sternal fracture nonunions (pseudoarthroses)
occur in 1–5% of patients and can be a source of persistent
pain and disability. Resection of the pseudocapsule and
margins of the bony defect to reinitiate osteosynthesis in
conjunction with internal fixation has been reported as
successful and efficacious in case series [5]. The successful
use of bone grafting techniques in situations of bone loss
for both rib and sternal fracture nonunions has also been
described. Neither indication has been studied in
a controlled fashion and, therefore, warrants a Category
III recommendation only.
Four different levels of recommendations exist:
● Category I. Operative intervention is standard of care.
● Category II. Operative intervention is acceptable in
selected patients based on the results of single-center
randomized trials and case-control series.
● Category III. Operative intervention is not clearly
indicated based on insufficient evidence.
● Category IV. Operative intervention has been demonstrated to have a lack of efficacy.
Application
Several plating systems have been used but none has
proven superior to another. Both metal and absorbable
plates have been used successfully [3]. Ribs are classified as
membranous bone because of their relatively thin cortex
compared to their inner marrow and are not expected to
hold a plate and/or screws as reliably as cortical or cancellous bone. Efficacious plating systems must also take into
account the curvature of ribs and the constant stress of
respiratory effort of the patient during the several weeks of
healing process.
References
1. Tanaka H, Yukioka T, Yamaguti Y et al (2002) Surgical stabilization or internal pneumatic stabilization? A prospective randomized study of management of severe flail chest patients. J Trauma 52(4):727–732; discussion 732
2. Marasco S, Cooper J, Pick A, Kossmann T (2009) Pilot study of operative fixation of fractured ribs in patients with flail chest. ANZ J Surg 79(11):804–808
3. Solberg BD, Moon CN, Nissim AA, Wilson MT, Margulies DR (2009) Treatment of chest wall implosion injuries without thoracotomy: technique and clinical outcomes. J Trauma 67(1):8–13; discussion 13
4. Mayberry JC, Ham LB, Schipper PH, Ellis TJ, Mullins RJ (2009) Surveyed opinion of American trauma, orthopedic, and thoracic surgeons on rib and sternal fracture repair. J Trauma 66:875–879
5. Richardson JD, Franklin GA, Heffley S, Seligson D (2007) Operative fixation of chest wall fractures: an underused procedure? Am Surg 73(6):591–596; discussion 596–597
Chicago Disease
▶ Blastomycosis
Childbed Fever
▶ Puerperal Sepsis
Child-Pugh
Also known as the Child-Turcotte-Pugh score, this is a prognostic scoring system used in patients with cirrhosis that consists of five components: bilirubin, albumin, INR, ascites, and hepatic encephalopathy. Based on the level of each of these parameters, a score of 1–3 is awarded for each component, with a composite score of 6 or less equating to Child-Pugh A, 7–9 to Child-Pugh B, and 10 or more to Child-Pugh C disease. Prognosis worsens as an individual moves from Child-Pugh A through to Child-Pugh C cirrhosis.
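As an illustration only, the class assignment just described (composite score of 6 or less = A, 7–9 = B, 10 or more = C) can be expressed as a small helper; the function is a hypothetical sketch, and the scoring of the individual components is not implemented here.

```python
def child_pugh_class(total_score: int) -> str:
    """Map a composite Child-Pugh score (5-15) to class A, B, or C, as described above."""
    if not 5 <= total_score <= 15:
        raise ValueError("Composite Child-Pugh scores range from 5 to 15")
    if total_score <= 6:
        return "A"
    if total_score <= 9:
        return "B"
    return "C"


# Example: a patient scoring 2 + 2 + 1 + 2 + 1 = 8 is Child-Pugh class B.
print(child_pugh_class(8))
```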
Chirodropid Jellyfish
▶ Jellyfish Envenomation
Chironex fleckeri
▶ Jellyfish Envenomation
Choice of Catheter Lumen
▶ Port Designation
Cholangiopathy
▶ HIV-Related Cholecystitis
Cholecystitis
CHRISTOPHER M. WATSON1, ROBERT G. SAWYER2
1Division of Trauma and Acute Care Surgery, Palmetto Health, Columbia, SC, USA
2Department of Surgery, University of Virginia Health System, Charlottesville, VA, USA
Synonyms
Acute acalculous cholecystitis; Acute calculous cholecystitis; Acute cholecystitis
Definition
Cholecystitis is defined as inflammation of the gallbladder.
The disease can present acutely without prior symptoms
but more commonly after episodes of biliary colic, and it may occur with or without gallstones, in which case the descriptor calculous or acalculous, respectively, is added. In either case, it is believed that stasis of bile and gallbladder ischemia occur, leading to inflammation of the gallbladder wall and eventually of the surrounding structures as well, resulting in a localized peritonitis. It was originally thought that the
disease was solely attributable to infection. Later, in the
early 1940s, studies on animals demonstrated that stasis
was a primary pathologic condition that was necessary but
not sufficient for cholecystitis to develop.
With further work, it became clearer that ischemia was
also an important part of the pathogenesis. Since acute
calculous cholecystitis develops as a result of impaction of
a gallstone in the cystic duct, two conditions are met: bile
stasis and localized ischemia from distension of the gallbladder wall.
In acute acalculous cholecystitis (AAC), ischemia and
stasis are also present although with distinct mechanisms.
A generalized ischemic insult, whether from trauma, surgery, or a condition such as septic shock or vasopressor
use, is thought to precede inflammation. Stasis is due to
decreased gallbladder contraction secondary to starvation
or the severe disease state itself. Traditionally, postoperative states or trauma were most commonly associated with
the development of AAC, but a review of patients undergoing cholecystectomy for AAC found that infection was
the most common admission diagnosis, with postoperative state and trauma in only 33% [1]. In the general
medical and surgical population, patients with AAC tend
to be sicker with higher Sequential Organ Failure Assessment (SOFA) scores [1], whereas in the trauma population other markers of severity, such as Injury Severity
Score, number of units of packed red blood cells transfused, and tachycardia, are associated with AAC [2].
Although prolonged nil per os (NPO) status has been
associated with this disease, the same study found 56%
had received mainly enteral nutrition, while the remainder
received mainly parenteral nutrition [1]. More indirect
evidence seems to contradict this observation.
A randomized controlled trial of postoperative patients
receiving either enteral nutrition or intravenous saline
infusion showed that gallbladder volume was lower with
the former treatment thus indicating less stasis of bile [3].
There was no discussion of the proportion of patients
developing AAC in either group.
The role of bacteria in cholecystitis is still being
defined. Matsushiro et al. evaluated 52 patients presenting
with acute cholecystitis for the presence of bacteria in the
gallbladder at the time of cholecystectomy [4]. They found
bacteria present in 52% of those gallbladders with stones
and 33% of those without stones, although the generally
agreed upon culture positivity rate in acute cholecystitis is
in the 60–80% range. Of those with stones, those gallbladders with impacted stones more likely had bacteria present. Time to surgery did not show significantly different
bacteria in this study, although in other studies, patients
undergoing cholecystectomy earlier than 72 h after symptoms began were less likely to have bacteria in their
gallbladder. Also, infected bile seems to be more common
with age. Further complicating the picture is the finding
that the region of gallbladder cultured may also determine
whether bacteria are recovered [5].
Specific organisms differ somewhat regionally, but
enteric gram-negative aerobes, especially Escherichia coli
and Klebsiella species, and Streptococcus (Enterococcus)
faecalis predominated in the Matsushiro review. Other
reviews demonstrated more anaerobes, accounting for as
much as 25% of bacterial isolates [6].
Diagnosis
Traditionally, clinical indicators of infection or inflammation and right upper quadrant abdominal pain, coupled
with data from specific imaging modalities, have been
used to diagnose both acute cholecystitis and AAC.
Although fever and an abnormal white blood cell count
(WBC) may be present, they are not invariably so. The
Tokyo Guidelines require both local and systemic signs of
inflammation to suspect cholecystitis, and typical imaging
findings to confirm cholecystitis (Table 1) [7].
Cholecystitis. Table 1 Tokyo guideline grading system for acute cholecystitis
Mild (grade I) acute cholecystitis:
● Does not meet the criteria for moderate (grade II) or severe (grade III) cholecystitis
● Also defined as a healthy patient with no organ dysfunction and mild local inflammation, making cholecystectomy a low-risk procedure
Moderate (grade II) cholecystitis – any one of the following:
● WBC > 18,000/mm3
● Palpable, tender RUQ mass
● Duration of complaints >72 h
● Marked local inflammation (biliary peritonitis, pericholecystic abscess, hepatic abscess, gangrenous cholecystitis, emphysematous cholecystitis)
Severe (grade III) cholecystitis – organ system dysfunction:
● Cardiovascular dysfunction (requiring vasopressors or inotropes)
● Neurologic (depressed level of consciousness)
● Respiratory (PaO2/FiO2 ratio < 300)
● Renal dysfunction (oliguria, creatinine > 2.0 mg/dl)
● Hepatic (PT-INR > 1.5)
● Hematologic (platelet count <100,000/mm3)
Source: Adapted from [21].
In AAC,
Cholecystitis
fever may be present in only 13% and leukocytosis in only
54% [1]. Unlike acute appendicitis, where right lower
quadrant pain and a correlative history may lead directly
to the operating room without further study, imaging
should always be included in the work-up of presumed
acute cholecystitis. This is because no examination finding
alone has been found sufficiently accurate to justify cholecystectomy, and associated findings, such as the presence
of stones or dilated common bile duct, may change the
procedure to include an intraoperative cholangiogram or
common bile duct exploration. Also, signs of gangrenous,
emphysematous, or perforated cholecystitis will affect
prognosis and the likelihood of conversion to an open
procedure. The most important imaging studies are
focused ultrasonography (US), scintigraphy (HIDA, hepatobiliary iminodiacetic acid), and computed tomography (CT) with intravenous contrast. More recently,
modifications of these modalities have been introduced
and may increase accuracy but have not penetrated the
mainstream. Magnetic resonance imaging (MRI) may also
have a role in the diagnosis of cholecystitis in difficult
cases, but especially for possible malignancy or evaluation
for CBD stones.
Ultrasound
US findings consistent with cholecystitis include gallstones (especially an impacted stone) or echogenic debris; a positive sonographic Murphy’s sign; wall thickening (>4 mm); gallbladder distention (long axis >8 cm, short axis >4 cm); and
pericholecystic fluid. Of these findings, the first three are
considered the most specific [8], especially when present together. For example, the combination of gallstones with either a sonographic Murphy’s sign or wall thickening has a positive predictive value for cholecystitis of 92% and 95%, respectively [9]. However, the sensitivity of US for the diagnosis of cholecystitis is, as with all US studies, operator dependent.
In a later study, sensitivity of US diagnosis of cholecystitis
compared with histology was only 48% [10], but a meta-analysis by Shea et al. reported a sensitivity of 94% for the diagnosis of acute cholecystitis [11]. Recently, studies have evaluated surgeon-performed US as a modality for the diagnosis of cholecystitis. These studies show that resident
surgeons with minimal training could detect gallstones
and cholecystitis as well as consultant radiologists [12].
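For convenience, the test characteristics quoted throughout this section are defined in the usual way from the numbers of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). These are the standard definitions and are not specific to any of the cited studies:

\[
\text{sensitivity} = \frac{TP}{TP+FN}, \qquad \text{specificity} = \frac{TN}{TN+FP},
\]
\[
\text{PPV} = \frac{TP}{TP+FP}, \qquad \text{NPV} = \frac{TN}{TN+FN}.
\]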
Ultrasound also has a poor sensitivity when used alone
for the detection of AAC. In a study of critically ill patients
undergoing open cholecystectomy for presumed AAC,
only 80% had an abnormal US prior to surgery [1]. Similarly, in a trauma ICU population, US had a sensitivity of
30% and specificity of 93% [13]. However, in another study
of trauma patients, all patients with thickening and layering
of the gallbladder wall or necrotic degeneration, edema of the
surrounding tissue, and/or impending rupture coupled with
major clinical symptoms (pain and/or abdominal distention,
hemodynamic instability requiring increasing amounts of
vasopressors and/or fluid resuscitation and organ failure)
were found to have AAC.
Scintigraphy
Hepatobiliary scintigraphy evaluates the biliary uptake of
Tc-99m-labeled iminodiacetic acid agents (Tc-99m IDA)
and has a high sensitivity and specificity for the diagnosis
of acute cholecystitis. A study that does not show filling of
the gallbladder with the radiotracer within 60 min is considered
positive for cholecystitis. Another sign that is suspicious
for cholecystitis is a “rim sign,” defined as augmentation
of radioactivity around the gallbladder fossa. After its
introduction, scintigraphy was suggested as a first-line
test in patients with presumed cholecystitis. Sensitivity
and accuracy were 91% and 93% in an early study [14].
Specificity, however, was lacking. This led early investigators to suggest that a positive result indicated cholecystitis
only when serum bilirubin was less than 5 mg/dl while in
patients with bilirubin higher than 5 mg/dl the test was
considered indeterminate. A negative test was considered
reliable. However, an evaluation done a decade later found
a similar sensitivity (94%), but a specificity of only 36%
[15]. This low specificity led these later investigators to
suggest HIDA be eliminated as a first-line study.
Confounding the issue even more was a study comparing
US, HIDA, and combined US/HIDA. HIDA was found to
be more sensitive than US and again the recommendation
was made to use HIDA as a first-line study, only using
US when stones are suspected in order to evaluate for
common bile duct dilation or obstructing stones
[10, 16]. With better contrast agents and patient selection,
specificity has improved. Also, morphine can be given to
increase the tone of the sphincter of Oddi. Filling of
the GB within 30 min is considered a negative test with
a false-negative rate of only 0.5%. Filling between 30 min
and 4 h increases the false-negative rate to 15–20% [17].
However, in a corroborating study, morphine cholescintigraphy had a sensitivity of 99%, a specificity of 91%,
a positive predictive value of 0.9, a negative predictive
value of 0.99, and an overall accuracy of 94%. This study
detected both calculous and acalculous cholecystitis [18].
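Because predictive values depend on the pretest probability of disease, the figures reported for morphine cholescintigraphy are easiest to interpret with a worked example. The short Python sketch below applies the reported sensitivity and specificity at several assumed prevalences; the prevalences are illustrative only and are not taken from the cited study.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at a given disease prevalence."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.99, 0.91  # reported for morphine cholescintigraphy [18]
for prev in (0.2, 0.5, 0.8):
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")

At a pretest probability of roughly 50%, this reproduces the reported positive predictive value of about 0.9 and negative predictive value of about 0.99.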
Computed Tomography
Although not required in all cases of presumed cholecystitis,
CT is often used when HIDA scintigraphy and US are
indeterminate or to evaluate for associated pathology such
as gangrenous or emphysematous cholecystitis. Both of
these latter findings carry a higher mortality than uncomplicated acute cholecystitis and require conversion to open
procedure more often. The presence of these signs may lead
to more urgent surgery and a more prolonged antibiotic
course postoperatively. Findings consistent with acute
cholecystitis are much the same as with US, absent, of course, the sonographic Murphy’s sign. The detection of stones
is also limited, such that only 75% of stones are seen
on CT. As such, the most specific sign of cholecystitis is
pericholecystic inflammatory changes. Overall sensitivity,
specificity, and accuracy of CT for the diagnosis of cholecystitis in one study were 92%, 99%, and 94%, respectively [19], and in another study directly comparing US with CT, CT was 100% accurate, sensitive, and specific for the diagnosis of
acute cholecystitis [20].
For AAC, CT has a variable sensitivity. In a study by
Laurila et al., only 58% of patients had CT signs of AAC
prior to operation but in another study, CT was used to
correctly diagnose AAC in six of seven patients with one
false positive finding. CT may nevertheless have an adjunctive role in patients with indeterminate US studies [2].
Magnetic Resonance Imaging (MRI) and
Other Imaging Modalities
In a comparison of MRI with US, there was no difference
in the diagnosis of acute cholecystitis with a sensitivity of
50% for both and specificities of 89% and 86% for US and
MRI, respectively [21]. The authors suggested that limited
MRI may be indicated for “sonographically challenging”
patients. This likely means patients with large amounts of bowel gas or in whom the gallbladder and/or ductal structures are anatomically obscured. Cost-effectiveness was not
evaluated, however. Another modality currently being
investigated for both diagnosis and treatment of cholecystitis in the critically ill patient is bedside laparoscopy.
Conclusion
In conclusion, US should be performed as a first-line study
for presumed cholecystitis because of its broad availability
and ability to be performed at the bedside. If a patient with a clinical picture of acute cholecystitis has a US that shows stones and either a thickened GB wall or a sonographic Murphy’s sign, the patient should be treated for cholecystitis. If the US is indeterminate and clinical suspicion is low, morphine-HIDA scintigraphy should be used to rule out the diagnosis. If, however, the clinical suspicion is high, CT scanning should be performed to try to rule in the diagnosis. CT is also
indicated for patients with known cholecystitis in whom emphysematous or gangrenous cholecystitis is suspected and who would otherwise be treated conservatively, since these findings indicate the need for urgent surgery. If on
any of the imaging studies the patient has distal CBD
dilation, an MRCP may be useful to evaluate for
obstructing CBD stones unless the surgeon is experienced
with CBD exploration.
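The pathway just outlined can be summarized as a simple decision rule. The Python sketch below merely restates the algorithm of this conclusion (function and parameter names are invented for illustration) and omits the clinical nuances discussed above.

def next_step_after_ultrasound(stones, thick_wall_or_murphy,
                               indeterminate, high_clinical_suspicion):
    """Suggest the next step after first-line US for presumed cholecystitis."""
    if stones and thick_wall_or_murphy:
        return "treat for acute cholecystitis"
    if indeterminate:
        if high_clinical_suspicion:
            return "CT to try to rule in the diagnosis"
        return "morphine-HIDA scintigraphy to rule out the diagnosis"
    return "reassess; consider alternative diagnoses"

print(next_step_after_ultrasound(True, True, False, False))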
Treatment
Controversy exists over the optimal treatment of acute
cholecystitis. In an attempt to better define the categories
of severity of cholecystitis and thus guide treatment, the
Tokyo Guidelines were developed [7]. Experts in the fields
of cholecystitis and cholangitis convened to develop standardized diagnostic criteria, a severity grading system, and
a treatment guideline based on this grading system
(Table 1). The categories were based on factors increasing
the likelihood of conversion to an open procedure and the
possibility of complications during surgery. Certain high-risk situations may increase the likelihood of conversion to
an open procedure, such as a white-cell count of more
than 18,000 cells/mm3 at the time of presentation, duration of symptoms of greater than 72–96 h, and an age over
60 years, all of which are indicators of a more advanced
disease and increased likelihood of perforation or emphysematous changes [22]. These guidelines have not gained
widespread acceptance yet and need to be validated in
well-constructed trials.
Patients with mild cholecystitis should be treated
with antibiotics with or without early laparoscopic cholecystectomy, depending on the patient’s operative risk.
Moderate cholecystitis is the category for which it is most difficult to draw firm conclusions regarding treatment. These
patients can also be treated with early laparoscopic cholecystectomy, especially if symptoms have been present
for less than 96 h. In a prospective cohort study of laparoscopic versus open cholecystectomy for gangrenous
cholecystitis, patients having a cholecystectomy completed laparoscopically had significantly shorter ICU
stays, less ileus, but more abscess formation [23]. Bile
leaks were more common in the laparoscopic group
(12% versus 6% in the open group) but this did not
reach statistical significance. Since conversion to open
cholecystectomy is higher in this group, attempts at laparoscopic surgery should only be made in those that could
tolerate an open surgical procedure and by an experienced
laparoscopic surgeon. A Cochrane Review was performed
evaluating studies of timing of cholecystectomy for acute
cholecystitis. The authors noted that early laparoscopic
cholecystectomy was feasible and preferred in some select
patients as long as an experienced laparoscopic surgeon
performed the procedure. This recommendation was
based on the observation that 17.5% of patients undergoing delayed treatment had recurrent cholecystitis requiring operation and that, of those who then underwent laparoscopic surgery, 45% required conversion to an open procedure
[24]. Because of the small size of the included studies,
conclusions could not be made regarding the more rare
complications, such as bile duct injury. Large population
studies seem to imply a higher rate of bile duct injury in
the early group. If these patients are poor operative risks,
percutaneous cholecystostomy drainage is an alternative. In
the most severely ill patients with organ failure, percutaneous
drainage is preferred, but in the rare situation in which this
cannot be accomplished, laparoscopic cholecystectomy
should be performed if possible, with early conversion to
open surgery if needed. Whether cholecystostomy should be
followed by interval surgery or endoscopic sphincterotomy is
also debated. In a study of patients in the ICU that had
interval surgery during the same admission, the conversion
rate was 14% compared to the hospital-wide conversion rate
of 1.4% [25]. In another study of patients with Grade II or
III cholecystitis in the ICU treated with percutaneous
cholecystostomy tube placement, only two of 21 patients, at a mean follow-up of 17.5 months, presented with recurrent cholecystitis, and both of these were successfully
treated by conservative means [26].
The use of antibiotics in patients with acute cholecystitis is not controversial but the length of treatment continues to be a source of debate. In patients undergoing
early cholecystectomy (<72 h from the onset of symptoms), standard perioperative (<24 h) antibiotics should
be administered. In patients that are very ill from
cholecystitis, had a delay in treatment >72 h, are
immunosuppressed, are > 60 years old, or have concomitant cholangitis, a prolonged course of treatment, usually
no longer than 7 days, is indicated. Although the Tokyo
Guidelines do not comment on the length of antibiotic treatment, it can be extrapolated that patients with Grade I or II
cholecystitis can have perioperative dosing lasting less
than 24 h, while those with Grade III should likely receive
a 7–14 day course. If infected with a resistant pathogen or
associated bacteremia is noted, antibiotics may need to be
continued for 14 days.
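Schematically, the durations extrapolated above can be written as a small lookup. This is a sketch only; the function name is invented, and drug choice and individual patient factors are deliberately ignored.

def antibiotic_duration(tokyo_grade, resistant_pathogen_or_bacteremia=False):
    """Approximate treatment length extrapolated from the discussion above."""
    if resistant_pathogen_or_bacteremia:
        return "continue antibiotics for 14 days"
    if tokyo_grade in ("I", "II"):
        return "perioperative dosing, less than 24 h"
    if tokyo_grade == "III":
        return "7-14 day course"
    raise ValueError("unknown Tokyo grade")

print(antibiotic_duration("III"))  # 7-14 day course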
The choice of antibiotic should include coverage
for gram-negative enteric pathogens, as well as anaerobic
bacteria. Enterococcal species need not be covered.
In mild, community-acquired infections, ampicillin/sulbactam, ticarcillin/clavulanate, or ertapenem may be selected. In high-risk patients, and in those with recent hospitalization or antibiotic use, more broad-spectrum agents such as piperacillin/tazobactam or meropenem should be chosen.
After-care
Most patients that have cholecystitis treated adequately
require no special aftercare. Patients can expect to spend
between 1 and 7 days in the hospital depending on the
severity of the cholecystitis, whether the surgery was laparoscopic or open, and whether prolonged antibiotics are
administered. Patients are allowed to resume a regular diet
as soon as ileus resolves which again is dependent on the
type of surgery. Some patients may experience early fatty
meal intolerance but this is expected to resolve within
a few weeks as the patient alters their diet to compensate.
Patients treated with percutaneous drain placement do
require special care. The patient will be discharged with
the tube in place. Most will have had the tube clamped
prior to discharge and are educated about tube care and
what symptoms should prompt resumption of drainage. If
the tube was placed for calculous disease, a contrast study
is performed to evaluate for remaining stones. If stones
remain, a decision to remove these is made in conjunction
with a surgeon, endoscopist, and interventional radiologist. If the patient is an operative candidate, cholecystectomy can be performed. In older more debilitated patients
an endoscopic sphincterotomy can be performed with the
expectation of good results. An alternative is exchange of
the percutaneous cholecystostomy tube using a guidewire
to a larger bore tube followed by stone extraction. When it
has been verified that all stones are cleared and the common bile duct is patent, the tube can be removed.
Prognosis
Prognosis after cholecystectomy is excellent. If performed
by an experienced laparoscopic surgeon the rate of complications is very low. Most studies comparing early to late
cholecystectomy show that delayed surgery results in
a relatively large number of patients presenting with recurrent cholecystitis requiring urgent operation prior to the
planned cholecystectomy. A large number of these patients
will need open surgery. From other studies that evaluate
interval cholecystectomy, it appears that conversion rates
are lower in those that actually make it to planned
operation.
References
1. Laurila J, Syrja LA, Laurila PA, Ala-Kokko TI (2004) Acute acalculous
cholecystitis in critically ill patients. Acta Anaesthesiol Scand
48:986–991
2. Pelinka LE, Schmidhammer R, Hamid L et al (2003) Acute acalculous
cholecystitis after trauma: a prospective study. J Trauma 55:323–329
3. Sustic A, Krznaric Z, Naravic M et al (2000) Influence on gallbladder
volume of early postoperative gastric supply of nutrients. Clin Nutr
19(6):413–416
4. Matsushiro T, Sato T, Umezawa A et al (1997) Pathogenesis and the
role of bacteria in acute cholecystitis. J Hepatobiliary Pancreat Surg
4:91–94
5. Manolis EN, Filippou DK, Papadopoulos VP, Kaklamanos I,
Katostaras T, Christianakis E, Bonatsos G, Tsakris A (2008) The
culture site of the gallbladder affects recovery of bacteria in symptomatic cholelithiasis. J Gastrointest Liver Dis 17(2):179–182
6. Claesson B, Holmlund D, Mätzsch T (1984) Biliary microflora in
acute cholecystitis and the clinical implications. Acta Chir Scand
150:229–237
7. Mayumi T, Takada T, Kawarada Y, Nimura Y, Yoshida M et al
(2007) Results of the Tokyo consensus meeting Tokyo guidelines.
J Hepatobiliary Pancreat Surg 14:114–121
8. Bennett GL, Balthazar EJ (2003) Ultrasound and CT evaluation of
emergent gallbladder pathology. Radiol Clin North Am 41
(6):1203–1216
9. Ralls PW, Colletti PM, Lapin SA et al (1985) Real-time sonography in
suspected acute cholecystitis: prospective evaluation of primary and
secondary signs. Radiology 155:767–771
10. Kalimi R, Gecelter GR, Caplin D, Brickman M, Tronco GT,
Love C, Yao J, Simms HH, Marini CP (2001) Diagnosis of
acute cholecystitis: sensitivity of sonography, cholescintigraphy, and
combined sonography-cholescintigraphy. J Am Coll Surg 193
(6):609–613
11. Shea JA, Berlin JA, Escarce JJ, Clarke JR, Kinosian BP,
Cabana MD, Tsai WW, Horangic N, Malet PF, Schwartz JS et al
(1994) Revised estimates of diagnostic test sensitivity and specificity
in suspected biliary tract disease. Arch Intern Med 154(22):
2573–2581
12. Eiberg JP, Grantcharov TP, Eriksen JR, Boel T, Buhl C, Jensen D,
Pedersen JF, Schulze S (2008) Ultrasound of the acute abdomen
performed by surgeons in training. Minerva Chir 63(1):17–22
13. Puc MM, Tran HS, Wry PW, Ross SE (2002) Ultrasound is not
a useful screening tool for acute acalculous cholecystitis in critically
ill trauma patients. Am Surg 68(1):65–69
14. Bennett MT, Sheldon MI, dos Remedios LV, Weber PM
(1981) Diagnosis of acute cholecystitis using hepatobiliary scan
with technetium-99 m PIPIDA. Am J Surg 142(3):338–343
15. Johnson H Jr, Cooper B (1995) The value of HIDA scans in the
initial evaluation of patients for cholecystitis. J Natl Med Assoc 87
(1):27–32
16. Alobaidi M, Gupta R, Jafri SZ, Fink-Bennet DM (2004) Current
trends in imaging evaluation of acute cholecystitis. Emerg Radiol
10(5):256–258, Epub 2004 Mar 17
17. Hicks RJ, Kelly MJ, Kalff V (1990) Association between false negative
hepatobiliary scans and initial gallbladder visualization after 30 min.
Eur J Nucl Med 16:747–753
18. Flancbaum L, Choban PS, Sinha R, Jonasson O (1994) Morphine
cholescintigraphy in the evaluation of hospitalized patients with
suspected acute cholecystitis. Ann Surg 220(1):25–31
19. Bennett GL, Rusinek H, Lisi V, Israel GM, Krinsky GA, Slywotzky CM
et al (2002) CT findings in acute gangrenous cholecystitis. AJR Am
J Roentgenol 178:275–281
20. De Vargas Macclucca M, Lanciotti S, De Cicco ML, Bertini L,
Colalacomo MC, Gualdi G (2006) Imaging of simple and complicated acute cholecystitis. Clin Ter 157(5):435–442
21. Oh KY, Gilfeather M, Kennedy A, Glastonbury C, Green D, Brant W,
Yoon HC (2003) Limited abdominal MRI in the evaluation of acute
right upper quadrant pain. Abdom Imaging 28(5):643–651
22. Strasberg SM (2008) Acute calculous cholecystitis. N Engl J Med 358
(26):2804–2811
23. Stefanidis D, Bingener J, Richards M et al (2005) Gangrenous cholecystitis in the decade before and after the introduction of laparoscopic cholecystectomy. JSLS 9:169–173
24. Gurusamy KS, Samraj K, Fusai G, Davidson BR (2008) Early versus
delayed laparoscopic cholecystectomy for biliary colic. Cochrane
Database Syst Rev 8(4):CD007196
25. Spira RM, Nissan A, Zamir O, Cohen T, Fields SI, Freund HR
(2002) Percutaneous transhepatic cholecystostomy and delayed laparoscopic cholecystectomy in critically ill patients with acute calculous cholecystitis. Am J Surg 183:62–66
26. Griniatsos J, Petrou A, Pappas P et al (2008) Percutaneous
cholecystostomy without interval cholecystectomy as definitive
treatment of acute cholecystitis in elderly and critically ill patients.
South Med J 101(6):586–590
Chronic Bronchial Sepsis
▶ Bronchitis and Bronchiectasis
Chronic Bronchitis
▶ Decompensated Chronic Obstructive Pulmonary Disease
Chronic Kidney Disease (CKD)
▶ Decreased Estimated Glomerular Filtration Rate (eGFR):
Interpretation in Acute and Chronic Kidney Disease
Chronic Lung Disease
▶ Decompensated Chronic Obstructive Pulmonary Disease
Chronic Obstructive Airway Disease
▶ Decompensated Chronic Obstructive Pulmonary Disease
Chronic Salicylate Toxicity
▶ Salicylate Overdose

Churg–Strauss Syndrome
▶ Pulmonary-Renal Syndrome

Chylothorax
LAURA J. MOORE
Department of Surgery, The Methodist Hospital Research Institute, Houston, TX, USA

Synonyms
Chylous pleural effusion

Definition
Chylothorax is defined as the presence of chyle in the thoracic cavity. This typically occurs when chyle leaks from the thoracic duct or one of its major branches into the pleural space. This leakage of chyle can be due to congenital abnormalities, traumatic injury of the thoracic duct, invasion of the thoracic duct by a tumor or malignancy, infection, or thrombosis of the venous system. Chyle is lymphatic fluid that is typically laden with free fatty acids, cholesterol, and phospholipids, resulting in a milky color to the fluid. The predominant cell type within chyle is lymphocytes. The concentration of free fatty acids, cholesterol, and phospholipids varies depending upon absorption of these products from the small intestine. The ingestion of triglycerides and phospholipids results in their absorption by the intestinal epithelium. Upon absorption, those triglycerides that contain fatty acids of 12 carbons or less are absorbed directly into the blood stream. These are termed medium-chain triglycerides. Triglycerides composed of fatty acids that are longer than 12 carbons (long-chain triglycerides) are not directly absorbed into the bloodstream. Instead, long-chain triglycerides are complexed with cholesterol, phospholipids, and binding proteins to form lipoproteins. Once assembled, the lipoproteins are transported through the lymphatic system, eventually arriving in the thoracic duct. Once in the thoracic duct, lipoproteins are emptied
into the venous blood near the junction of the left jugular
and left subclavian veins. Therefore, in the event that
a patient has not had any recent oral intake, the appearance of the chyle may actually be clear. The diagnosis of
a chylothorax is confirmed by the presence of chylomicrons in the pleural fluid.
The thoracic duct is the final common channel
through which all lymphatic fluid in the body reenters
the blood stream. The thoracic duct originates at the
cisterna chyli, typically between the third lumbar vertebra and the tenth thoracic vertebra. It then ascends
along the anterior surface of the vertebral bodies, lying
between the aorta and the azygos vein. At the level of the
fifth thoracic vertebra (T5), the thoracic duct crosses over
from right to left and continues its ascent posterior to the
aortic arch. Finally, it courses through the thoracic inlet
where it ultimately empties into the venous system somewhere near the junction of the left internal jugular vein
and left subclavian vein. While anatomic variations of the
thoracic duct do exist, this is the most common course.
Therefore, a thoracic duct injury below T5 will produce
a right sided chylothorax but an injury above T5 will
produce a left sided chylothorax.
Evaluation
Patients with a chylothorax may have symptoms that are
commonly associated with any type of pleural effusion
including shortness of breath, fatigue, chest discomfort,
and cough. The presence of chyle in the pleural space does
not cause any irritation of the pleura. Therefore, patients
will not typically complain of pleuritic chest pain if their
effusion is secondary to a chylothorax. Patients with
chylothorax will have evidence of a pleural effusion on
plain radiographs and/or computed tomography (CT) of
the chest. However, radiographic imaging alone cannot
distinguish chylothorax from other causes of pleural effusions. Definitive diagnosis of a chylothorax requires sampling of the pleural fluid. While the classic description of
a chylothorax is the return of milky white fluid, this is not
always present and the return of clear fluid from the
pleural space does not exclude chylothorax. The presence
of chylomicrons in the pleural fluid is the gold standard
for diagnosing a chylothorax.
Once the diagnosis of chylothorax has been made, the
etiology of the chyle leak must be further investigated. The
etiology of a chylothorax typically falls into one of three
categories: congenital, traumatic, or neoplastic. By far the
two most common causes of chylothorax are trauma and
neoplasms [1]. Obtaining a thorough history will often
elucidate the cause. Common surgical procedures associated with the development of a chylothorax include
esophagectomy, pneumonectomy, repair of aortic aneurysm, radical lymph node dissections of the neck, chest, or
abdomen, and surgery for the removal of mediastinal
tumors. In addition, blunt or penetrating trauma can
result in injury to the thoracic duct with subsequent
development of a chylothorax. Obstruction of the
thoracic duct by tumor is the most common cause for
non-traumatic chylothorax. Lymphoma is the by far the
most common malignancy seen in non-traumatic cases of
chylothorax, accounting for 70% of the cases. Other
potential but uncommon causes include congenital
atresia of the thoracic duct, mediastinal radiation, and
transdiaphragmatic passage of chylous ascitic fluid in
patients with cirrhosis [1].
In the event that the etiology of the chylothorax remains
unclear, diagnostic imaging may be helpful. CT scan of the
chest may reveal underlying tumor or mediastinal lymphadenopathy that had been previously undiagnosed. If available, lymphangiography or lymphoscintigraphy can be
utilized to define lymphatic anatomy and identify the
source of the leak [2]. This can be potentially useful for
operative planning purposes.
Treatment
Having a basic understanding of lipid metabolism and
thoracic duct anatomy is helpful in understanding the role
of various therapies in the management of chylothorax (see
above). The treatment plan should be individualized for
each patient and should take into account the underlying
etiology, duration, symptoms, nutritional status, and other
comorbid conditions. Treatment options can be broadly
categorized into nonoperative and operative therapies.
Most clinicians would favor an initial trial of nonoperative
therapy for a period of 1–2 weeks. However, prolonged nonoperative therapy may be associated with longer hospital stays
and an increased risk of complications. Therefore, the risk
versus benefit of nonoperative therapy must be critically
evaluated on a patient by patient basis.
Nonoperative Management
The initial step in the nonoperative management of
chylothorax is placement of a tube thoracostomy to
drain the pleural space and allow for re-expansion of the
lung. Tube thoracostomy is preferred over repeated
thoracentesis because it allows for apposition of the pleural surface which may promote sealing of the site of the
leak and because thoracentesis alone rarely results in complete drainage of the effusion. In addition, repeated
thoracentesis unnecessarily exposes the patient to the
risk of pneumothorax or hemothorax.
A key component of the nonoperative management of
chylothorax is an assessment of the patient’s nutritional
status. Because chyle is rich in triglycerides, proteins, and electrolytes, the ongoing loss of these substances can result in significant malnutrition and electrolyte abnormalities. Hyponatremia and hypocalcemia are the most
commonly encountered electrolyte disturbances. The
severity of these derangements is dependent upon the
volume and duration of the chyle leak. Monitoring the
patient’s nutritional status through weekly weights, serum
prealbumin and transferrin levels, and nitrogen balance is
critical. Manipulation of a patient’s enteral intake can
decrease the volume of chyle generated and therefore
increase the chances of the leak sealing with nonoperative
management. As mentioned above, long-chain triglycerides are unable to be absorbed directly into the blood
stream by the enterocytes. Therefore, they must be packaged as lipoproteins and travel through the thoracic duct
before re-entering the blood stream. By removing long-chain triglycerides from the diet, the volume of chyle
transported through the thoracic duct can be significantly
decreased. Instituting a low fat, medium-chain triglyceride diet will result in closure of the leak in 50% of cases [3].
Total parenteral nutrition may be utilized in the event that
dietary modification is unsuccessful and surgical management is not an option.
In those patients with chylothorax due to malignancy,
therapies targeting the primary malignancy may be of
benefit but the results are inconsistent [1]. Chemical
pleurodesis may be useful in patients who are not surgical candidates and who have failed chemotherapy and radiation
therapy. Talc, tetracycline, and bleomycin have all been
used successfully for chemical pleurodesis. In addition,
somatostatin has been shown to reduce the production
of intestinal chyle, with a resultant decrease in the chyle leak [6].
Operative Management
The surgical treatment of chylothorax involves ligation of
the thoracic duct. Surgical treatment should be considered
first-line therapy in those patients with postsurgical chylothorax. This is because conservative management of postsurgical chylothorax has been associated with increased mortality when compared with surgical treatment [4, 5]. Patients who have failed a trial of nonoperative
therapy should also be managed surgically. As a general
rule of thumb, two groups of patients will likely fail conservative management: (1) those patients with a chyle leak
of greater than 1.5 L/day and (2) those patients with
a sustained chyle leak of 1 L/day for 5 consecutive days.
In these patients, surgical intervention should be considered, as it will likely result in better outcomes.
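Expressed programmatically, this rule of thumb might look like the following Python sketch (daily chest-tube chyle outputs in liters; the function name is invented, and the thresholds are only the heuristic stated above):

def likely_to_fail_conservative_management(daily_chyle_output_l):
    """Heuristic: >1.5 L on any day, or >=1 L/day for 5 consecutive days."""
    consecutive = 0
    for output in daily_chyle_output_l:
        if output > 1.5:
            return True
        consecutive = consecutive + 1 if output >= 1.0 else 0
        if consecutive >= 5:
            return True
    return False

print(likely_to_fail_conservative_management([0.9, 1.1, 1.2, 1.0, 1.3, 1.0]))  # True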
Once the decision to pursue operative intervention has
been made, there are several techniques that can be utilized
to ligate the thoracic duct. Operative approaches include
open and thoracoscopic. In general, operating on the same
side as the effusion is preferred. Selective ligation of the
thoracic duct at the site of the leak may be performed if the
leak can be identified. Methylene blue may be mixed with
a fat source such as olive oil or cream and administered
enterally to help visualize the site of the leak. Once the leak
is identified, the thoracic duct is ligated above and below the
site of the leak. In the event that the leak cannot be easily
identified, further dissection around the thoracic duct to
identify the leak is discouraged, as this may lead to further
injury to the thoracic duct and its tributaries. Instead, mass
ligation of the soft tissues lying between the aorta, spine,
esophagus, and pericardium should be performed just
above the diaphragmatic hiatus in the right chest.
After-care
The main focus following resolution of a chylothorax is to
ensure correction of any nutritional, immunologic, or
electrolyte abnormalities that may have occurred. This
can include weekly assessments of nutritional status, monitoring for evidence of immunosuppression, and electrolyte replacement.
Prognosis
The prognosis for patients with chylothorax is highly
variable and dependent upon the underlying etiology.
With more aggressive management, there has been a
decrease in the morbidity and mortality associated with
this condition. Patients with iatrogenic or traumatic
chylothorax have the best prognosis for recovery. Those
patients with malignant chylothorax tend to have a worse
prognosis.
Cross Reference
▶ Pleural Disease and Pneumothorax
References
1. Nair SK, Petko M, Hayward MP (2007) Aetiology and management of chylothorax in adults. Eur J Cardiothorac Surg 32(2):362–369
2. Ngan H, Fok M, Wong J (1988) The role of lymphography in chylothorax following thoracic surgery. Br J Radiol 61(731):1032–1036
3. Fernández Alvarez JR, Kalache KD, Grauel EL (1999) Management of spontaneous congenital chylothorax: oral medium-chain triglycerides versus total parenteral nutrition. Am J Perinatol 16(8):415–420
4. Al-Zubairy SA, Al-Jazairi AS (2003) Octreotide as a therapeutic option for management of chylothorax. Ann Pharmacother 37(5):679–682
5. Orringer MB, Bluett M, Deeb GM (1988) Aggressive treatment of chylothorax complicating transhiatal esophagectomy without thoracotomy. Surgery 104(4):720–726
Chylous Pleural Effusion
▶ Chylothorax
Circulation
▶ Capillary Refill
Circulatory Assist Devices
ARES KRISHNA MENON1, RÜDIGER AUTSCHBACH2
1Klinik f. Thorax-, Herz-, Gefäßchirurgie, Klinikum der RWTH, Aachen, Germany
2Clinic for THG Surgery, University of Aachen, Aachen, Germany
Synonyms
Biventricular Assist Device (BVAD); Left Ventricular
Assist Device (LVAD); Left Ventricular Assist System
(LVAS); Mechanical Circulatory Assist; Mechanical Circulatory Support (MCS); Right Ventricular Assist Device
(RVAD); Ventricular Assist Device (VAD)
Definition and History
After the first use of cardiopulmonary bypass (CPB) in the 1950s and the increasing number of cardiac procedures, the need for extended circulatory assistance in patients who could not be weaned from CPB became obvious.
After the first experimental use of ventricular assist devices
(VAD) in 1963, DeBakey introduced the first clinical use of
a VAD in a patient after aortic valve replacement. Only
a few months later the group of Denton Cooley presented
their first successful use of an assisted circulation as
a bridge to transplantation (BTT). During these
pioneering works, two different types of system were explored: pneumatically driven rubber-tube or sac pumps, which offer pulsatile flow, and continuous flow devices such as centrifugal pumps.
As recorded in the recommendations of the National
Heart Advisory Group, the importance of mechanical support was recognized by the National Institutes of Health in the USA in 1964. The initial goal was to develop a total artificial heart (TAH). While the first TAH program was abandoned in 1991 due to the enormous rate of severe complications, the National Heart and Lung Institute meanwhile put its effort into the development and evaluation of left ventricular assist devices (LVADs). This led to
C
Circulatory Assist Devices
meanwhile put its effort in the development and evaluation of left ventricular assist devices (LVADs). This led to
the Food and Drug Administration (FDA) approval of
LVAD for BTT use in 1994. Thus, under high volume
sponsored research during the last 20 years, two different
types of devices became available: Pulsatile VADs as well
as the newer and smaller continuous flow pumps. Both
systems are usable for intracorporeal and paracorporeal
implantation. According to the degree of individual disease,
more or less all devices can be used for univentricular support as an LVAD or a right ventricular assist device (RVAD), or as a biventricular assist device (BVAD).
Pre-existing Condition
The treatment of heart failure is of tremendous and growing interest, even in the intensive and intermediate care units of our hospitals.
In heart failure or even in cardiogenic shock patients
the caring physician has to decide whether to treat the
patient with medication only or to use circulatory support
to stabilize hemodynamics and preserve organ function.
The so-called intention to treat (ITT) is the key issue in the rising use of VADs for mechanical cardiac support and has an essential influence on the choice of the individual device: whether as a Bridge to Recovery (BTR), a Bridge to Transplantation (BTT), or long-term circulatory support as so-called Destination Therapy (DT).
Other patients fall in the category of Bridge to Candidacy
(BTC). These are patients who at the time of an urgent
device implantation are either critically ill and have not been completely evaluated for OHT or have a major or relative contraindication to transplantation. Furthermore, the type of support needed has to be considered: is a univentricular assist device sufficient, or is the use of a biventricular device (BVAD) crucial?
Indications for Assisted Circulation
Usually, the use of a VAD is indicated in case of severe heart
failure which is refractory to the conservative treatment
options. If the patient cannot provide adequate systemic oxygen delivery to maintain normal end-organ function
despite maximal medical therapy, mechanical support is
indicated. The common hemodynamic criteria for device
implantation include a systolic blood pressure less than
80 mmHg, mean arterial pressure less than 65 mmHg,
cardiac index less than 2.0 L/min/m2, pulmonary capillary
wedge pressure greater than 20 mmHg, and a systemic
vascular resistance greater than 2,100 dyn-s/cm5 [1].
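These thresholds can be checked directly from routine measurements. The Python sketch below (illustrative only; the function and variable names are invented) derives the cardiac index and systemic vascular resistance from the standard formulas CI = CO/BSA and SVR = 80 x (MAP - CVP)/CO and then tests the quoted criteria; whether one or several criteria must be met remains a clinical judgment.

def hemodynamic_criteria(sbp, map_, cvp, cardiac_output, bsa, pcwp):
    """Return the list of device-implantation criteria met (see text)."""
    ci = cardiac_output / bsa                 # L/min/m2
    svr = 80 * (map_ - cvp) / cardiac_output  # dyn-s/cm5
    checks = {
        "systolic BP < 80 mmHg": sbp < 80,
        "mean arterial pressure < 65 mmHg": map_ < 65,
        "cardiac index < 2.0 L/min/m2": ci < 2.0,
        "PCWP > 20 mmHg": pcwp > 20,
        "SVR > 2,100 dyn-s/cm5": svr > 2100,
    }
    return [name for name, met in checks.items() if met]

# Example values chosen only for illustration
print(hemodynamic_criteria(sbp=75, map_=55, cvp=15,
                           cardiac_output=3.0, bsa=1.9, pcwp=25))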
The large variety of diseases treated with assisted circulation devices includes both acute and chronic forms of
heart failure.
Acute cardiogenic shock is one of the main reasons
for treating the patient in an emergency ward or chest pain
unit. There are several reasons for cardiogenic shock.
Acute myocardial infarction, for example, complicated
by cardiogenic shock has a very high mortality rate.
Outcomes have improved with a trend toward early intervention and more aggressive coronary reperfusion
strategies such as percutaneous intervention, coronary
bypass surgery, or aortic counterpulsation. Moreover, up
to 6% of patients after heart operation are still suffering
from low output syndrome, the post-cardiotomy shock,
especially after complex surgical procedures like heart
transplantation, multivalve replacement, or treatment of
severely impaired left ventricular function. Depending on
the age of patients who require assisted circulation, there
are some other typical indications. Myocarditis or dilated
cardiomyopathy (DCM) affects the younger patient group
with an often unpredictable outcome. The global dilatation of both the left and right ventricles often leads to biventricular heart failure and therefore requires adequate biventricular support. Moreover, a rare indication for VAD therapy is a complex ventricular arrhythmia,
if refractory to medical treatment.
The second, also large, cohort of patients considered for assisted circulation is the chronic heart failure group. An
estimated 2–5 million patients are suffering from heart
failure worldwide [2]. The continued aging of the population leads to a growing number of patients. The incidence and prevalence of this disease are clearly age dependent: on average, 2–5% of the population aged 65–70 years and about 10% of persons older than 70 years are affected, and around 500,000 new cases per year are registered. In spite of all advances in the medical treatment of severe heart failure, the prognosis of these patients is poor: more than 50% of patients with severe heart failure die within 1 year.
These patients have to be divided into two groups: those
who are eligible for orthotopic heart transplantation
(OHT), and those, who are not. OHT is the only treatment
that provides substantial individual benefit, but with fewer
than 4,000 donors available per year worldwide its impact
is epidemiologically trivial. Additionally, we find
a growing number of patients who are ineligible for cardiac transplantation because of advanced age, presence of
diabetes mellitus with end-organ damage, chronic renal
failure, or pulmonary hypertension. Therefore, the limitations of cardiac transplantation procedures have stimulated the development of alternative approaches to the
treatment of severe heart failure.
For these reasons, within the chronic heart failure group, assisted circulation can be employed as a BTT or as destination therapy.
Device Selection
Due to the above-mentioned circumstances, the treating physician has to decide which specific blood pump would be the appropriate tool for the individual patient. The operative risk of the implantation procedure has to be weighed against the potential lifestyle and survival benefit of mechanical support, in light of the stated intention to treat.
Application
Short-Term Circulatory Support
A large variety of technical devices exist to support the failing heart for a short time period. These devices have the advantage of an easy implantation technique and are used in the hope of early cardiac recovery or to bridge the patient to a more permanent ventricular assist device.
Intra-Aortic Balloon Pump (IABP)
Kantrowitz and coworkers presented the first clinical use of an intra-aortic balloon pump (IABP) for the treatment of cardiogenic shock after myocardial infarction in 1968. Once percutaneously placed in the descending aorta, its diastolic inflation and systolic deflation are triggered by the ECG or arterial pressure, resulting in a reduction of afterload and an improvement of coronary perfusion. The
application of the IABP is widespread because of its uncomplicated use and improved outcome in the treatment of myocardial infarction, postcardiotomy shock, postinfarction VSD, or acute mitral valve regurgitation caused by posterior wall infarction [3]. Critical limb perfusion is a rare but severe complication, and therefore IABP use has to be considered carefully in case of peripheral vascular disease.
Centrifugal Pumps
Originally used for CPB, centrifugal pumps were thereafter in many cases also applied for assisted circulation
because of the low costs, uncomplicated implantation
techniques, and easy handling. The Biomedicus BioPump (Medtronic Inc., Minneapolis MN, USA), the
Sarns centrifugal pump (3-M Health Care, Ann Arbor,
Michigan, USA), and the newer Centrimag (Levitronix
Inc.) are the most common pumps in this field. These pumps are placed paracorporeally, and implantation can be achieved either via cannulation of the groin vessels or – in case of postcardiotomy shock – via connection to the cannulas of the CPB intraoperatively. In case of concomitant respiratory failure, connection to an oxygenator is possible, resulting in an extracorporeal membrane oxygenation system (ECMO). Especially in the pediatric field of assisted circulation, ECMO is widespread and leads to a remarkable improvement in survival rates in these high-risk cases.

The newer TandemHeart paracorporeal centrifugal pump (CardiacAssist, Inc., Pittsburgh, Penn., USA) can easily be implanted via percutaneous cannulation of the groin vessels without a surgical procedure. The inflow cannula is advanced up the femoral vein and through the atrial septum into the left atrium percutaneously.

Axial Flow Pumps
The microaxial blood pump Impella Recover Device
(Impella CardioSystems AG, Aachen, Germany) is a
newer short-term support system for up to 7 days.
Brought through the aortic valve inside the left ventricle
percutaneously, this pump generates flow up to 5 L/min.
Therefore, it can be used as an ideal tool for postcardiotomy
support or myocardial infarction with cardiogenic shock to
establish a rapid unloading of the failing left ventricle.
A paracardiac right-ventricular device (RVMBP) of the Impella family was introduced last year but has since been withdrawn from the European market.
Pulsatile Short-Term Pumps
A dual chamber polyurethane blood sac pump, the
Abiomed BVS 5000i (Abiomed Cardiovascular, Inc.,
Danvers, Mass., USA) is a passively filled, pulsatile short-term assist device for use after postcardiotomy shock. This device can be used for univentricular as well as biventricular support, generating flows up to 6 L/min. Its cost-effectiveness and ease of implantation have led to widespread use, especially for short-term BTR or for bridging to another, more permanent system, the so-called bridge to bridge (BTB). The same company introduced another,
more complex pulsatile, paracorporeal, fully automated
device with pneumatically driven full-to-empty mode: the
AB 5000. Similar to the older paracorporeal long-term
devices, such as Berlin Heart Excor (Berlin Heart Inc.
Berlin, Germany) or the Thoratec PVAD (Thoratec Inc.,
California, USA), the AB 5000 is able to achieve complete unloading of the failing left or right ventricle and has so far received FDA approval for 30 days of support in the USA.
All of these short-term devices have the advantage of
a more or less easy implantation and application. The
main disadvantage of almost every short-term pump is that the patient cannot be mobilized. Only the newer, more costly devices such as the Centrimag or the AB 5000 allow for better mobilization of the individual patient. However, they approach the border of the permanent VADs not only clinically but also, in particular, financially.
Long-Term Circulatory Support
Pulsatile Devices
The first generation of LVADs are electromechanically or
pneumatically controlled mechanical assist systems. They
are used for BTT or DT and generate pulsatile blood flow
up to 10 L/min. Some examples for the permanent use of
VADs are the paracorporeal systems Excor (Berlin Heart,
Germany) (Fig. 1), Thoratec PVAD (Thoratec, California,
USA), and the Medos HIA (Medos Inc., Aachen
Germany). The pump chambers of the Excor and HIA
are offered in different sizes so that pediatric use is possible. Patients treated with these large, bulky devices are
difficult to mobilize, also because of the risk of kinking
the grafts and the large control units. The implantable pulsatile devices described below are placed in a large preperitoneal pocket connected to a percutaneous driveline. The HeartMate XVE (Thoratec Inc.) is the most widely used implantable VAD, with more than 4,000 implantations worldwide. The peculiarity of the HeartMate XVE is its textured inner surface, which promotes neointima formation to reduce the risk of thrombus formation. Because this
device has biological valves, anticoagulation is not necessary. A large amount of clinical experience has been gained
with the Heartmate LVAD. The pioneering REMATCH
trial [4] used this device for DT.
Historically, it is necessary to mention two other systems which were withdrawn from the market in 2005 and
2008, respectively: the LionHeart 2000 LVAD (Arrow International, PA, USA) and the Novacor LVAS (Baxter Healthcare/WorldHeart Inc.). The Novacor is implanted using the same approach as the HeartMate XVE, including the typical connection to a console by a percutaneous driveline. The fully implantable LionHeart was powered by transcutaneous energy transfer, thereby obviating the need for external lines, which are a common source of infection in LVAD recipients.
A pump controller regulating the external power supply was implanted as well. The external power pack, with rechargeable and replaceable batteries, could be removed from the transcutaneous site for a maximum of 30 min. This system achieves unidirectional blood flow with mechanical heart valves and therefore necessitates warfarin or heparin treatment. The system was licensed for
trials in Europe and the USA for long-term support in
patients with end-stage-heart failure. Because of some
major technical failures, for example, fatal fracture of the
blood sac, the device was withdrawn from the market in
2005. The Novacor device was developed in the 1970s.
Its regulatory approval in Europe and the USA for BTT
came in 1994 and 1998, respectively, followed by
a regulatory approval for long-term support in Europe.
More than 1,800 implantations were performed worldwide. The Novacor carries biological valves to achieve unidirectional flow, although, because of the inner structure of this device, systemic anticoagulation is mandatory, as it is with the LionHeart. Here, similar to the technical
failures of the LionHeart, the durability was obviously
very limited and consequently the Novacor LVAD was
withdrawn from the market in 2008.
Circulatory Assist Devices. Figure 1 Excor

Continuous Flow Devices
One of the most promising advances in the field of circulatory assist devices is the development of axial flow
pumps, like the HeartMate II (Thoratec) (Fig. 2), the
MicroMed DeBakey (MicroMed Technology Inc., Houston, TX, USA), the Incor (Berlin Heart), and the Jarvik 2000 (Jarvik Heart Inc., NY, USA) (Table 1). These devices, the so-called second generation of VADs, generate continuous flow via a very small electromagnetically actuated impeller that rotates at high speeds and are able to provide up to 10 L/min of flow. As in the implantable pulsatile devices, the inflow cannula is connected to the LV apex and the outflow graft to the ascending aorta.

Circulatory Assist Devices. Figure 2 HeartMate II

The remarkably small size of these devices allows for an enormous reduction in surgical trauma through a diminishment of the preperitoneal or even intrapericardial pump pocket. This is why use in patients
with a small body surface area is now possible, resulting
in FDA approval for pediatric use of the MicroMed device.
The other systems are now under trial for this indication.
Moreover, these axial pumps generate no relevant noise. Permanent anticoagulation therapy is necessary but, based on early experience, is not initiated until a minimum of 12–24 h after implantation. The unique
design of the Jarvik consists of an impeller, which is placed
in the LV apex directly as a sort of inflow cannula housing
the pump. Therefore, less invasive implantation is possible
via a lateral thoracotomy, with the outflow graft led to the descending aorta, in case a sternotomy should be avoided.
This device operates at fixed rate motor speeds that are set
by the controller at between 8,000 and 12,000 rpm with
an average capacity of 5–7 L/min. Another implantation
feature of this small pump is a titanium pedestal screwed
into the very well-vascularized skull with a transcutaneous connector that attaches to the power cord. The
MicroMed DeBakey VAD is a titanium electromagnetically
actuated axial flow pump with a maximum flow capacity
C
of 10 L/min at 10,000 rpm, but usually is initiated at
8,000 rpm, resulting in a 5–6 L flow per minute. It carries
a special ultrasonic flow probe at the outflow graft
site, which allows for exact flow measurements.
The HeartMate II (Fig. 3) is a newer device; it obtained its approval in Europe a few years ago, and FDA approval for BTT followed in 2009. Fabricated from titanium, it
operates at speeds between 6,000 and 12,000 rpm resulting
in a flow up to 10 L/min in a fixed or automatic operating
mode (http://www.thoratec.com/about-us/media-room/
videos.aspx). An overview is shown in Table 1.

Circulatory Assist Devices. Table 1 Characteristics of the most common continuous flow left ventricular assist devices (LVADs). Status: June 30, 2009. The devices compared are the Terumo DuraHeart (centrifugal), HeartWare HVAD (centrifugal), Jarvik Heart Jarvik 2000 (axial), MicroMed DeBakey HeartAssist (axial), Thoratec HeartMate II (axial), Berlin Heart Incor (axial), and Ventracor VentrAssist (centrifugal; the company went bankrupt in 2009). For each device the table lists the pump type, weight, size, volume, maximum flow (on the order of 5–10 L/min), number of implantations, and CE and FDA (adult and pediatric) approval status.
The recently developed implantable centrifugal circulatory assist devices represent the so-called third generation of implantable LVADs. Examples are the VentrAssist
(Ventracor Inc., Australia), the HVAD (HeartWare Inc., USA), and the DuraHeart (Terumo Inc., Japan). These devices use magnetic technology in which rotating
blades or an impeller is magnetically suspended within
a column of blood, obviating the need for contact-bearing
moving parts.
The DuraHeart is a magnetically suspended centrifugal pump with impeller blades, magnetic bearing, and
a direct motor. Its relatively large volume (200 mL)
requires an implantation pocket, which is clearly bigger
than the ones needed for axial pumps. The DuraHeart
works with speeds of 2,000–3,000 rpm and creates a flow
between 5–6 L/min. The VentrAssist device is a smaller
titanium centrifugal pump with a carbon coating at its
inner surface. It was implanted worldwide in more than
200 patients as an LVAD in CE-marked use and pilot trials, but the company went bankrupt in spring 2009. The
HVAD system was introduced recently and has just gained
CE approval. It has a volume of only 50 mL and is directly
implanted at the surface of the LV apex, allowing an
intrapericardial pocket. The speed range of 2,000–3,000
rpm creates a flow up to 8–10 L/min. All centrifugal VADs
require systemic anticoagulation.
The Total Artificial Heart
Severe failure of both the left and right ventricle of the
human heart sometimes necessitates even more than the
implantation of a paracorporeal BVAD. In selected cases
like structural heart diseases, for example, hypertrophic
cardiomyopathy or complex congenital cardiac diseases
after a large number of operations with mechanical
valve prosthesis inside, the orthotopic positioning of
a totally implantable artificial heart (TAH) is required.
The CardioWest (SynCardia Inc., Tucson, AZ, USA)
is a pneumatically driven orthotopic, implantable
biventricular assist system and at present the only available
TAH. Its rigid pump housing contains dual spherical
polyurethane chambers. The inflow and outflow conduits
are made of Dacron and carry mechanical valve prostheses (Medtronic Inc., USA). The stroke volume is about 70 mL and the CardioWest is able to generate a maximum flow of 10 L/min. Insufficient space inside the patient's thorax is a major problem: implantation requires a minimum BSA of 1.7 m2 or native-heart ventricular volumes of more than 1.5 L. In Europe the pneumatic drivelines are connected to a smaller console, which
allows for a better mobilization of the patient. The CardioWest has received CE and FDA approval for BTT use.
Implantation Technique
A very large variety of surgical implantation techniques are
necessary to accommodate an appropriate function of the
specific device. In most cases cardiac support systems are
implanted for left heart failure, since isolated insufficiency
of the right ventricle is rare. Whereas in the short-term
devices mostly an access to the groin vessels is sufficient;
the devices for permanent support require a median
sternotomy or another adequate access to the left ventricle
and the aorta. The fully heparinized patient is put on CPB
and the apex of the left ventricle is exposed for the insertion of the inflow cannula of the VAD, which is usually
done by beating the heart on pump without cardioplegic
arrest. After having prepared the device pocket in the
preperitoneal or intrapericardial position, the driveline is
tunneled and brought out of the skin in the right upper
quadrant.

Circulatory Assist Devices. Figure 3 External equipment

The LV apex is then opened at the correct position with a special coring knife, the myocardium is removed, and the LV is carefully inspected. Any thrombi have to be
removed carefully and trabecular structures have to be
excised in case they might hinder the free flow to the
inflow cannula. The apex cannula is then fixed to the left
ventricle by 2-0 polypropylene sutures reinforced by felt
pledgets. Once brought into the correct position, the device is connected, and the outflow graft, a Dacron vascular prosthesis, is sutured in an end-to-side anastomosis to the aorta, followed by careful de-airing of the device.
Subsequently, the patient is weaned from CPB while the
VAD is initiated. In this context, the correct position of
the inflow cannula has to be monitored carefully by
transesophageal echocardiography (TEE) to ensure an
unrestricted blood flow to the assist device. In LVAD
implantation, particular attention is then given to the right ventricle and the right atrium to safeguard adequate systolic function and to exclude a persistent PFO, which might have been concealed before LVAD implantation by a high left atrial pressure due to low cardiac output. Protamine is administered and meticulous hemostasis is established before chest drains are placed and the sternum is closed with permanent wires. To minimize trauma
of the device or the outflow graft, a Gore-Tex surgical
membrane can be used to cover these delicate structures
before the sternum is closed. Phosphodiesterase inhibitors, inhaled nitric oxide, and aggressive inotropic support of the right ventricle should be applied very liberally in case of any impaired systolic right ventricular function. Again, TEE is an ideal and essential tool for the effective treatment of a patient after LVAD implantation, in addition to the information obtained from pulmonary artery catheter measurements. After stable conditions have been reached in the operating theatre, seamless, continuous treatment of these patients should always be the main goal, and the same applies to postoperative care in the ICU. Careful monitoring of stable right heart hemodynamics, excellent oxygenation, and proper function of the device, to guarantee sufficient perfusion of all organs, is the main target of the intensive care physician, who should always be alert to drain blood loss and urine output. Well-dosed substitution of blood products such as plasma or platelets should be applied whenever needed. Prophylactic antibiotics should be given for a minimum of 48 h postoperatively.
Outcome of Circulatory Assist Device
Treatment
The Randomized Evaluation of Mechanical Assistance for the Treatment of Congestive Heart Failure (REMATCH) trial
is a landmark in the history of clinical trials in heart failure
[4]. The study included end-stage heart failure patients
who were ineligible for cardiac transplantation and randomized them to either surgical therapy (implantation of a
HeartMate XVE LVAD) or optimal medical treatment. All
patients were in NYHA class IV, had an LV EF <25%, and had either a peak oxygen consumption of <12–14 mL/kg/min
or dependence on inotropes. Within this cohort of critically ill patients, the 1- and 2-year survival rates of 52% and
23% in LVAD recipients were significantly better than the
25% and 8% survival observed in patients treated with
maximum medical therapy [4]. Despite more adverse
events in the LVAD group the survival rate and quality of
life were better in these patients. Later on, this tendency continued, resulting in an improvement of the 2-year survival to 37% in the LVAD group versus 12% in the medical
group (Late REMATCH). Bleeding, infection, and
multiorgan failure were the major cause of early mortality
after LVAD implantation. Long-term mortality was mostly
related to device dysfunction and infectious complications. Sepsis and local infections were the most common causes of morbidity and mortality in LVAD recipients, accounting for 25% of deaths. In the post-REMATCH era, the
1-year survival increased to 56% with an in-hospital mortality of 27% after LVAD surgery [5]. Again the main
causes for death were sepsis, right heart failure, and
multiorgan failure. The Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) database, funded by the US National Heart, Lung, and Blood
Institute (NHLBI), is a new registry for patients who
receive durable FDA-approved mechanical circulatory
support devices for the treatment of advanced heart failure
[6]. The patients’ clinical status before VAD implantation
was classified into seven different INTERMACS levels,
graded from 1, representing the sickest patients in severe cardiogenic shock, to 7, representing advanced NYHA class III. The first report, presented in 2008, shows a further
positive tendency in the development of VAD treatment in
more than 400 patients. BTT as well as DT and BTR patients were included. The overall survival rate was 56% after 1
year, but when stratified by the extent of support, LVAD recipients had a much better survival rate (67%) than the
BVAD cohort (<40%). The main causes of death were
central neurologic events (18% of all deaths), multiorgan
failure (16%), right ventricular failure and arrhythmias
(15%), and infections (8%). By multivariable analysis
the risk factors for early death were INTERMACS level 1,
older age, ascites at the time of implant, higher bilirubin
level, and placement of a BVAD or Total Artificial Heart.
The results of an early European study on the axial flow
HeartMate II device for LVAD support proved far better than the early experiences with pulsatile devices [7]. After 1 year, comparable survival was observed in both the DT (69%) and the BTT (63%) groups of patients. Main causes
of death in this multicentre trial were multiorgan failure
and cerebrovascular accidents. Survival remained stable in this cohort of LVAD recipients beyond 6 months.
This correlates to the findings of the first studies with axial
flow devices in the USA.
Adverse Events (AE)
In the REMATCH trial, perioperative bleeding, non-neurologic bleeding, and neurologic events were the most common complications [4]. Recently, the INTERMACS
data analysis showed comparable results, with bleeding
and infection as the most common adverse events in the
early and late postoperative period [6]. Neurologic events
were most likely in the first 1–2 months after implant.
Device malfunction, formerly the second most frequent cause of death (REMATCH), was relatively uncommon during follow-up, with 84% freedom at
6 months. Moreover, malfunction of the newer axial flow
devices was totally absent in the European HeartMate II
study. The most common adverse events in this trial were
bleeding requiring surgery (21% of all AE), cardiac
arrhythmias (19%), and sepsis (11%), which occurred almost without exception in the early postoperative period (<90 days), whereas driveline and local infections were the most common AEs of the late period [7].
A remarkable reduction of neurologic events was also
notable in the newer data analysis. Right heart failure,
one of the most common AEs after LVAD implantation in the earlier studies, seems to play an increasingly minor role in recent analyses. From an earlier incidence of about 20%, right heart failure after LVAD implantation has been reduced to less than 10% both in the INTERMACS database and in the European HeartMate II study.
Conclusions
Circulatory assist devices have become a major therapeutic option in the treatment of patients with either acute or chronic heart failure. In recent years, long-term circulatory support has made a great deal of progress, and the trends towards better device durability and reduced complication rates will most likely continue with the development of more innovative ventricular assist devices.
References
1. Pagani FD, Aaronson KD (2003) Mechanical devices for temporary support. In: Franco KL, Verrier ED (eds) Advanced therapy in cardiac surgery, 2nd edn. BC Decker, Hamilton, Ontario
2. Hunt SA, Abraham WT, Chin MH et al (2005) ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult—summary article. Circulation 112:1825
3. Aggarwal S, Cheema F, Oz M, Naka Y (2008) Long-term mechanical circulatory support. In: Cohn LH (ed) Cardiac surgery in the adult. McGraw-Hill, New York, pp 1609–1628
4. Rose EA, Gelijns AC, Moskowitz AJ, Heitjan DF, Stevenson LW, Dembitsky W, Long JW, Ascheim DD, Tierney AR, Levitan RG, Watson JT, Meier P (2001) Randomized Evaluation of Mechanical Assistance for the Treatment of Congestive Heart Failure (REMATCH) Study Group. Long-term mechanical left ventricular assistance for end-stage heart failure. N Engl J Med 345:1435–1443
5. Lietz K, Long JW, Kfoury AG, Slaughter MS et al (2007) Outcomes of left ventricular assist device implantation as destination therapy in the post-REMATCH era—implications for patient selection. Circulation 116:497–505
6. Kirklin JK, Naftel DC, Stevenson LW, Kormos RL, Pagani FD, Miller MA, Ulisney K, Young JB (2008) INTERMACS database for durable devices for circulatory support: first annual report. J Heart Lung Transplant 27:1065–1072
7. Strüber M, Sander K, Lahpor J, Ahn H, Litzler PY, Drakos SG, Musumeci F, Schlensak Ch, Friedrich I, Gustafsson R, Oertel F, Leprince P (2008) HeartMate II left ventricular assist device: early European experience. Eur J Cardiothorac Surg 34:289–294
Circulatory Collapse
▶ Shock, Ultrasound Assessment
Cl H2O
▶ Free-Water Clearance
Classification of Pulmonary
Hypertension
Functional classification of pulmonary hypertension
modified after the New York Heart Association functional
classification according to the World Health Organization
1998. Class I: No limitation of physical activity. Ordinary
physical activity does not cause undue dyspnea or fatigue,
chest pain or near syncope. Class II: Slight limitation of physical activity. Patients are comfortable at rest. Ordinary physical activity causes undue dyspnea or fatigue, chest pain or near syncope. Class III: Marked limitation of physical activity. Patients are comfortable at rest. Less than
ordinary physical activity causes undue dyspnea or
fatigue, chest pain or near syncope. Class IV: Inability to
carry out any physical activity without symptoms. These
patients manifest signs of right heart failure. Dyspnea and/
or fatigue may even be present at rest. Discomfort is
increased by any physical activity.
Clearance
The volume of blood or plasma that is completely cleared of a given solute per unit of time.
▶ eGFR, Concept of
Clostridium difficile Infection
▶ Clostridium difficile-Associated Diarrhea
Clenched-Fist Injury
▶ Bite Injuries
Clinical Pulmonary Infection
Score (CPIS)
A clinical score suggested for the diagnosis of VAP, composed of the severity of the radiographic infiltrate, body temperature, tracheal secretions, oxygenation derangement, positivity of endotracheal aspirate cultures, and the white blood cell response.
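A minimal sketch of how such a composite score could be tallied is given below; the entry itself does not state the point values, so the assumption that each component contributes 0–2 points and the commonly cited cutoff of >6 for suspected VAP are illustrative, not part of this definition, and the helper name is hypothetical.

```python
# Illustrative sketch only: each CPIS component is assumed to score 0-2 points,
# and the >6 cutoff for suspected VAP is a commonly cited convention, not part
# of the entry above.
def cpis_total(infiltrate, temperature, secretions, oxygenation, culture, leukocytes):
    """Sum the six component sub-scores (each expected to be 0, 1, or 2)."""
    components = [infiltrate, temperature, secretions, oxygenation, culture, leukocytes]
    if any(not 0 <= c <= 2 for c in components):
        raise ValueError("each component sub-score should be between 0 and 2")
    return sum(components)

score = cpis_total(infiltrate=2, temperature=1, secretions=2,
                   oxygenation=2, culture=1, leukocytes=1)
print(score, "-> VAP suspected" if score > 6 else "-> VAP less likely")
```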
Closed Forequarter Amputation
▶ Scapulothoracic Dissociation
Clostridium difficile-Associated Diarrhea
ANDREW M. MORRIS
Mount Sinai Hospital and University Health Network,
University of Toronto, Toronto, ON, Canada
Synonyms
Antibiotic-associated diarrhea; Clostridium difficile-associated disease; Clostridium difficile infection; Clostridium difficile diarrhea; Pseudomembranous colitis
Definition
Clostridium difficile-associated diarrhea (CDAD) is a
colonic infection caused by the overgrowth of the anaerobic Gram-positive bacillus, C. difficile. Patients may be
asymptomatically colonized, but CDAD severity ranges
from mild watery diarrhea to severe diarrhea with
pseudomembranous colitis. Although C. difficile has various virulence factors, two pro-inflammatory exotoxins
(Toxins A and B) appear to contribute most to the watery
diarrhea [1].
Closed Head Injury (CHI)
▶ Traumatic Brain Injury-Fluid Management

Clostridium botulinum
▶ Biological Terrorism, Botulinum Toxin

Clostridium difficile Diarrhea
▶ Clostridium difficile-Associated Diarrhea

Epidemiology
CDAD is an emerging infectious disease in most
healthcare institutions worldwide. Recently, a more virulent fluoroquinolone-resistant strain, known as the
ribotype 027 (BI/NAP1) strain, has emerged. The prevalence of C. difficile colonization ranges from 7–11% in
acutely ill hospitalized patients to 1–2% in the general
population. An estimated 178,000 cases of nosocomial
CDAD occur in the USA annually, reflecting an incidence
of roughly 50 per 1,000 patient-days or 5 per 1,000 admissions, although there is wide variability in reported rates,
which are rising worldwide. Data on incidence in critical
care units outside an outbreak setting are unclear,
although one study reported a rate of 3.2 per 1,000
patient-days. Transmission is via C. difficile spores, which
can remain on surfaces for prolonged periods and can also
be transmitted directly person-to-person. However,
CDAD usually requires altered fecal flora, which is most
commonly caused by antibiotic use but can also be altered
by chemotherapy, radiation, proton-pump inhibitors,
anti-peristaltic agents, stool softeners, enemas, and nasogastric feeds or drainage.
Prevention
The only effective method of prevention is avoiding (or
minimizing) antimicrobial use. Most antimicrobials reduce
the concentration of healthy fecal flora, allowing overgrowth
of C. difficile. Infection control measures such as hand
washing and barrier precautions clearly reduce transmission from index cases and can avert or halt outbreaks.
Evaluation
CDAD should be considered in all patients with new, unexplained watery diarrhea. In the ICU, certain feeds (especially high-osmotic feeds) and bowel regimens may be the underlying cause of diarrhea, but they may also be contributing factors to CDAD. Diagnosis of CDAD is challenging because of the lack of a highly sensitive and specific test. CDAD is unlikely in patients with fewer than three bowel movements per day, and testing is therefore not advised. When testing is indicated, the best test (>90% sensitive and >97% specific) is a quantitative PCR (qPCR), which gives results in hours. The C. difficile qPCR tests for a gene that codes for toxin B or its regulators, although most laboratories do not perform this test. The more common enzyme immunoassays test for either toxin A or toxins A and B; they are only about 70% sensitive, although specificity and turnaround time are comparable to qPCR. The tissue culture cytotoxicity assay has diagnostic characteristics similar to qPCR, but results are only available after about 48 h, so it is falling out of favor.
Treatment
The most important first step in managing CDAD is to remove predisposing factors such as antimicrobials, proton-pump inhibitors, etc. Many cases of mild disease can be effectively managed with metronidazole 500 mg po (preferred over iv) tid for 10–14 days. More severe or refractory cases often require vancomycin 125 mg qid enterally (oral, nasogastric, or via enema). General surgeons should be consulted early in the course of illness in moderate to severe cases: deaths may occur because of delayed surgery. Relapses occur in 5–10% of cases; management of relapses is beyond the scope of this text.
Prognosis
CDAD carries an overall 1–2% mortality, although there is wide variability in reported mortality, depending on the site of care. Mortality in the ICU setting has recently been reported to be 37%. Population-based mortality of CDAD appears to be rising, with associated mortality in the USA of 5.7 per million population in 1999 and 23.7 per million in 2004. Whether this is due to a higher case fatality, an increasing incidence of disease, or both is uncertain.
Economics
The attributable patient cost of CDAD in the USA ranges from $6,408 to $9,124, costing US hospitals $1.14 to $1.62 billion annually [2].
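A rough consistency check using the figures quoted in this entry (178,000 annual cases and the attributable per-case cost range) reproduces the quoted national burden:

\[ 178{,}000 \times \$6{,}408 \approx \$1.14 \times 10^{9}, \qquad 178{,}000 \times \$9{,}124 \approx \$1.62 \times 10^{9}. \]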
References
1. Poutanen SM, Simor AE (2004) Clostridium difficile-associated diarrhea in adults. CMAJ 171(1):51–58
2. Scott RD II (2009) The direct medical costs of healthcare-associated infections in U.S. hospitals and the benefits of prevention. Department of Health and Human Services, Centers for Disease Control and Prevention

Clostridium difficile-Associated Disease
▶ Clostridium difficile-Associated Diarrhea
Closure Time (PFA)
The closure time, measured with the platelet function analyzer, is an in vitro test of primary hemostasis. The assay measures the time necessary for whole blood to occlude an aperture coated with collagen and adenosine diphosphate (ADP) or collagen and epinephrine while circulated through a cartridge at high shear flow.
CMR
▶ Cardiac Magnetic Resonance Imaging
Cnidaria
▶ Jellyfish Envenomation
CO (Cardiac Output)
▶ MostCare Monitor
Coagulation, Monitoring at the
Bedside
WERNER BAULIG, DONAT R. SPAHN, MICHAEL T. GANTER
Institute of Anesthesiology, University Hospital Zurich,
Zurich, Switzerland
Definition
Bedside coagulation monitoring is useful and essential in
assessing patients’ hemostatic status with minimal time
delays. The primary goal of therapeutic interventions in
the coagulation system is to keep the optimal and individual balance between sufficient hemostasis and prevention
of thrombosis. In severely bleeding patients, early evidence
suggests that treatment directed at aggressive and targeted
hemostatic resuscitation can lead to dramatic reductions
in mortality. For example, by specific and goal-directed treatment guided by transfusion algorithms, coagulopathic patients may be optimized readily, thereby minimizing exposure to blood products, reducing costs, and improving patients’ outcome.
Pre-existing Condition
Point of care (POC) monitoring of blood coagulation at
the patient’s bedside is becoming increasingly important
in the perioperative period to guide both pro- and anticoagulant therapies. This monitoring allows, for example, the clinician to diagnose potential causes of hemorrhage, to guide hemostatic therapies, to predict the risk of bleeding during consecutive surgical procedures, and to identify patients at risk for thrombotic events [1].
Routine laboratory-based coagulation tests (e.g., PT/
INR, aPTT, Fibrinogen) measure clotting times and factors in recalcified plasma after activation with different
coagulation activators. Platelet numbers are given to complete overall coagulation assessment. Although accurate,
standardized, and used for a long time, the value obtained by routine coagulation testing has been questioned in the
perioperative setting because values are measured in
plasma, no information on platelet function (PF) is available, and there is a time delay of at least 45–60 min from
sampling to obtaining the results. POC coagulation
monitoring may overcome several limitations of routine
coagulation testing. Blood is analyzed bedside close to the
patient and not necessarily in the central laboratory. The
coagulation status is assessed in whole blood, better
describing the physiological clot development by letting
the plasmatic coagulation system interact with platelets
and red cells. Furthermore, results are available earlier and
clot development can be visually displayed real-time using
certain devices.
According to their main objective and function, POC
coagulation analyzers can be categorized into (i) techniques analyzing combined plasmatic coagulation, platelet
function, and fibrinolytic system, i.e., viscoelastic techniques, (ii) instruments assessing therapeutic
anticoagulation like the activated clotting time (ACT) or
heparin management devices, and (iii) specific ▶ platelet
function analyzers.
Viscoelastic Coagulation Monitoring
Thrombelastography (TEG®), Rotational
Thrombelastometry (ROTEM®)
Thrombelastography is a method to assess the overall
coagulation function and was first described by Hartert
in 1948. Because the thrombelastograph measures the
shear elasticity of the blood sample, thrombelastography
is sensitive to all interacting cellular and plasmatic components such as coagulation and fibrinolysis. The
thrombelastograph measures and graphically displays the
time until initial fibrin formation, the kinetics of fibrin
formation and clot development, and the ultimate
strength and stability of the fibrin clot as well as fibrinolysis. In the earlier literature, the terms thrombelastography, thrombelastograph, and TEG have been used
generically. However, in 1996, thrombelastograph and
TEG® became a registered trademark of the Haemoscope
Corporation (Niles, IL, USA) and from that time onwards
these terms have been employed to describe the assay
performed using hemoscope instrumentation only. Alternatively, Pentapharm GmbH (Munich, Germany) markets
a modified instrumentation using the terminology rotational thrombelastometry, ROTEM®.
The TEG® (Haemonetics Corp., formerly Haemoscope
Corp, Niles, IL, USA) measures the clot’s physical property
by the use of a stationary cylindrical cup that holds the
blood sample and is oscillated through an angle of 4°45′.
Each rotation cycle lasts 10 s. A pin is suspended in the
blood by a torsion wire and is monitored for motion (Fig. 1,
TEG®). The torque of the rotation cup is transmitted to
the immersed pin only after fibrin–platelet bonding has
linked the cup and pin together. The strength of these
fibrin–platelet bonds affects the magnitude of the pin
motion. Thus, the output is directly related to the strength
of the formed clot. As the clot retracts or lyses, these bonds
are broken and the transfer of cup motion is again diminished. The rotation movement of the pin is converted by
a mechanical-electrical transducer to an electrical signal
finally being displayed as the typical TEG® tracing (Fig. 2,
TEG®). The ROTEM® (tem International GMBH, formerly Pentapharm GmbH, Munich, Germany) technology avoids some limitations of traditional instruments for
thrombelastography, especially the susceptibility to
mechanical shocks. Signal transmission of the pin
suspended in the blood sample is carried out via an optical
detector system, not by a torsion wire and the movement
is initiated from the pin, not from the cup. Furthermore,
the instrument is equipped with an electronic pipette
(Fig. 1, ROTEM®).
▶ TEG®/ROTEM® both measure and graphically display the changes in viscoelasticity at all stages of the
developing and resolving clot (Fig. 2, TEG®/ROTEM®),
i.e., the time until initial fibrin formation (TEG® reaction
time [R]; ROTEM® clotting time [CT]), the kinetics of
fibrin formation and clot development (TEG® kinetics
[K], alpha angle [a]; ROTEM® clot formation time
[CFT], alpha angle [a]), the ultimate strength and stability
of the fibrin clot (TEG® maximum amplitude [MA];
ROTEM® maximum clot firmness [MCF]), and clot lysis
(fibrinolysis). TEG®/ROTEM® are fibrinolysis-sensitive
assays and allow for diagnosis of hyperfibrinolysis in
bleeding patients.
Coagulation, Monitoring at the Bedside. Figure 1 Working principles of viscoelastic point of care (POC) coagulation devices. TEG®: rotating cup with blood sample (1), coagulation activator (2), pin and torsion wire (3), electromechanical transducer (4), data processing (5). ROTEM®: cuvette with blood (1), activator added by pipetting (2), pin and rotating axis (3), electromechanical signal detection via light source and mirror mounted on axis (4), data processing (5). SONOCLOT®: blood sample in cuvette (1), containing activator (2), disposable plastic probe (3), oscillating in blood sample mounted on electromechanical transducer head (4), data processing (5)
Coagulation, Monitoring at the Bedside. Figure 2 Typical TEG®/ROTEM® tracing and Sonoclot Signature. TEG®: R = reaction time, K = kinetics, a = slope between R and K, MA = maximum amplitude, CL = clot lysis. ROTEM®: CT = clotting time, CFT = clot formation time, a = slope of tangent at 2 mm amplitude, MCF = maximal clot firmness, LY = lysis. SONOCLOT®: ACT = activated clotting time, CR = clot rate, PF = platelet function
To determine the fibrinogen influence,
tests can be performed eliminating platelet function by
a GPIIb/IIIa inhibitor (e.g., fib-TEM). This concept has
been proven to work and a good correlation of this modified MA/MCF with fibrinogen levels determined by
Clauss method has been shown. Most common tests for
both technologies are listed in Table 1. The repeatability of
measurements by both devices has been shown to be acceptable,
provided they are performed exactly as outlined in the
user’s manuals.
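The TEG®/ROTEM® parameter correspondences described above can be kept in a small lookup table; the sketch below only restates the mapping given in the text, and the helper function itself is hypothetical.

```python
# Corresponding TEG(R) and ROTEM(R) parameters as described in the text above.
TEG_TO_ROTEM = {
    "R (reaction time)": "CT (clotting time)",
    "K (kinetics)": "CFT (clot formation time)",
    "alpha angle": "alpha angle",
    "MA (maximum amplitude)": "MCF (maximum clot firmness)",
    "CL (clot lysis)": "LY (lysis)",
}

def rotem_equivalent(teg_parameter: str) -> str:
    """Return the ROTEM name corresponding to a TEG parameter (illustrative helper)."""
    return TEG_TO_ROTEM[teg_parameter]

print(rotem_equivalent("MA (maximum amplitude)"))  # -> "MCF (maximum clot firmness)"
```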
Sonoclot Coagulation and Platelet Function
Analyzer (Sonoclot®)
The Sonoclot Analyzer® (Sienco Inc., Arvada, CO) was introduced in 1975 by von Kaulla et al. The
Sonoclot® measurements are based on the detection of
viscoelastic changes of a whole blood or plasma sample.
A hollow probe is immersed into the blood sample and
oscillates vertically in the sample (Fig. 1, Sonoclot®). The
changes in impedance to movement imposed by the developing clot are measured. Different cuvettes with different
coagulation activators/inhibitors are commercially available (Table 1). Normal values for tests run by the
▶ Sonoclot® Analyzer depend largely on the type of
sample (whole blood vs plasma, native vs citrated sample)
and cuvette used.
The Sonoclot® Analyzer provides information on the
entire hemostasis process both in a qualitative graph,
known as the Sonoclot® Signature (Fig. 2, Sonoclot®)
and as quantitative results: the activated clotting time
(ACT), the clot rate (CR), and the platelet function (PF).
Coagulation, Monitoring at the Bedside. Table 1 Commercially available tests for viscoelastic point of care coagulation devices (modified according to [1])

Thrombelastograph hemostasis system (TEG®)
Assay | Activator/inhibitor | Proposed indication
Kaolin | Kaolin | Overall coagulation assessment including platelet function
Heparinase | Kaolin + heparinase | Specific detection of heparin effect (modified kaolin test adding heparinase to inactivate present heparin)
Platelet mapping | ADP, arachidonic acid | Platelet function, monitoring anti-platelet therapy (aspirin, ADP-, GPIIb/IIIa inhibitors)
Native | None | Nonactivated assay; also used to run custom hemostasis tests

Rotational thrombelastometry (ROTEM®)
Assay | Activator/inhibitor | Proposed indication
ex-TEM | TF | Extrinsic pathway; fast assessment of clot formation and fibrinolysis
in-TEM | Contact activator | Intrinsic pathway; assessment of clot formation and fibrin polymerization
fib-TEM | TF + GPIIb/IIIa antagonist | Qualitative assessment of fibrinogen function
ap-TEM | TF + aprotinin | Fibrinolytic pathway; fast detection of fibrinolysis when used together with ex-TEM
hep-TEM | Contact activator + heparinase | Specific detection of heparin (modified in-TEM test adding heparinase to inactivate present heparin)
na-TEM | None | Nonactivated assay; also used to run custom hemostasis tests

Sonoclot® coagulation and platelet function analyzer
Assay | Activator/inhibitor | Proposed indication
SonACT | Celite | High-dose heparin management
kACT | Kaolin | High-dose heparin management
gbACT+ | Glass beads | Overall coagulation and platelet function assessment
H-gbACT+ | Glass beads + heparinase | Overall coagulation and platelet function assessment in presence of heparin; detection of heparin
Native | None | Nonactivated assay; also used to run custom hemostasis tests

ACT = activated clotting time, TF = tissue factor, ADP = adenosine diphosphate, GPIIb/IIIa = glycoprotein IIb/IIIa receptor
The ACT is the time in seconds from the activation of
the sample until the beginning of a fibrin formation.
This onset of clot formation is defined as a certain upward
deflection of the Sonoclot® Signature and is detected
automatically by the machine. Sonoclot®’s ACT corresponds to the conventional ACT measurement (see
below), provided that cuvettes containing a high concentration of typical activators (celite, kaolin) are being used.
The CR, expressed in units/min, is the maximum slope of
the Sonoclot® Signature during initial fibrin polymerization and clot development. PF is reflected by the timing
and quality of the clot retraction. PF is a calculated value,
derived by using an automated numeric integration of
changes in the Sonoclot® Signature after fibrin formation
has completed (see manufacturer’s reference). In order to
obtain reliable results for PF, cuvettes containing glass
beads for specific platelet activation (gbACT+) should be
used. The nominal range of values for the PF goes from 0,
representing no PF (no clot retraction and flat Sonoclot®
Signature after fibrin formation), to approximately 5,
representing strong PF (clot retraction occurs sooner and
is very strong, with clearly defined, sharp peaks in the
Sonoclot® Signature after fibrin formation).
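How the quantitative Sonoclot® values just described could be read off a digitized Signature is sketched below; the onset threshold, sampling interval, sample data, and function name are illustrative assumptions, not the manufacturer's algorithm.

```python
# Minimal sketch: derive ACT (onset of upward deflection) and CR (maximum slope)
# from a digitized clot signal sampled at fixed intervals. Threshold and units are
# illustrative assumptions, not the device's proprietary processing.
def analyze_signature(signal, dt_s=1.0, onset_threshold=5.0):
    """signal: clot-signal samples (arbitrary units); dt_s: seconds between samples."""
    act_s = None
    for i, value in enumerate(signal):
        if value >= onset_threshold:          # first upward deflection above threshold
            act_s = i * dt_s
            break
    slopes = [(b - a) / dt_s for a, b in zip(signal, signal[1:])]
    cr_units_per_min = max(slopes) * 60.0     # clot rate = steepest rise, per minute
    return act_s, cr_units_per_min

act, cr = analyze_signature([0, 1, 2, 6, 15, 30, 48, 60, 66, 70], dt_s=10.0)
print(f"ACT ~ {act} s, CR ~ {cr:.1f} units/min")
```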
Bedside Monitoring of Anticoagulation
Activated Clotting Time
The ACT is a functional test of the intrinsic clotting pathway and has been developed for guiding unfractionated heparin-induced anticoagulation at the bedside, particularly during cardiac surgery, extracorporeal membrane oxygenation (ECMO), and coronary interventions. Originally described by Hattersley in 1966, ACT reflects the amount of time to form a clot by contact activation of the coagulation cascade.
Several ACT instruments are commercially available and ACT measurements can be performed using different coagulation activators, each with unique characteristics and various interactions. Results from different ACT tests cannot be used interchangeably. This variability highlights the importance of establishing appropriate instrument-specific reference values for monitoring anticoagulation.
ACT monitoring of heparinization is not without limitations, and its use has been criticized because of significant variability and the poor correlation with plasma heparin concentrations during cardiopulmonary bypass (CPB). It has been suggested that many factors – patient, operator, and equipment – can alter ACT. Therefore, ACT prolongation during CPB is not necessarily caused by heparin administration alone and may be associated with patient hypothermia, inadequacy of specimen warming, hemodilution, quantitative and qualitative platelet abnormalities, or aprotinin infusion. Furthermore, low factor XII levels, which are found in patients with sepsis and patients undergoing renal replacement therapy, may lead to falsely high ACT values.
Heparin Concentration Measurement
Because of the limitations of ACT in estimating plasma levels of heparin, POC devices have been developed to more accurately measure heparin concentration. The most studied device is the Hepcon HMS Plus Hemostasis Management System (Medtronic, Minneapolis, MN). It calculates heparin doses before initiation of CPB by performing a heparin dose response, measuring heparin concentrations, and calculating protamine doses based on residual heparin levels. A number of clinical studies report that Hepcon-guided anticoagulation results in higher total heparin but lower protamine doses than conventional management and may thereby decrease activation of the coagulation and inflammatory cascade [2]. Results are provided readily; however, higher costs, more complex handling, greater dimensions compared to a conventional ACT device, and the lack of large studies showing a benefit on patient outcome have limited its widespread use so far.
Monitoring Oral Anticoagulants
Several POC coagulation devices have been developed to measure the effects of oral anticoagulants (warfarin therapy) and to provide modified prothrombin time (PT)/INR values. The last-generation devices include the Harmony (Lifescan Inc./Johnson & Johnson, Milpitas, CA) and the INRatio (Hemosense, Inc., Milpitas, CA). Harmony uses thromboplastin as coagulation activator and detects clot formation by light transmission; INRatio uses electrochemical detection of changes in impedance in the blood sample. Results are available immediately with both devices, and correlation with PT/INR performed by conventional laboratory coagulation analyzers was good (R > 0.9). No vein puncture is required and test results are readily available for clinical use, particularly during phases of rapid changes in the coagulation state [1].
Platelet Function Monitoring
Currently, an increasing number of patients are on antiplatelet medication, such as cyclooxygenase-1 (COX-1)
inhibitors, adenosine diphosphate (ADP) antagonists,
and glycoprotein (GP) IIb/IIIa inhibitors. In these
patients, knowledge of residual platelet function (PF)
is highly warranted in order to maintain an optimal
and individual balance between platelet function and
inhibition, i.e., bleeding and thrombosis. Traditional assays, such as turbidimetric platelet aggregometry, are still considered clinical standards of PF testing. Turbidimetric platelet aggregometry is one of the most widely used tests to identify and diagnose PF defects. However, conventional platelet aggregometry is labor intensive, costly, time-consuming, and requires a high degree of experience and expertise to perform and interpret. Another important limitation of this technique is that platelets are tested under relatively low shear conditions and in free solution within platelet-rich plasma, conditions that do not accurately simulate primary hemostasis. Because of these disadvantages of conventional platelet aggregometry, new automated technologies have been developed to measure PF and several techniques can be used at the bedside [3].
Whole Blood Impedance Platelet Aggregometry
The novel impedance aggregometer ▶ Multiplate® (Dynabyte, Munich, Germany) represents a significant progress in platelet aggregometry and avoids several methodological problems of the original turbidimetric platelet aggregometry, especially by using whole blood, disposable test cuvettes, standardized commercially available test reagents, an automated pipetting system, and rapidly available results. Furthermore, this assay has a high sensitivity in detecting effects of acetylsalicylic acid, thienopyridines, and GPIIb/IIIa inhibitors on platelets.
The principle of Multiplate® impedance platelet aggregometry is based on two silver-coated conductive copper electrodes immersed into whole blood and the ability of activated platelets to adhere to the electrode surface. The instrument continuously measures the change of electrical resistance, which is proportional to the amount of platelets attached to the electrodes. The measured impedance values are transformed to arbitrary aggregation units (AU), which are plotted against the time (Fig. 3). Three parameters are provided: aggregation units (AU), velocity (AU/min), and area under the aggregation curve (AUC), where AUC has the highest diagnostic power. The device has five channels; therefore, parallel testing of five blood samples with different platelet activators at the same time is possible.
Multiplate® has some limitations: it requires high sample volumes, test results are not independent of the actual platelet number, and running the tests is time-consuming and expensive. Additionally, as with other platelet function tests, a resting time of 30 min after blood sampling is recommended before running the tests, which may impede immediate detection of platelet dysfunction intraoperatively.
Coagulation, Monitoring at the Bedside. Figure 3 Whole blood impedance platelet aggregometry: Multiplate® tracing. The measured impedance values are transformed to arbitrary aggregation units (AU), which are plotted against the time. Measurements are performed in duplicate (S1, S2) and averaged against each other. Velocity (AU/min), aggregation (AU), and area under the aggregation curve (AUC) (modified according to [5])
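The three Multiplate® readouts described above (aggregation, velocity, and AUC) can be illustrated on a digitized aggregation–time curve; the sketch below uses made-up sample data, and trapezoidal integration is merely an assumption about how the AUC might be approximated, not the vendor's algorithm.

```python
# Sketch: derive aggregation (final AU), velocity (max slope, AU/min) and AUC
# from aggregation units sampled once per minute. Data and method are illustrative.
def multiplate_parameters(au, dt_min=1.0):
    aggregation = au[-1]                                           # AU at end of test
    velocity = max((b - a) / dt_min for a, b in zip(au, au[1:]))   # steepest rise
    auc = sum((a + b) / 2.0 * dt_min for a, b in zip(au, au[1:]))  # trapezoidal rule
    return aggregation, velocity, auc

agg, vel, auc = multiplate_parameters([0, 10, 35, 60, 75, 82, 85])
print(f"aggregation={agg} AU, velocity={vel} AU/min, AUC={auc} AU*min")
```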
VerifyNowTM/Ultegra
The ▶ VerifyNowTM Analyzer (Accumetrics, San Diego,
CA) incorporates the technique of optical platelet
aggregometry. Initially, this technique was distributed as
Ultegra Rapid Platelet Function Analyzer (RPFA). The
original RPFA assay measured agglutination of fibrinogen-coated beads in response to platelet stimulation. Activated platelets stick to the beads with a consecutive
increase in light transmission (Fig. 4). Variation of light
absorbance over time is displayed as platelet aggregation
units. Early clinical investigations yielded conflicting
results and the assay has been modified to the
VerifyNowTM assay, now detecting effects of acetylsalicylic
acid, ADP-, and GPIIb/IIIa antagonists. This assay has
been used, for example, to determine clopidogrel response
in clinical trials and its results correlated well with those of
platelet aggregometry.
VerifyNowTM tests are easy to perform, and only small
sample volumes without necessity of pipetting are
required. The absence of flow conditions and the limited consistency over time in the identification of aspirin-resistant individuals are the limitations of this assay.
Coagulation, Monitoring at the Bedside. Figure 4 Working
principle of the VerifyNowTM/Ultegra device. The VerifyNowTM
assay uses platelet agonists (arachidonic acid [aspirin assay],
adenosine diphosphate [P2Y12 assay], or thrombin receptor
agonist peptide [IIb/IIIa assay]) to activate platelets. As the
platelets are activated and start to aggregate with the
fibrinogen-coated beads light transmission increases, which
will be measured by the light detector. Light source (1),
platelet (2), fibrinogen-coated beads (3), activated platelets
attached to beads (4), whole blood (5), platelet agonist (6),
light detector measuring light transmission (7)
Platelet Function Analyzer (PFA-100®)
The PFA-100® assay (Dade Behring, Schwalbach,
Germany) was clinically introduced in 1985 by Kratzer and Born as a screening test for inherited and acquired platelet disorders, as well as von Willebrand’s
disease. Citrated whole blood is aspirated at high shear
rates through a capillary with a membrane-coated
microaperture (collagen and either epinephrine [COLEPI] or ADP [COL-ADP]). Both shear stress and platelet
agonists lead to attachment, activation, and aggregation of
platelets forming a plug and occluding this microaperture
(Fig. 5). The time taken to occlude the aperture is known
as closure time (CT) and is a function of platelet number
and reactivity, von Willebrand factor activity, and hematocrit. The main advantages of this assay are that it does not
require fibrin formation, provides rapid results, and is
particularly useful in the diagnosis of von Willebrand’s
disease and overall platelet dysfunction. However, to
get valid results, a hematocrit ≥30% and a platelet count ≥100 × 10³/µL are required. Additionally, citrate concentration, blood type, and leukocyte count may interfere with its accuracy.
Coagulation, Monitoring at the Bedside. Figure 5 Platelet function analyzer PFA-100®. Citrated whole blood is aspirated at high shear rates through a capillary (1) with a membrane-coated microaperture (2). The membrane may be coated with collagen and epinephrine (COL-EPI), or collagen and adenosine diphosphate (COL-ADP) to activate platelets (3). The closure time of PFA-100® is the time taken for activated platelets to occlude the membrane
While early reports suggested a high
sensitivity for detection of acetylsalicylic acid by
prolonged PFA-100® COL-EPI closure time in association
with normal values for COL-ADP, more recent investigations cannot confirm these results.
Modified Thrombelastography: Platelet
Mapping
Since conventional TEG®/ROTEM® are not sensitive to
targeted pharmacological platelet inhibition, a more
sophisticated test has been recently developed for the
TEG® to specifically determine platelet function in presence of anti-platelet therapy (modified TEG®, Platelet
Mapping). Briefly, the maximal hemostatic activity of
the blood specimen is first measured by a kaolin-activated
whole blood sample. Then, further measurements are
performed in presence of heparin to eliminate thrombin
activity: reptilase and Factor XIII (Activator F) generate
a cross-linked fibrin clot to isolate the fibrin contribution
to the clot strength. The contribution of the ADP or TxA2
receptors to the clot formation is provided by the addition
of the appropriate agonists, ADP, or arachidonic acid. The
results from these different tests are then compared to each
other and the platelet function is calculated.
Platelet mapping seems to be a suitable procedure for
the assessment of all three classes of anti-platelet agents,
Coagulation, Monitoring at the Bedside
but at present the sensitivity and specificity compared to
laboratory platelet aggregometry has not been determined
in detail. Additionally, the reagents are expensive, multiple
channels are required to run the tests, and well-trained
personnel are required for optimal performance, limiting
its use as POC procedure.
Platelet-Activated Clotting Time
Platelet-activated clotting time (PACT; HemoSTATUS, Medtronic HemoTec, Inc., Parker, CA) is a modified whole blood activated clotting time test (ACT), adding platelet-activating factor (PAF) to the reagent mixture for
detection of platelet responsiveness by shortening the kaolin-activated clotting time in whole blood samples. Until
now, only two studies investigating the correlation to
clinical bleeding in patients undergoing cardiac surgery
have been performed and their results were controversial.
ICHOR/Plateletworks System
This platelet count ratio assay from Helena Laboratories
(Beaumont, TX) simply compares whole blood platelet
count in a control EDTA blood sample with the platelet
count in a similar sample that has been exposed to
a platelet activator. In patients without platelet dysfunction or anti-platelet drug treatment, the presence of the
agonist reduces platelet counts close to zero, due to aggregation of most of the platelets. The findings of recent
studies indicate that adding the agonist ADP to the test
sample appears useful for the assessment of both P2Y12
inhibitors (clopidogrel) and GPIIb/IIIa antagonists. Minimal sample preparation and whole blood processing are
advantages of this assay. The main disadvantage, however,
is the lack of sufficient investigations.
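The platelet count ratio principle just described can be written out explicitly; the percentage formula below is the commonly described way of expressing the result and is an assumption beyond what the text states.

```python
# Sketch: per cent aggregation from a baseline (EDTA) count and an agonist-exposed count.
# In the absence of platelet dysfunction or anti-platelet drugs the activated count
# approaches zero, so aggregation approaches 100%.
def percent_aggregation(baseline_count, activated_count):
    return 100.0 * (baseline_count - activated_count) / baseline_count

print(percent_aggregation(250, 20))    # ~92% aggregation: normal platelet response
print(percent_aggregation(250, 180))   # ~28% aggregation: inhibited platelets
```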
Impact Cone and Plate(Let) Analyzer
The Impact Cone and Plate(let) Analyzer (CPA, DiaMed,
Israel) tests whole blood platelet adhesion and aggregation
under artificial flow conditions. A small amount of whole
blood is exposed to a uniform shear in a spinning cone and
platelet adhesion to the polystyrene wells is automatically
analyzed by an inbuilt microscope. The extent of surface coverage of the plate depends on platelet function, fibrinogen and von Willebrand factor levels, and the availability of the GPIb and GPIIb/IIIa receptors. Test duration is less than 6 min. The addition of arachidonic acid and ADP to the test specimens may assess the effect of acetylsalicylic acid and ADP
antagonists on platelets. The Impact Analyzer is a simple
and rapid whole blood platelet analyzer requiring small
sample volumes. Test results are however dependent
on platelet count and hematocrit. Furthermore, only limited published data are available on its clinical performance so far.
Applications
In patients sustaining severe trauma or undergoing major
surgery, such as cardiac, aortic, and hepatic surgery,
maintaining an adequate coagulation status is essential
besides preserving sufficient blood volume and oxygen
carrying capacity. These patients require sophisticated
and real-time coagulation monitoring to adequately assess
and treat hemostasis based on the underlying cause of
bleeding (e.g., metabolic disorders, hypothermia, lack of
clotting factors, dilutional coagulopathy, platelet dysfunction, hypo-, or hyperfibrinolytic state).
Monitoring Pro-coagulant Therapy
Modern practice of coagulation management is based on
the concept of specific component therapy and requires
rapid diagnosis and monitoring of the pro-coagulant therapy (i.e., clotting times, clot kinetics, and clot strengthening). Fibrinogen is a key coagulation factor (substrate to
form a clot). Fibrinogen levels can be assessed by measuring clot strength (MCF/MA) in the presence of platelet
inhibition by a GPIIb/IIIa inhibitor (e.g., fib-TEM) or by
assessing Sonoclot®’s CR. Fibrinogen substitution should
be considered in a bleeding patient, if MCF levels are lower
than 9 mm in a fib-TEM test. Factor XIII is needed for
cross-linking fibrin, therefore stabilizing the clot, increasing clot strength and resistance to fibrinolysis. There are
reports on patients with unexplained intraoperative bleeding due to decreased factor XIII and subsequent stabilization after substitution.
In order to study thrombin generation, modified
TEG®/ROTEM® parameters (based on the original tracing) have been introduced recently: maximum velocity of
clot formation (maximum rate of thrombus generation,
MaxVel), time to reach MaxVel (time to maximum thrombus generation, tMaxVel), and total thrombus generation
(area under the curve, TTG). These parameters are supposed to be more sensitive to rVIIa than standard TEG®/
ROTEM® parameters and dilute tissue factor should be
used as coagulation activator for best sensitivity.
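Below is a sketch of how MaxVel, tMaxVel, and TTG could be derived from a digitized amplitude tracing; numerical differentiation and trapezoidal integration of the velocity profile are illustrative post-processing choices, and the sample data are made up, so this is not a published device algorithm.

```python
# Sketch: derive MaxVel, tMaxVel and TTG from clot amplitude sampled at fixed intervals.
def thrombus_generation_parameters(amplitude_mm, dt_min=0.5):
    velocity = [(b - a) / dt_min for a, b in zip(amplitude_mm, amplitude_mm[1:])]
    max_vel = max(velocity)                               # MaxVel (mm/min)
    t_max_vel = (velocity.index(max_vel) + 1) * dt_min    # tMaxVel (min)
    ttg = sum((v1 + v2) / 2.0 * dt_min                    # area under velocity curve
              for v1, v2 in zip(velocity, velocity[1:]))
    return max_vel, t_max_vel, ttg

mv, tmv, ttg = thrombus_generation_parameters([0, 0, 2, 10, 25, 40, 50, 56, 60, 62])
print(f"MaxVel={mv} mm/min, tMaxVel={tmv} min, TTG={ttg} mm")
```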
Antifibrinolytic drugs (e.g., tranexamic and epsilon
aminocaproic acid) are used to treat hyperfibrinolysis
and to reduce bleeding and transfusion requirements in
complex surgical procedures. The response to antifibrinolytic therapy may be predicted in vitro in TEG®/ROTEM® with certain
tests already containing an antifibrinolytic agent
(e.g., ap-TEM). Ap-TEM predictive for a good patient
response would then show a significantly improved
initiation/propagation phase compared to ex-TEM and/or disappearance of signs of hyperfibrinolysis. There are no
conclusive studies on monitoring desmopressin (DDAVP)
therapy so far.
During hepatic surgery and particularly orthotopic liver transplantation (OLT), large derangements in the coagulation status make POC coagulation monitoring highly
desirable. Decreased synthesis and clearance of clotting
factors and platelet defects lead to impaired hemostasis
and hyperfibrinolysis [5]. Systemic inflammatory response
syndrome (SIRS), sepsis, and disseminated intravascular
coagulation (DIC) may further complicate a preexisting
coagulopathy. Finally, dramatic hyperfibrinolysis may
occur during the anhepatic phase of OLT and immediately
following organ reperfusion, resulting from accumulation
of tissue plasminogen activator due to inadequate hepatic
clearance, a release of exogenous heparin, and endogenous
heparin-like substances, as well as an overt activation of
the complement system. In addition to the hemorrhagic
risk associated with hepatic surgery and OLT, hypercoagulability and thrombotic complications have been described in the postoperative period, and these can adequately be assessed with TEG®/ROTEM®.
Monitoring Anticoagulant Therapy
The complex process of anticoagulation with heparin for
cardiopulmonary bypass (CPB), antagonism with protamine, and postoperative hemostasis therapy in patients
undergoing cardiac surgery cannot be performed without
careful and accurate bedside coagulation monitoring.
ACT devices and the Sonoclot® Analyzer have been used to guide heparin management for CPB by measuring the activated clotting time (ACT), and their accuracy and performance have been shown to be comparable. Furthermore, the Sonoclot® Analyzer has been shown to reliably detect pharmacological GPIIb/IIIa inhibition and has been used successfully to assess the coagulation status and platelet function in patients undergoing cardiac surgery [6].
Viscoelastic POC coagulation devices have been
applied, with limited success, to predict excessive bleeding
after CPB. However, large prospective and retrospective
studies have demonstrated a significant decrease in perioperative and overall transfusion requirement if hemostasis management was guided by TEG®/ROTEM®-based
algorithms.
To detect non-heparin-related hemostatic problems
even in presence of large amounts of heparin during
CPB, tests with heparinase have been developed for each
instrument (Table 1), and algorithms based on heparinase-modified TEG® resulted in a significant reduction in the use of hemostatic products.
Additionally, perioperative administration of drugs
with specific anti-platelet activity theoretically requires
specific platelet function monitoring at the bedside to
guarantee optimal hemostatic management. However,
the current commercially available platelet function POC
devices are of limited use since these devices often require
frequent quality controls and well-trained personnel to
run the tests accurately, are time consuming, and expensive. Furthermore, large studies showing the reliability and
clinical usability are lacking for most of these POC platelet
analyzers.
Monitoring Hypercoagulability and
Thrombosis
Recognized risk factors for thrombosis are generally
related to one or more elements of Virchow’s triad (stasis,
vessel injury, and hypercoagulability). Major surgery has
been shown to induce a hypercoagulable state in the postoperative period and this hypercoagulability has been
implicated in the pathogenesis of postoperative thrombotic complications, including deep vein thrombosis
(DVT), pulmonary embolism (PE), myocardial infarction
(MI), ischemic stroke, and vascular graft thrombosis.
Identifying hypercoagulability with conventional nonviscoelastic laboratory tests is difficult unless the fibrinogen concentration or platelet count is markedly increased.
However, hypercoagulability is readily diagnosed by
viscoelastic POC coagulation analyzers and TEG®/
ROTEM® have been increasingly used in the assessment
of postoperative hypercoagulability for a variety of surgical procedures. Hypercoagulability is diagnosed if the R/CT time is short and the MA/MCF is increased (exceeding 65–70 mm) [1].
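The rule of thumb just quoted can be expressed as a simple check; the MA/MCF cutoff follows the text, whereas the short-R/CT cutoff below is only an illustrative placeholder, since normal ranges are device- and activator-specific.

```python
# Sketch: flag a hypercoagulable TEG/ROTEM pattern. The MA/MCF cutoff follows the text
# (>65-70 mm); the short-R/CT cutoff of 4 min is a placeholder, since reference ranges
# depend on the device and activator used.
def hypercoagulable(r_or_ct_min, ma_or_mcf_mm, short_clotting_time_min=4.0):
    return r_or_ct_min < short_clotting_time_min and ma_or_mcf_mm > 70.0

print(hypercoagulable(r_or_ct_min=3.0, ma_or_mcf_mm=74.0))  # True: short R/CT, high MA/MCF
print(hypercoagulable(r_or_ct_min=7.5, ma_or_mcf_mm=60.0))  # False
```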
References
1. Ganter MT, Hofer CK (2008) Coagulation monitoring: current techniques and clinical use of viscoelastic point-of-care coagulation devices. Anesth Analg 106:1366–1375
2. Aziz KA, Masood O, Hoschtitzky JA, Ronald A (2006) Does use of the Hepcon point-of-care coagulation monitor to optimise heparin and protamine dosage for cardiopulmonary bypass decrease bleeding and blood and blood product requirements in adult patients undergoing cardiac surgery? Interact Cardiovasc Thorac Surg 5:469–482
3. Michelson AD (2009) Methods for the measurement of platelet function. Am J Cardiol 103:20A–26A
4. Heindl B, Spannagl M (2008) Gerinnungsmanagement beim perioperativen Blutungsnotfall. Uni-Med Verlag, Bremen, 1. Auflage, p 57
5. Dickinson KJ, Troxler M, Homer-Vanniasinkam S (2008) The surgical application of point-of-care haemostasis and platelet function testing. Br J Surg 95:1317–1330
6. Gibbs NM (2009) Point-of-care assessment of anti-platelet agents in the perioperative period: a review. Anaesth Intensive Care 37:354–369
Coagulopathy
JEFFRY L. KASHUK
Division of Trauma, Acute Care and Critical Care Surgery
and Section of Acute Care Surgery, Penn State Hershey
Medical Center, Hershey, PA, USA
Synonyms
Acute coagulopathy of trauma; posttraumatic DIC
Definition
Hemorrhagic shock leading to postinjury coagulopathy accounts for approximately half of deaths worldwide among patients arriving at the hospital with acute injury. This
death rate has improved only marginally over the past
25 years despite the widespread adoption of damage control techniques. Accordingly, postinjury coagulopathy,
defined as continued hemorrhage and ooze despite appropriate surgical control of the bleeding site, remains the
main challenge for improved outcome in this critically
injured cohort.
Previous studies have shown that among patients
presenting with massive acute blood loss, the majority
succumb to refractory coagulopathy despite surgical control of their bleeding. Although the entity has been recognized for over 40 years, the pathogenesis of associated
coagulation abnormalities and appropriate treatment
has remained a matter of debate. Contributing factors
to the “bloody vicious cycle,” proposed by our group
over 25 years ago [1], focused on acidosis, hypothermia,
and dilutional effects from excess crystalloid.
Recent evidence, however, suggests that coagulopathy
exists very early after injury and that the condition is
initially independent of clotting factor deficiency, as over
one third of multiply injured patients are coagulopathic by
conventional laboratory assessment on arrival to the emergency department. The fact that this subset of patients also
has an increased incidence of subsequent multiple organ
failure (MOF) and death underscores the importance of
understanding the pathogenesis of early postinjury
coagulopathy. Brohi and Cohen [2] have suggested that
the mechanism of acute endogenous coagulopathy is
mediated by the thrombomodulin pathway via activated
protein C, leading to increased fibrinolysis. Such a process
may be teleologically protective by inducing an “auto-anticoagulation” state that could potentially protect critical tissue beds in the circulation from thrombosis in the
face of an activated coagulation system responding to
systemic shock and tissue factor release.
Treatment
A uniform approach to management of postinjury
coagulopathy remains a substantial challenge, due to the
fact that hemostasis represents a fusion of multiple
dynamic reactions with complex interactions of thrombin,
fibrinogen, platelets, other protein clotting factors, Ca2+,
and endothelium. Furthermore, the contributions of tissue factor release modified by hypothermia and acidosis in
the development of early acute coagulopathy appear
important, and this process may be initiated by either
endothelial-based tissue factor or collagen pathways in
the setting of systemic shock [3]. Our updated “bloody
vicious cycle” [4] emphasizes the fact that early postinjury
coagulopathy (“acute endogenous coagulopathy”) occurs
very soon after injury and is unrelated to clotting factor
deficiency and thus resistant to factor replacement.
Rather, this injury complex is triggered by cellular
ischemia and exposed tissue factor, activating endothelium well before clotting factor depletion occurs. However,
with continued blood loss and clot formation in tissue,
factor depletion ultimately occurs, leading to a “systemic
coagulopathy,” which unquestionably requires factor repletion to restore coagulation homeostasis. Regardless of the
mechanisms involved, current clinical massive transfusion
protocols promoting “damage control resuscitation,” i.e., pre-emptive transfusion of plasma, platelets, and fibrinogen, appropriately represent an initial attempt to replete
substrate for the coagulation system. But appropriate continued use of these expensive, limited resources with potential untoward effects mandates rapid assessment of the
patient’s response to the administration of blood components via real-time assessment of coagulation function.
Strategies for Blood Component
Replacement
Traditionally, fresh frozen plasma (FFP) is prepared by
isolating the plasma from the cellular components, via
centrifugation of whole blood within 6–8 h of collection.
However, with the advent of apheresis methods little platelet-poor plasma is made and most FFP is platelet-rich
plasma, which is then frozen. Some plasma (especially
AB plasma) is collected by apheresis and many centers
use thawed plasma, often referred to as FP24. Regardless
of the plasma source, the hemostatic activity of the various
coagulation factors can be maintained for long periods of
time when frozen; however, upon thawing the concentrations of the various components decrease with the most
significant factors being V and VIII. In the injured patient
requiring factor replacement, the conversion of prothrombin to thrombin requires the coagulation factors XII,
XI, IX, and VIII, along with activated factors X and V.
C
578
C
Coagulopathy
Thus, the initial management of postinjury coagulopathy
requires the administration of thawed fresh frozen plasma
(FFP), which contains the above-mentioned coagulation
factors and up to 400 mg of fibrinogen. Red blood cell
concentrates contain minimal amounts of plasma and
coagulation factors.
Consequently, isolated administration of RBC transfusion in the absence of plasma will further potentiate
postinjury coagulopathy because of its limited hemostatic potential. The exact dosing and timing of FFP
administration is one of the most widely debated topics
in trauma.
The “evidence-based” European guidelines for the management of bleeding in major trauma recommend a dose of 10–15 mL/kg of FFP in patients with massive bleeding complicated by coagulopathy, defined as INR >1.5,
although these guidelines readily recognize a lack of prospective data. A significant drawback of this approach is the
time discrepancy related to the assessment of coagulation
parameters and the coagulation status at the time when
laboratory values become available. Based on this notion,
current US protocols have recommended the pre-emptive
substitution of plasma by a standardized ratio of FFP
to RBC.
We noted that >85% of transfusions were accomplished within 6 h postinjury [4]. Accordingly, we have
focused on this narrower time frame for assessing the
effects of resuscitation strategies. Furthermore, our results
suggested that the survival threshold appeared to be in the
range of 1:2–1:3 of FFP to RBC.
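To put the guideline dose and the ratio-based strategy into numbers for a 70 kg patient (the assumption that one unit of FFP is roughly 250 mL is for illustration only and is not taken from this text):

\[ 70\ \mathrm{kg} \times (10\text{–}15\ \mathrm{mL/kg}) = 700\text{–}1{,}050\ \mathrm{mL} \approx 3\text{–}4\ \text{units of FFP}; \quad \text{an FFP:RBC ratio of } 1{:}2 \text{ corresponds to 5 units of FFP per 10 units of RBC.} \]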
Platelet concentrates have traditionally been prepared by pooling platelets obtained through centrifugation of individual units of whole blood. Currently, apheresis
or “single-donor” collections result in fewer donor
exposures for a given dose of platelets. Furthermore,
apheresis platelets contain between 210 and 250 mL of
donor plasma, although clotting factors that are present
will diminish rapidly at typical storage temperatures
(20–24°C). Clearly, the lack of an accurate assessment of
platelet function, as opposed to platelet count, appears to
be a significant limiting factor. Thus, the relationship of
platelet count to hemostasis and the contribution of
the platelet to formation of a stable clot in the injured
patient remain largely unknown. The complex relationship of thrombin generation to platelet activation requires
dynamic evaluation of clot function, as opposed to static
measurements of platelet count, or older methods of clot
assessment, such as the bleeding time, which is of no use in
the trauma setting. Accordingly, there is no direct evidence
to support an absolute trigger for platelet transfusions in
trauma.
While the “classic” threshold for platelet transfusion
has been 50 K/mm3, a higher target level at 100 K/mm3 has
been suggested for multiply injured patients and patients
with massive hemorrhage. A pool of four to eight platelet
concentrates, or one single-donor platelet apheresis unit,
have been suggested to provide adequate hemostasis
related to thrombocytopenia in bleeding patients, increasing the platelet count by 30–50 K/mm3. Similar to plasma
and packed red cell administration, platelet transfusion is
also associated with immunological complications, with
a reported incidence of >200 per 100,000 transfused
patients. Based on the fact that platelet counts >100 × 10⁹/L are unlikely to contribute to coagulopathy, routine
platelet administration in this patient cohort appears
unjustified at this time.
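As a simple arithmetic illustration of the increment quoted above (a sketch only, not a transfusion trigger), the following estimates the expected post-transfusion platelet count from one pooled or apheresis dose; the 40 K/mm3 midpoint is an assumption drawn from the 30–50 K/mm3 range.

```python
def estimated_post_transfusion_count(current_k_per_mm3: float,
                                     increment_k_per_mm3: float = 40.0) -> float:
    """Estimate platelet count (K/mm3) after one pooled or single-donor apheresis dose."""
    return current_k_per_mm3 + increment_k_per_mm3

# Example: a count of 55 K/mm3 would be expected to rise to roughly 95 K/mm3,
# still short of the 100 K/mm3 target suggested for massive hemorrhage.
print(estimated_post_transfusion_count(55.0))  # -> 95.0
```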
Cryoprecipitate is the cold insoluble fraction formed
when FFP is thawed at 4°C. “Cryo” is rich in factors VIII,
XIII, VWF, and fibrinogen. Generally, fibrinogen levels
greater than 50 mg/dL have been considered sufficient to
support physiologic hemostasis. Although recent reports
have suggested that fibrinogen should be replaced early in
coagulopathic trauma patients with hypofibrinogenemia,
none have recommended pre-emptive administration.
Many guidelines recommend a replacement threshold
for plasma fibrinogen levels <100 mg/dL (1 g/L), using
either fibrinogen concentrate (3–4 g) or cryoprecipitate
(50 mg/kg or 15–20 units). It is often underappreciated,
however, that FFP, pooled platelets, and even packed red
blood cells contain fibrinogen. Accordingly, evaluation of
plasma fibrinogen levels after administration of component therapy with FFP and platelets during massive resuscitation may avoid unnecessary use of cryoprecipitate.
Four units of FFP contain approximately 1,500 mg of
fibrinogen, equivalent to one pooled cryoprecipitate
pack (1,400 mg). A pooled ten pack of platelets contains
approximately 300 mg of fibrinogen. Currently there is no
scientific evidence available to support pre-emptive fibrinogen replacement in patients at risk for postinjury
coagulopathy.
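The per-component figures above lend themselves to a quick estimate of the fibrinogen already delivered during component therapy; the sketch below is illustrative only and simply totals the approximate values quoted in this section.

```python
# Approximate fibrinogen content (mg) per component, taken from the figures above:
# four units of FFP ~1,500 mg (~375 mg per unit), one pooled cryoprecipitate pack
# ~1,400 mg, and a pooled ten-pack of platelets ~300 mg. Illustrative estimates only.
FIBRINOGEN_MG = {"ffp_unit": 375, "cryo_pooled_pack": 1400, "platelet_ten_pack": 300}

def fibrinogen_delivered(ffp_units: int = 0, cryo_packs: int = 0,
                         platelet_ten_packs: int = 0) -> int:
    """Total fibrinogen (mg) delivered by the listed blood components."""
    return (ffp_units * FIBRINOGEN_MG["ffp_unit"]
            + cryo_packs * FIBRINOGEN_MG["cryo_pooled_pack"]
            + platelet_ten_packs * FIBRINOGEN_MG["platelet_ten_pack"])

# Example: 4 units of FFP plus one platelet ten-pack already supply ~1,800 mg,
# which may make additional cryoprecipitate unnecessary.
print(fibrinogen_delivered(ffp_units=4, platelet_ten_packs=1))  # -> 1800
```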
Thrombelastography
The complexity of the coagulation process and the current
evolving understanding of the fundamental mechanisms
driving postinjury coagulopathy underscore the lack of
available evidence-based studies linking coagulation
with mortality. Rapid, real-time functional assessment of
coagulation function appears imperative to guide goal-directed therapy of specifically identified coagulation
abnormalities.
Recent experience with thrombelastography in our
institution [5] suggests that this technology may provide
a real-time viscoelastic analysis of the blood clotting process, and could serve as the template for clinical applications of the cell-based model of coagulation. Subsequent
treatment protocols could then be tailored based on specific evaluation of clot formation as a representative assay
of the coagulation process.
Whole blood (0.35 mL) is placed in a rotating
metal cuvette heated to 37°C. A piston is suspended in
the sample, and the rotational motion is transferred to
the piston as fibrin strands form between the wall of the
cuvette and the piston. An electronic amplification system
allows for the characteristic tracing to be recorded (see
Fig. 1).
Thrombelastography (TEG) assesses clot strength
from the time of initial fibrin formation, to clot retraction,
ending in fibrinolysis. Of significance, TEG is the only
single test that can provide information on the balance
between two important and opposing components of
coagulation, namely thrombosis and lysis, while the battery of traditional coagulation tests, which include bleeding time, prothrombin time (PT), partial thromboplastin
time (PTT), thrombin time, fibrinogen levels, factor
assays, platelet counts, and functional assays are based on
isolated, static end points. Furthermore, TEG takes into
account the interaction of the entire clotting cascade and
platelet function in whole blood. The PT is limited as
a measure of only the extrinsic clotting system, which
includes activation of factor VIIa, Xa, and IIa, while the
PTT test is limited by enzymatic reactions in the intrinsic
system, including the activation of factor XIIa, XIa, IXa,
and IIa. Furthermore, it is well known that hypothermia
affects various aspects of the coagulation process and leads
to functional coagulation abnormalities. Platelet dysfunction is directly influenced by concentrations of thrombin
and fibrinogen, and previous work in our laboratory and
by others has demonstrated platelet dysfunction related to
hypothermia, acidosis, and hypocalcemia.
Rapid thromboelastography (r-TEG) differs from conventional TEG because tissue factor is added to the whole
blood specimen, resulting in a rapid reaction and subsequent analysis. Given the importance of rapid, real-time
assessment of coagulation function in trauma, r-TEG
appears to be ideal for this purpose. Our recent studies
with this technique suggest that a reduction of blood
product use may be accomplished [5]. Furthermore, an
important aspect of such monitoring is that the results are
available point of care (POC), transmitted directly to the
operating room computer screens within minutes,
enabling prompt resuscitation strategies based on the
r-TEG results.
[Figure 1 schematic: (a) the thrombelastography apparatus – a cup containing 0.36 mL of clotted whole blood, a pin suspended by a torsion wire, and a heating element, sensor, and controller; (b) a representative tracing of amplitude (mm) versus time (s), annotated with the enzymatic phase (TEG-ACT), fibrinogen (K, α), platelets (MA), and thrombolysis (LY30, EPL).]
Coagulopathy. Figure 1 Technique of Thrombelastography (reprinted with permission from Hemoscope Corporation,
Niles, IL). (a) A torsion wire suspending a pin is immersed in a cuvette filled with blood. A clot forms while the cuvette is rotated
45 degrees, causing the pin to rotate depending on the clot strength. A signal is then discharged to the transducer that reflects
the continuity of the clotting process. The subsequent tracing (b) corresponds to the entire coagulation process from thrombin
generation to fibrinolysis. The R value, which is recorded as TEG-ACT in the rapid TEG specimen, is a reflection of enzymatic
clotting factor activation. The K value is the interval from the TEG-ACT to a fixed level of clot firmness, reflecting thrombin’s
cleavage of soluble fibrinogen. The α is the angle between the tangent line drawn from the horizontal base line to the beginning
of the cross-linking process. The MA, or maximum amplitude, measures the end result of maximal platelet-fibrin interaction, and
the LY 30 is the percent lysis which occurs at 30 minutes from the initiation of the process, which is also calculated as the EPL, or
estimated percent lysis
The various components of the r-TEG tracing are
depicted in Fig. 1. The r value represents initial thrombin
generation and is a reflection of enzymatic clotting factor
activation. It is recorded as TEG-ACT for the r-TEG assay,
which includes tissue factor.
K is the interval measured from the TEG-ACT to
a fixed level of clot firmness or the point that the amplitude of the tracing reaches 20 mm; this reflects thrombin’s
ability to cleave soluble fibrinogen. The α is the angle
between the tangent line drawn from the base horizontal
line to the beginning of the cross-linking process, measured in degrees, and is affected primarily by the rate of
thrombin generation, which directly influences the conversion of fibrinogen to fibrin; thus the higher the angle,
the greater the rate of clot formation. The maximum
amplitude (MA) measures the maximum amplitude, and
is the end result of maximal platelet–fibrin interaction via
the GPIIb-IIIa receptors, which simulates the end product
the GPIIb-IIIa receptors, which simulates the end product of coagulation via the platelet plug. G is a computer-generated value reflecting the complete strength of the clot from the initial fibrin burst through fibrinolysis and is calculated from A (amplitude), which begins at the bifurcation of the tracing. This is based on a curvilinear relationship: G = (5,000 × A)/(100 − A).
Conceptually, G is the best measure of clot strength
because it reflects the contributions of the enzymatic and
platelet components of hemostasis. Normal coagulability
is defined as G between 5.3 and 12.4 dynes/cm²
(Haemoscope Corporation, Niles, IL). The r-TEG tracing
represents a global analysis of hemostatic function from
initial thrombin generation to clot lysis.
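To make the curvilinear relationship concrete, the minimal sketch below evaluates G for a given amplitude; the example amplitude of 60 mm, and the reading of the quoted 5.3–12.4 normal range as thousands of dynes/cm², are assumptions for illustration, not values from the text.

```python
def clot_strength_g(amplitude_mm: float) -> float:
    """G = (5,000 x A) / (100 - A), with A the TEG amplitude in millimetres."""
    if not 0 <= amplitude_mm < 100:
        raise ValueError("amplitude must lie between 0 and 100 mm")
    return (5000.0 * amplitude_mm) / (100.0 - amplitude_mm)

# Example: an MA of 60 mm gives G = 5,000 x 60 / 40 = 7,500 dynes/cm2,
# i.e., 7.5 on the thousands scale on which the 5.3-12.4 normal range is quoted
# (an interpretive assumption on our part).
print(clot_strength_g(60.0))  # -> 7500.0
```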
Component Blood Product Therapy Guided
by Rapid TEG (r-TEG)
Transfusion therapy guided by r-TEG has become an
integral part of resuscitation in our institution. Using
this technology, a variety of coagulation abnormalities
have been noted, which in the past would have been
overlooked. The various r-TEG values, being derived
from a single measurement of whole blood coagulation,
are not independent measurements, but a continuum of
blood coagulation with interactions between all components. For instance, thrombin liberates fibrinopeptides
from fibrinogen, allowing association with other fibrinogen molecules for “soluble fibrin” and subsequently
thrombin-activated factor XIII converts “soluble” into
“cross-linked” fibrin. Furthermore, thrombin affects
platelet function due to combined effects with factor
VIII and von Willebrand factor. In contrast, routine laboratory coagulation tests represent variables that cannot
always be compared to r-TEG by simple linear association. Our current protocol of component transfusion
therapy emphasizes goal-directed treatment based on
r-TEG findings, with the rapid availability of sufficient
FFP to provide a final ratio in the range of 1:2–1:3 of
FFP to packed red blood cells. Goal-directed therapy
enables accurate, stepwise correction of coagulation dysfunction by comparative assessment of the r-TEG tracings generated. Primary fibrinolysis has a distinctive
tracing, which should prompt treatment with epsilon-aminocaproic acid (Amicar), a lysine analogue which binds reversibly to the kringle domain of the zymogen
plasminogen, preventing its activation to plasmin, which
can therefore not split fibrin. Furthermore, we have
observed post-fibrinolysis consumptive coagulopathy,
which represents diffuse clotting factor deficiency secondary to massive consumption of factors after fibrinolysis. This severe deficit of thrombin may be an indication
for r-VIIa, and we have noted rapid improvement with
normalization of r-TEG patterns after such treatment.
Platelet dysfunction is evident as a narrowed maximum amplitude (MA) and decreased clot strength (G value), and the impact of fibrinogen is readily detected on r-TEG, as expressed by the angle and K value. r-TEG may allow
for improved resuscitation based on real-time coagulation monitoring. The potential benefits of such an
approach include (1) reduction of transfusion volumes
via specific, goal-directed treatment of identifiable coagulation abnormalities, (2) earlier correction of coagulation
abnormalities with more efficient restoration of physiological homeostasis, (3) improved survival in the acute
hemorrhagic phase due to improved hemostasis from
correction of coagulopathy, and (4) improved outcomes
in the later phase due to attenuation of immunoinflammatory complications, including adult respiratory
distress syndrome (ARDS) and multiple organ failure (MOF).
References
1. Kashuk J, Moore EE, Milikan JS et al (1982) Major abdominal vascular trauma – a unified approach. J Trauma 22:672
2. Brohi K, Cohen MJ, Ganter MT et al (2008) Acute coagulopathy of trauma: hypoperfusion induces systemic anticoagulation and hyperfibrinolysis. J Trauma 64:1211–1217
3. Furie B, Furie BC (2008) Mechanisms of thrombus formation. N Engl J Med 359:938–949
4. Kashuk JL, Moore EE, Johnson JL et al (2008) Postinjury life threatening coagulopathy: is 1:1 fresh frozen plasma: packed red blood cells the answer? J Trauma 65:261–270
5. Kashuk JL, Moore EE, Wohlauer M et al (2009) Point of care rapid thrombelastography improves management of life threatening postinjury coagulopathy. J Trauma (in press)
Cocaine
JUDD E. HOLLANDER
Department of Emergency Medicine, University of
Pennsylvania, Philadelphia, PA, USA
Synonyms
Cocaine toxicity; Stimulant toxicity
Definition
Medical complications temporally associated with cocaine
use may occur in many different organ systems.
Most severe cocaine-related toxicity and deaths
follow intense sympathetic stimulation (e.g., tachycardia,
hypertension, dilated pupils, and increased psychomotor
activity). Increased psychomotor activity generates heat
production, which can lead to severe hyperthermia and
rhabdomyolysis.
Cocaine-associated cardiovascular effects are common. Myocardial infarction (MI) due to cocaine occurs
in approximately 6% of patients presenting with cocaine-associated chest pain [1], and the risk of MI is increased 24-fold in the hour after cocaine use. In patients aged 18–45 years, 25% of MIs are attributed to cocaine use, and cocaine-associated MI occurs most commonly in patients without large cocaine exposures. Cardiac
conduction disturbances (e.g., prolonged QRS and QTc)
and cardiac dysrhythmias (e.g., sinus tachycardia,
atrial fibrillation/flutter, supraventricular tachycardias,
idioventricular rhythms, ventricular tachycardia, and
ventricular fibrillation) may occur after cocaine use.
The neurologic effects are varied. Altered mental status
and seizures are typically short-lived and without serious sequelae, but serious conditions such as cerebral infarction,
intracerebral bleeding, subarachnoid hemorrhage, transient ischemic attacks, and spinal infarction also occur.
Cocaine is associated with a sevenfold increased risk of
stroke in women.
Pulmonary complications of cocaine include asthma
exacerbation, pneumothorax, pneumomediastinum,
noncardiogenic pulmonary edema, alveolar hemorrhage,
pulmonary infarction, pulmonary artery hypertrophy,
and acute respiratory failure. The inhalation of cocaine
is typically associated with deep Valsalva maneuvers
to maximize drug delivery and can cause pneumothorax,
pneumomediastinum, and noncardiogenic pulmonary
edema.
The intestinal vascular system is particularly sensitive
to cocaine effects because the intestinal walls have a wide distribution of alpha-adrenergic receptors, which can result in acute intestinal infarction.
Patients who present after ingesting packets filled
with cocaine are “body packers” or “body stuffers.” Body
packers swallow carefully prepared condom or latex
packets filled with large quantities of highly purified
cocaine to smuggle it into the country. Body stuffers are
typically small-time drug dealers who swallow packets of
cocaine while avoiding police. Toxicity occurs when
cocaine leaks from the ingested packets. The most severe
manifestations of cocaine toxicity are seen in body packers
carrying large quantities of cocaine who have dehiscence
of a package with a large amount of cocaine.
Chronic cocaine use can predispose patients to other
medical conditions. Chronic users develop left ventricular
hypertrophy that can lead eventually to a dilated cardiomyopathy and heart failure. This is in contrast to the acute
cardiomyopathy from cocaine that appears to have
a reversible component after cessation of cocaine use.
Chronic severe cocaine users can present with lethargy
and a depressed mental status that is a diagnosis of exclusion (cocaine washout syndrome). This self-limited syndrome usually abates within 24 h but can last for several
days and is thought to result from excessive cocaine usage
that depletes essential neurotransmitters.
Treatment
The initial management of cocaine-toxic patients should
focus on airway, breathing, and circulation. Treatments
are directed at a specific sign, symptom, or organ system
affected and are summarized in Table 1.
Sympathomimetic Toxidrome/Agitation
Patients with sympathetic excess and psychomotor agitation are at risk for hyperthermia and rhabdomyolysis.
Management focuses on lowering body temperature, halting further muscle damage and heat production, and
ensuring good urinary output. The primary agents used
for muscle relaxation and control of agitation are benzodiazepines. Doses beyond those typically used for patients
without cocaine intoxication may be required. Antipsychotic agents are useful in mild cases, but their safety in
severe cocaine-induced agitation is not clear. Elevations in
core body temperatures should be treated aggressively
with iced water baths or cool water mist with fans.
Some cases of severe muscle overactivity may require
general anesthesia with nondepolarizing neuromuscular
blockade. Nondepolarizing agents are preferred over succinylcholine, because succinylcholine may increase the
risk of hyperkalemia in patients with cocaine-induced
rhabdomyolysis. Plasma cholinesterase metabolizes both succinylcholine and cocaine; therefore, prolonged clinical effects of either or both agents might occur when both are used.

Cocaine. Table 1 Treatment summary for cocaine-related medical conditions

Cardiovascular
  Dysrhythmias
    Sinus tachycardia: Observation; Oxygen; Diazepam or lorazepam
    Supraventricular tachycardia: Oxygen; Diazepam or lorazepam; Consider diltiazem, verapamil, or adenosine; If hemodynamically unstable: cardioversion
    Ventricular dysrhythmias: Oxygen; Diazepam or lorazepam; Consider sodium bicarbonate and/or lidocaine or amiodarone; If hemodynamically unstable: defibrillation
  Acute coronary syndrome: Oxygen; Aspirin; Diazepam or lorazepam; Nitroglycerin; Heparin; For ST-segment elevation (STEMI): percutaneous intervention (angioplasty and stent placement) preferred, consider fibrinolytic therapy; Consider morphine sulfate, phentolamine, verapamil, or glycoprotein IIb/IIIa inhibitors
  Hypertension: Observation; Diazepam or lorazepam; Consider nitroglycerin, phentolamine, and nitroprusside
  Pulmonary edema: Furosemide; Nitroglycerin; Consider morphine sulfate or phentolamine
Hyperthermia: Diazepam or lorazepam; Cooling methods; If agitated, consider paralysis and intubation
Neuropsychiatric
  Anxiety and agitation: Diazepam or lorazepam
  Seizures: Diazepam or lorazepam
  Intracranial hemorrhage: Surgical consultation
  Cocaine washout syndrome: Supportive care; Consider phenobarbital
Rhabdomyolysis: IV hydration; Consider sodium bicarbonate or mannitol; If in acute renal failure: hemodialysis
Body packers: Activated charcoal; Whole-bowel irrigation; Laparotomy or endoscopic retrieval
Hypertension
Patients with severe hypertension can usually be safely treated with benzodiazepines. When benzodiazepines alone are not effective, nitroglycerin, nitroprusside, or phentolamine can be used. Beta-blockers are contraindicated because, in the setting of cocaine intoxication, they cause unopposed alpha-adrenergic stimulation with subsequent exacerbation of hypertension.
Myocardial Ischemia or Infarction
Patients with cocaine-associated myocardial ischemia or infarction should be treated with aspirin, benzodiazepines, and nitroglycerin as first-line agents. Benzodiazepines decrease the central stimulatory effects of cocaine, thereby indirectly reducing its cardiovascular toxicity. Benzodiazepines have a comparable and possibly an additive effect to nitroglycerin with respect to chest pain resolution and hemodynamic parameters for patients with chest pain. Weight-based unfractionated heparin or enoxaparin, as well as clopidogrel, are reasonable to use in patients with documented ischemia. Patients who do not respond to these initial therapies can be treated with phentolamine or calcium channel-blocking agents. In the acute setting, beta-blockers are contraindicated, as they can exacerbate cocaine-induced coronary artery vasoconstriction [1, 2].
When patients have ST-segment elevation and require reperfusion, primary percutaneous coronary intervention (PCI) is preferred over fibrinolytic therapy due to a high rate of false-positive ST-segment elevations in patients with cocaine-associated chest pain, even in the absence of acute myocardial infarction (AMI), as well as the possibility of an increased rate of cerebral complications in patients with repetitive cocaine use [2].
Dysrhythmias
Supraventricular dysrhythmias may be difficult to treat. Initially, benzodiazepines should be administered. Adenosine can be given, but its effects may be temporary. The use of calcium channel blockers in association with benzodiazepines appears to be most beneficial. Beta-blockers should be avoided.
Ventricular dysrhythmias can be managed with benzodiazepines, lidocaine, or sodium bicarbonate. Bicarbonate is preferred in patients with QRS widening and ventricular dysrhythmias that occur soon after cocaine use, since these dysrhythmias are presumably related to the sodium channel-blocking effects of cocaine. Lidocaine can be used when dysrhythmias appear to be related to cocaine-induced ischemia.
Seizures
Benzodiazepines and phenobarbital are the first- and second-line drugs, respectively. Phenytoin is not recommended in cases associated with cocaine. Although no studies have compared barbiturates to phenytoin for control of cocaine-induced seizures, barbiturates are theoretically preferable because they also produce central nervous system (CNS) sedation and are generally more effective for toxin-induced convulsions. Newer agents have not been well studied in the setting of cocaine intoxication.
Cerebrovascular Infarction
Cocaine can lead to both ischemic and hemorrhagic
strokes. Most of these patients should be managed similarly to patients with non-cocaine-associated cerebrovascular infarctions with two exceptions: The utility of tPA in
patients with recent cocaine-associated cerebrovascular
events is unknown; blood pressure management should
follow the recommendations that are mentioned above.
Aortic Dissection
Cocaine use can lead to aortic dissection. Various studies
have found that 1–37% of aortic dissections may be due to
cocaine. Treatment is similar to other patients with aortic
dissection, but medical management should be adjusted to try to avoid beta-blockade.
Body Stuffers and Packers
Body stuffers who manifest clinical signs of toxicity should
be treated similarly to other cocaine-intoxicated patients.
Gastrointestinal decontamination with activated charcoal
should be performed. Assessment for unruptured cocaine
packages should be considered. In some cases, whole bowel irrigation may be necessary.
Body packers are typically asymptomatic at the time of
detention when passing immigration. In patients who
present with symptoms or develop symptoms of cocaine
toxicity or rapidly deteriorate because of exposure to huge
doses of cocaine, immediate surgical removal of the ruptured packages may be necessary.
Evaluation/Assessment
Patients manifesting cocaine toxicity should have
a complete evaluation focusing on the history of cocaine
use, signs, and symptoms of sympathetic nervous system
excess, and evaluation of specific organ system complaints.
It is important to determine whether signs and symptoms
are due to cocaine itself, underlying structural abnormalities, or cocaine-induced structural abnormalities.
Laboratory Tests
Since some patients may deny cocaine use, urine testing may
be helpful. If the patient manifests moderate or severe toxicity, laboratory evaluation may include a complete blood
cell count, serum electrolytes, glucose, blood urea nitrogen,
creatinine, creatine kinase (CK), cardiac marker determinations, arterial blood gas analysis, and urinalysis. Hyperglycemia and hypokalemia may result from sympathetic excess.
Rhabdomyolysis can be diagnosed by an elevation in CK.
Cardiac troponin I or T should be used to identify acute MI
in symptomatic patients with cocaine use.
Imaging and Other Tests
Chest radiography and electrocardiography should be obtained in patients with potential cardiopulmonary complaints. Computerized tomography (CT) of the head can be used to evaluate seizure or stroke. Patients with concurrent headache, suspected subarachnoid hemorrhage, or other neurologic manifestations may necessitate lumbar puncture after head CT to rule out other CNS pathology.
After-care
The appropriate diagnostic evaluation should follow general principles for the specific complication that occurred. For risk stratification in patients who presented with potential coronary artery disease, it is recommended that most patients receive imaging with some form of stress testing or CT coronary angiography.
Prognosis
Patient prognosis is dependent upon the type of complication the patient had from cocaine use. Continued cocaine usage, however, is associated with an increased likelihood of recurrent symptoms, and therefore aggressive drug rehabilitation may be useful.
Cessation of cocaine is the hallmark of secondary prevention. Recurrent chest pain is less common, and MI and death are rare, in patients who discontinue cocaine [2–4]. Aggressive risk factor modification is indicated in patients with MI or with evidence of premature atherosclerosis, coronary artery aneurysm, or ectasia. This includes smoking cessation, hypertension control, diabetes control, and aggressive lipid-lowering therapy. While these strategies have not been tested specifically for patients with cocaine, they are standard of care for patients with underlying coronary artery disease.
Patients with evidence of atherosclerosis may be candidates for long-term antiplatelet therapy with aspirin, with or without clopidogrel for patients who received stent placement. The role of nitrates and calcium channel blockers remains speculative; they should be used for symptomatic relief. The use of beta-adrenergic antagonists, although useful in patients with previous MI and cardiomyopathy, needs special consideration in the setting of cocaine abuse. Since recidivism is high in patients with cocaine-associated chest pain (60% admit to cocaine use in the next year), beta-blocker therapy should probably be avoided in many of these patients.
References
1. McCord J, Jneid H, Hollander JE, de Lemos JA, Cercek B, Hsue P, Gibler WB, Ohman EM, Drew B, Philippides G, Newby LK (2008) Management of cocaine-associated chest pain and myocardial infarction: a scientific statement from the American Heart Association Acute Cardiac Care Committee of the Council on Clinical Cardiology. Circulation 117:1897–1907
2. Hollander JE (1995) Management of cocaine associated myocardial ischemia. N Engl J Med 333:1267–1272
3. Hollander JE, Hoffman RS, Burstein J, Shih RD, Thode HC, the Cocaine Associated Myocardial Infarction Study (CAMI) Group (1995) Cocaine associated myocardial infarction. Mortality and complications. Arch Intern Med 155:1081–1086
4. Weber JE, Shofer FS, Larkin GL, Kalaria AS, Hollander JE (2003) Validation of a brief observation period for patients with cocaine associated chest pain. N Engl J Med 348:510–517
Cocaine Toxicity
▶ Cocaine
Coccidioidomycosis
JULIE P. CHOU¹, TOM LIM¹, ANDREW G. LEE², CHRISTOPHER H. MODY³
¹Department of Internal Medicine, University of Calgary, Calgary, AB, Canada
²Department of Radiology, University of Calgary, Calgary, AB, Canada
³Departments of Internal Medicine and Microbiology, Immunology and Infectious Disease, University of Calgary, Calgary, AB, Canada
Synonyms
California valley fever; Desert fever; San Joaquin valley
fever; Valley fever
Definition
Coccidioidomycosis is an infection caused by the dimorphic fungi of the genus Coccidioides. Coccidioides species
are endemic to semiarid regions of the western hemisphere, including the San Joaquin Valley of California,
the south-central region of Arizona, and northwestern
Mexico. They can also be found in parts of Central and
South America.
Infection is generally acquired through inhalation,
whereby infectious arthroconidia reach the lower respiratory tract. Only a small proportion of infected individuals
will come to medical attention, as the majority of infections
are subclinical. Although a wide spectrum of manifestations
is possible, the majority of primary infections present with
symptoms and signs comparable to community-acquired
pneumonia or an upper respiratory tract infection. In addition to nonspecific symptoms such as chest pain, cough,
and fever, other presenting complaints may include marked
fatigue, arthralgias, erythema nodosum, and erythema
multiforme. Peripheral eosinophilia and an elevated erythrocyte sedimentation rate can be observed. A pulmonary
infiltrate with or without hilar adenopathy can be evident
on chest x-ray or CT scan (Fig. 1).
In immunocompetent hosts, primary pulmonary
coccidiodomycosis is usually a self-limiting disease. However, patients with suppressed cellular immunity, such as
those with HIV infection, solid organ transplant recipients, or individuals receiving chronic corticosteroid
treatment, are predisposed to disseminated disease.
African-American and Hispanic men, as well as pregnant women, are at increased risk for disseminated disease.
Extrapulmonary infection can be found in any organ
system, but most commonly affects skin, bones, joints,
and meninges. CSF analysis should be performed in
patients with primary coccidioidomycosis presenting
with CNS symptoms, and in patients who are severely ill
warranting intensive care unit admission, or in patients who may find it difficult to be followed by a physician.
Others at risk include patients who are receiving
TNF-alpha inhibitor therapy. They are more likely to
develop symptoms when infected. Preexisting diabetes
mellitus is associated with a higher likelihood of developing chronic pulmonary coccidioidomycosis, in particular
cavitary disease. Because of the concern for hemoptysis
from cavities, these patients require close monitoring.
Treatment
Immunocompetent Patients
Immunocompetent patients are unlikely to present to the
intensive care unit and treatment is usually not required in
primary pulmonary coccidioidomycosis. However, if the symptoms persist for greater than 6 weeks, therapy should be considered.
Coccidioidomycosis. Figure 1 Coccidioidomycosis in two different patients. (a) A peripheral pulmonary infiltrate on computed tomography. (b) Thin-walled cavities as a late sequela of coccidioidomycosis
Immunosuppressed Patients
Patients with risk factors for disseminated disease are
offered treatment when they present with primary pulmonary coccidioidomycosis. All forms of disseminated coccidioidomycosis require antifungal therapy.
First-line therapy for treating chronic coccidioidomycosis is an oral azole. Ketoconazole, fluconazole, and
itraconazole have all been well studied. Itraconazole may be superior to fluconazole in treating bone and joint disease. Amphotericin B is reserved for the most severe cases
of coccidioidomycosis or for those who fail to respond to
azoles. Although the evidence is lacking for superiority in
treatment using liposomal amphotericin B, it should be
considered for therapy in individuals with underlying
renal disease.
The duration of therapy is generally prolonged for
chronic coccidioidomycosis, with a minimum course of
12–18 months. A longer course might be considered in
immunocompromised patients. Moreover, if the meninges are involved, lifelong azole antifungal therapy is
required because of a high relapse rate. Although intravenous amphotericin B lacks efficacy in the treatment of
Coccidioides meningitis, intrathecal amphotericin B can
be used in cases refractory to azole therapy, or in situations
when a more rapid response is desired.
As for other classes of antifungal agents, the
echinocandins have yet to be adequately assessed in coccidioidomycosis. By contrast, there are small series and
case reports suggesting efficacy of voriconazole and
posaconazole when used as salvage therapy. However, no
definitive recommendation can be made at this time.
Evaluation/Assessment
The diagnosis can be established by detecting the presence of anticoccidioidal antibody in the serum via a variety of techniques including ELISA, immunodiffusion, tube precipitin, and complement fixation assays. It should be remembered, however, that critically ill patients may not mount an effective antibody response and thus may have false-negative serology. Alternatively, the diagnosis can also be confirmed by identifying coccidioidal spherules in tissue or by culturing the organism from a clinical specimen.
In the future, polymerase chain reaction (PCR) and real-time PCR may play a greater role in diagnostics. However, no commercial methods are currently available for direct detection of C. immitis from patient specimens.
After-care
For patients with severe infection, immunosuppression, or other risk factors for dissemination, lifelong follow-up may be required. In milder cases, patients require close monitoring of their disease every 2–4 weeks following initial diagnosis. After noting improvement of their symptoms, clinic visits may be extended to intervals of every 3–6 months, for up to 2 years. Changes in complement fixation serologic titers can be useful, as a rise in titer is usually associated with disease progression. Radiographic abnormalities should be reexamined on a periodic basis.
Prognosis
For patients that have had a critical illness with disseminated disease, the outlook depends on the anatomic site of infection and the underlying immune status of the patient. Relapses are common in this population of patients. While the prognosis is good for most people with pulmonary coccidioidomycosis, many patients have protracted fatigue after resolution of pulmonary symptoms.
References
1. Dewsnup DH, Galgiani JN, Graybill JR, Diaz M, Rendon A, Cloud GA, Stevens DA (1996) Is it ever safe to stop azole therapy for Coccidioides immitis meningitis? Ann Intern Med 124:305–310
Coelenterata
▶ Jellyfish Envenomation
Coelenterate
▶ Jellyfish Envenomation
Cold Sore
▶ Herpes Simplex
Collapse
▶ Syncope
Collapsed Lung
▶ Pneumothorax
Colloid Challenge
▶ Fluid Challenge
Colloids
LEWIS J. KAPLAN, ROSELLE CROMBIE, GINA LUCKIANOW
Department of Surgery, Yale University School of
Medicine, New Haven, CT, USA
Synonyms
Plasma volume expander (PVE); Synthetic colloid (as
opposed to biologically active colloids such as fresh frozen
plasma and human albumin)
Trade Names
As the number of colloid products is legion, a complete
listing of all manufactured colloids available throughout
the world is beyond the scope of this chapter. A partial
listing of commonly utilized colloids is presented in
Table 1.
Class and Category
All colloids belong to the class of drugs known as plasma
volume expanders. The category of different colloids
relates to the specific composition of each colloid. Nonetheless, unique differences within each category explain
the differing efficacies and plasma half-lives, and often
frequency of use. Following is a general categorization of
commonly utilized colloids for plasma volume expansion.
Of note, hypertonic saline preparations are not colloids
even though they are used for plasma volume expansion
and will not be further discussed within this chapter.
General Principles of Colloids [1]
Colloids are defined as a preparation of a homogeneous
noncrystalline substance that is dispersed throughout
another substance that is usually a water-based solution
(for medical use). The colloid may be large macromolecules or microparticles, which do not settle and are not
separable from their suspending solution by filtration or
centrifugation. Colloids are generally polydispersed,
representing a span of molecular sizes that characterize
a single preparation. Molecular weight (MW), which is generally constant for a given preparation, may be described in two different fashions:
1. Weight-averaged MW: the sum, over each molecular weight present, of (the weight of the molecules at that molecular weight × that molecular weight), divided by the total weight of all molecules
2. Number-averaged MW: the mean of all particle weights, i.e., the total weight of all molecules divided by the total number of molecules
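A small numerical sketch may help fix these two averages; the molecular weights and molecule counts below describe a hypothetical polydispersed preparation invented purely for illustration.

```python
# Hypothetical polydispersed colloid: (molecular weight in kDa, number of molecules).
species = [(70, 500), (130, 300), (450, 200)]

total_number = sum(n for _, n in species)
total_weight = sum(mw * n for mw, n in species)

# Number-averaged MW: total weight divided by the total number of molecules.
number_averaged = total_weight / total_number

# Weight-averaged MW: each molecular weight weighted by the mass present at that weight.
weight_averaged = sum((mw * n) * mw for mw, n in species) / total_weight

print(f"number-averaged MW = {number_averaged:.0f} kDa, "
      f"weight-averaged MW = {weight_averaged:.0f} kDa")  # 164 kDa vs ~293 kDa
```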
Furthermore, the weight distribution pattern may be
assessed by the colloid oncotic pressure ratio, a ratio that
reflects the osmotic activity of a colloid solution across
membranes with different pore sizes.
In general practice, the size, persistence, efficacy at
plasma volume expansion, side effect profile, and, of
course, product approval by regulatory agencies, tend to
govern clinician product selection. The clinician should
remain acutely aware that the colloid preparations contribute very little free water to the patient’s system and
therefore, should always be utilized with maintenance
solutions to avoid inadvertently creating a hyperoncotic
state (see Adverse Reactions below).
Starches [1]
Starches are synthetic colloid preparations derived from
amylopectin extracted from either maize or sorghum.
Amylopectin is a D-glucose polymer that is synthetically
modified with hydroxyethyl substitutions at the second
carbon (C2) as well as the sixth carbon (C6) with rather
few substitutions occurring at the third carbon (C3);
hydroxylation retards the rate of hydrolysis by plasma
nonspecific a-amylases. Starches are characterized by
their average molecular weight and average molecular
size as they exist as a polydispersed preparation of different molecular weight and sizes. Thus, starches may be
further classified by their average molecular weight into
high MW (>450 kDa), medium MW (~200 kDa), and
low MW (70–130 kDa). Furthermore, they are characterized by the C2/C6 substitution ratio; the greater the ratio,
the slower the degradation. The number of hydroxyethyl
groups per 100 glucose groups is known as the degree of
substitution (DS) or substitution ratio (MS); ratios are
expressed as a number spanning 0–1. In a fashion similar
to the C2/C6 substitution ratio, the greater the DS or MS,
the longer the half-life (t½).
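The classification just described can be summarized in a small helper; the function below mirrors the molecular-weight cut-offs quoted in the text and is a hypothetical illustration rather than an established nomenclature tool.

```python
def classify_starch_by_mw(average_mw_kda: float) -> str:
    """Classify a hydroxyethyl starch by its average molecular weight (kDa)."""
    if average_mw_kda > 450:
        return "high MW"
    if average_mw_kda >= 200:
        return "medium MW"
    if 70 <= average_mw_kda <= 130:
        return "low MW"
    return "outside the ranges quoted above"

# Examples: HES 670/0.7 (Hextend/Hespan), HES 200/0.5 (pentastarch), HES 130/0.4 (Voluven).
for mw in (670, 200, 130):
    print(mw, "kDa ->", classify_starch_by_mw(mw))
```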
By way of example, Hextend is a commercially available starch used in the USA. It may be characterized as
a large MW starch (670 kDa) with a high degree of
substitution (0.7). The last two characteristics are the
concentration of the preparation and the diluent in which the colloid is prepared. Hextend is a 6% starch preparation in a balanced salt solution. Changing the
diluent may change important consequences of administration rendering the product functionally different.
For instance, Hextend’s predecessor, Hespan, is the identical starch in every way but was prepared in a saline
base. Hespan contains an FDA black box warning with
regard to volume of administration and induced bleeding
risk; no such black box exists for Hextend. Table 1 presents commonly utilized colloid preparations and their
characteristics.
Colloids. Table 1 Common starch-based colloids used for resuscitation

Colloid      | MW/DS       | Concentration | Diluent           | C2/C6
Voluven(a)   | HES 130/0.4 | 6%, 10%       | NSS               | 9:1
Volulyte     | HES 130/0.4 | 6%            | Balanced solution | 6:1
Pentastarch  | HES 200/0.5 | 6%, 10%       | NSS               | 5:1
Hextend(a)   | HES 670/0.7 | 6%            | Balanced solution | 4.5:1
Hespan(a)    | HES 670/0.7 | 6%            | NSS               | 4.5:1

HES = hydroxyethyl starch; NSS = 0.9% normal saline solution; MW = molecular weight in kilodaltons (kDa); DS = degree of substitution
Note: not all listed colloids are approved by the US Food and Drug Administration; FDA-approved colloids are indicated by (a)
Gelatins [1]
Gelatins are preparations created from the hydrolysis of bovine collagen and then further modified by either succinylation (Gelofusine) or urea linkage (polygeline; Haemaccel). Succinylation results in no change in MW but a significant increase in molecular size; no such changes occur with urea linkage. The diluents differ between the two products, with only Haemaccel being prepared with calcium and potassium. It is important to note
that the only cases of prion-related disease derived from
cattle involve food-based disease transmission, not pharmaceutical preparations.
Dextrans [1]
Dextrans are fairly homogeneous preparations of D-glucose polymers principally joined by α-1,6 bonds, creating linear macromolecules that are characterized by their concentration into two commercially available preparations, Dextran 40 (average MW = 40 kDa) and Dextran 70 (average MW = 70 kDa). The glucose moieties are derived from enzymatic cleavage of sucrose generated by Leuconostoc bacteria utilizing the enzyme dextran sucrase, yielding high molecular weight dextrans that are modified into the final product using acid hydrolysis and ethanol-based fractionation processes.
Clearance is proportional to MW, with 50–55 kDa molecules being readily renally filtered and excreted unchanged in the urine, such that 70% of a Dextran 40 dose is excreted unchanged over a 24-h period. Molecules with a larger MW undergo GI clearance or cleavage within the reticuloendothelial system via extant dextranases. Only Dextran 40 appears to have clinical use at present due to issues with allergic reaction and bleeding with Dextran 70.
Combination Preparations
Combinations of hypertonic saline and hyperoncotic starch are available as well. These preparations rely on starch plasma volume expansion and the concentration-dependent movement of water from the extravascular space to the intravascular domain on the basis of creating a hyperoncotic plasma space. Their efficacy or outcome advantage over other colloid solutions has yet to be demonstrated.
Indications [2, 3]
Colloids are indicated for the treatment of suspected or proven hypovolemia that requires plasma volume expansion. However, as there is increasing evidence that hyperchloremic metabolic acidosis (HCMA) deleteriously impacts outcomes, likely through activation of inflammatory pathways, colloid administration may have a selective advantage. In general, one needs to administer much less colloid than crystalloid to achieve equivalent plasma volume expansion, and therefore one delivers much less chloride to the patient’s system. Thus, plasma volume expansion with colloids reduces the likelihood of creating a HCMA when large-volume plasma volume expansion is required for the restoration of appropriate perfusion.
Dosage
Dosage of different preparations varies with local geography
as a reflection of different regulatory bodies’ approval
process. However, certain commonalities may be articulated. The general goal of a plasma volume expansion
challenge is to provide 5% plasma volume expansion
(PVE) for those with hypovolemia but no hypotension,
and to provide 10% PVE for those with hypotension as an
initial bolus. Thus, based on the properties of Hextend,
250 cc of the solution is appropriate for hypovolemia, but
500 cc is ideal for hypotension. Given the properties of the
smaller MW starch Voluven, the same volumes would be
used in identical scenarios. However, the dose of Voluven
would need to be repeated more frequently based on its
shorter t½ than Hextend. The reader should be aware
that more frequent dosing is not a deleterious property,
but rather reflects the biologic behavior of the colloid,
and may be advantageous in certain circumstances
(see Contraindications below).
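As a rough arithmetic sketch of the 5% and 10% targets described above: the blood-volume estimate of 70 mL/kg and the assumption that plasma volume equals blood volume × (1 − hematocrit) are illustrative conventions, not figures given in this chapter, and the result ignores differences in the efficiency of individual colloids.

```python
def pve_bolus_ml(weight_kg: float, hematocrit: float, target_fraction: float) -> float:
    """Estimate the colloid bolus (mL) for a chosen fractional plasma volume expansion.

    Assumes blood volume ~70 mL/kg and plasma volume = blood volume x (1 - hematocrit);
    both are back-of-the-envelope assumptions used only for illustration.
    """
    plasma_volume_ml = 70.0 * weight_kg * (1.0 - hematocrit)
    return plasma_volume_ml * target_fraction

# Example: an 80 kg patient with a hematocrit of 0.40 has ~3,360 mL of plasma,
# so a 10% expansion target corresponds to a bolus of roughly 340 mL.
print(round(pve_bolus_ml(80, 0.40, 0.10)))  # -> 336
```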
Preparation/Composition
The preparation and composition of many of the commercially available colloid solutions are presented in
Table 1. The reader should be aware that new products
and preparations are in development and therefore,
the listing in Table 1 should be viewed as only a partial
presentation.
Contraindications
The main contraindications for synthetic colloid administration are allergy or intolerance to the colloid or its
diluent. A further contraindication is hypervolemia in a patient with dialysis-dependent renal failure, since if one induces heart failure or pulmonary edema, starch is not dialyzable and one must wait for enzymatic degradation.
Along similar lines, chronic renal insufficiency (although
not dialysis dependent) as well as evolving acute renal
failure are relative contraindications for long half-life
starches. The need for >20 cc/kg bw in 24 h is a contraindication for Hespan specifically, although not for Hextend
according to the US Food and Drug Administration. Some
authors have advanced the notion that sepsis is
a contraindication for starch-based colloid administration
but the authors of this chapter do not believe that such
a claim is justified (see below).
Adverse Reactions
Allergic reaction is exceedingly rare with starch or gelatin
products, but appears to be a limiting factor with large
MW Dextrans. Perhaps the most notable adverse reaction
is that of suspected renal dysfunction presenting as acute
kidney injury or acute renal failure in patients who have
undergone PVE using starch-based synthetic colloids in
the setting of sepsis. The reader is encouraged to critically
review this literature as there are several key features that
call the conclusions into serious question.
A review of actual practice in 3,147 patients in 198
European Union ICUs during 2 weeks in May 2002 identified that colloid administration was often a combination
of different colloids, and not limited to a single colloid [4].
Furthermore, in clinical practice, there was no difference
in any measured or derived index of renal function, or the
need for renal replacement therapy, with regard to colloid
administration of any variety including hyper- or hypo-oncotic albumin, starches, gelatin, and dextran; approximately 15% of ICUs used more than one colloid at the
same time on a given patient.
The reader should recall that since there are a multitude of colloid preparations, comparing across multiple
studies is exceptionally difficult as the preparations differ
in MW, DS, diluent, volume, whether or not there was
a fluid administration protocol, when in the clinical course the patient received colloid, and whether there
was a pressure or flow-based monitoring system guiding
therapy. Furthermore, delays prior to presentation, persistence of shock, timing of pressor use, specific pressors
used, rapidity of resuscitation (before or after the vigorous
volume expansion popularized by the Early Goal-Directed
Therapy trial), the rapidity of source control, and the
number of hypotensive episodes all influence renal perfusion. None of the trials are controlled for the presence of
hyperchloremic metabolic acidosis, an entity known to
reduce renal perfusion in an independent fashion. Furthermore, the definitions utilized for renal failure all differ,
and many studies were performed prior to the recognition
of acute kidney injury as a distinct entity. Additionally, the
indications for renal replacement therapy are not uniform
and some studies use the need for RRT as the definition of
renal failure. Thus, drawing any conclusion from the case
reports, case series, and few large but heterogenous trials
that claim starch-based PVE indices renal injury and failure is problematic at best.
Furthermore, trials such as the VISEP study comparing
pentastarch to lactated Ringer’s solution either over-resuscitate the pentastarch group or under-resuscitate the
LR group depending on your perspective as both groups
received identical volumes of fluid [5]. This ignores the basic
concept underpinning colloid-based PVE – colloids are better retained in the plasma space and, therefore, one administers a smaller volume. It is hardly surprising that the pentastarch
group evidences a higher CVP and a lower hemoglobin than
the LR group based on unequal fluid administration. Also,
there was an unequal distribution of patients requiring emergency operative undertakings to the pentastarch group – the
at-risk group for acute tubular necrosis, AKI, and ARF!
Perhaps equally important is the unreported presence
or absence of concomitant maintenance fluid in trials
addressing colloid administration versus crystalloid-based plasma volume expansion regimens. An important
trial evaluated several different fluids for PVE in the
treatment of shock, but categorized the fluids based on
whether the fluid was hyper- or hypo-oncotic and determined the occurrence of renally relevant events [6]. Crystalloids were compared to hypo-oncotic albumin and
gelatin versus hyperoncotic starch versus hyperoncotic
albumin. Importantly, all of the renally relevant events
including the need for renal replacement therapy occurred
in the hyperoncotic groups. Unfortunately, the definition
of ARF used in this trial was a doubling of baseline creatinine, or the need for dialysis. This study underscores
the need to avoid creating an inadvertent hyperoncotic
state when using hyperoncotic PVEs at least in patients
with shock. It is likely that this observation may be
extended to those with hypovolemia without shock as
well, but the decreased effective circulating volume that
characterizes septic and hypovolemic shock places this
patient population at particular risk for hyperoncoticity
when the principal administered fluid is a hyperoncotic
colloid.
Current bias has begun to focus on the use of low
MW and lesser substituted starch-based colloids such as
Voluven (6% HES, 130/0.4 prepared in saline) and its
counterpart agent, Volu-Lyte that is prepared in
a balanced salt solution in a fashion similar to that of
Hespan and Hextend. Since the lower MW and less
substituted compounds have a shorter half-life, it is
expected that renal accumulation will occur less frequently and present a reduced risk of AKI and ARF as a
consequence of starch administration in sepsis. However, the concerns articulated above may not support
such notions, and there are other elements that remain
unidentified. For instance, since starch has been identified in the renal tubular cells of those with ARF who have
received starch-based PVE, we do not know if the starch
presence is causative or simply coincident and of no
functional consequence. Data are conflicting even in
the renal transplantation patient population. Moreover,
since one does not biopsy normal kidneys, one does
not know if a patient who received a starch-based PVE
regimen and who did not change their creatinine
also had starch molecules accumulate in their renal
tubular cells. While there is much bias and speculation,
the medical community appears to be divided into
those who have already lost their clinical equipoise
with regard to starch and renal injury and those who
are awaiting data.
Drug Interactions
There are no significant drug interactions noted for colloid preparations. There is some concern, although
unfounded, that calcium-containing colloids may not be
administered through the same IV line as blood products
for fear of clotting. In clinical practice, given the rapid rate
of administration of each agent, clotting is not clinically
identified.
Mechanisms of Action
Colloids serve to expand plasma volume by exerting
osmotic activity and having synthetic modification to
retard the rate of degradation or filtration, thus preserving
their plasma half-life. Incidental modification of rheology
is also noted with colloid administration that is mediated
in part by altering RBC flexibility through small-diameter
vessels, and in part through reductions in viscosity [7]. As
a result, some observations identify colloid-based support
of microcirculatory delivery of oxygen as judged by muscle
tissue oximetry compared to non-colloid-based PVE regimens when fluids and blood products were administered
on protocol and titrated to a CVP measurement.
Cross-References
▶ Intravenous Fluids
References
1. Grocott M, Mythen M, Gan TJ (2005) Perioperative fluid management and outcomes in adults. Anesth Analg 100:1093–1106
2. Mike James review of colloids
3. Boldt J (2002) Hydroxyethyl starch as a risk factor for acute renal failure: is a change of practice indicated? Drug Saf 25(12):837–846
4. Sakr Y, Payen D, Reinhart K et al (2007) Effects of hydroxyethyl starch administration on renal function in critically ill patients. Br J Anaesth 98(2):216–224
5. Brunkhorst FM, Engel C, Bloos F et al (2008) Intensive insulin therapy and pentastarch resuscitation in severe sepsis. N Engl J Med 358:125–139
6. Schortgen F, Girou E, Deye N et al (2008) The risk associated with hyperoncotic colloids in patients with shock. Intensive Care Med 34(12):2157–2168
7. Neff TA, Fischler L, Mark M et al (2005) The influence of two different hydroxyethyl starch solutions (6% HES 130/0.4 and 200/0.5) on blood viscosity. Anesth Analg 100:1773–1780
Colonization
The process whereby microorganisms inhabit a specific
body site (such as the skin, bowel, or chronic ulcers)
without causing a detectable host immune response, cellular damage, or clinical signs and symptoms. It involves
adherence of organisms to epithelial cells, proliferation,
and persistence at the site of attachment. The presence of
the microorganism may be of varying duration and may
become a potential source of transmission.
Colonoscopy
▶ Gastrointestinal Endoscopy
Coma
DERRICK SUN, KATHRYN M. BEAUCHAMP
Department of Neurosurgery, Denver Health Medical
Center, University of Colorado School of Medicine,
Denver, CO, USA
Synonyms
Blackout; Encephalopathy; Stupor; Unconsciousness; Vegetative state
Definition
Coma is defined as the state of profound unconsciousness,
from which the patient cannot be aroused to respond
appropriately to external stimuli. It originates from the
Greek word meaning “deep sleep or trance.” It represents
an acute and life-threatening emergency, requiring rapid
diagnosis and intervention in order to preserve brain
function and life.
Coma lies on a spectrum of terms used to describe
varying degrees of alteration of consciousness, including
lethargy, stupor, and obtundation. The Glasgow Coma
Scale (GCS) is a simple and objective scoring system
used by health-care professionals to quickly assess the
severity of brain dysfunction, composed of three tests:
eye-opening response, verbal response, and motor
response. Eye opening is graded from 1 to 4. Verbal
response is graded from 1 to 5. In intubated patients,
a score of “1” is given for verbal response with
a modifier of “T.” Motor response is graded from 1 to 6.
The GCS score, a sum of all three components, ranges
from 3 (deep coma or death) to 15 (fully awake person).
A GCS score of 8 or less is generally accepted as the operational
definition for coma.
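As a simple illustration of how the three components combine (a sketch, not a clinical tool; the component values in the example are arbitrary):

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS components (eye 1-4, verbal 1-5, motor 1-6)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("component score out of range")
    return eye + verbal + motor

# Example: eye opening to pain (2), incomprehensible sounds (2), withdrawal to pain (4)
# gives a total of 8, meeting the operational definition of coma quoted above.
score = glasgow_coma_scale(eye=2, verbal=2, motor=4)
print(score, "-> coma" if score <= 8 else "-> not coma")
```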
Consciousness is defined as the state of awareness of
oneself and one’s surrounding environment. Consciousness
has two major interconnected components: wakefulness
(i.e., arousal) and awareness. Both components are necessary to maintain consciousness. Wakefulness is dependent
on a network of neurons called the ascending reticular
activating system (ARAS), originating in the midbrain
and rostral pontine tegmentum and projecting to the diencephalon (hypothalamus and midline and intralaminar
nuclei of the thalamus). From there, widespread projections
are sent to bilateral cerebral cortex. Damage to the ARAS
would result in impairment of wakefulness. Awareness,
sometimes referred to as the “content” of consciousness,
represents the sum of all functions mediated by cerebral
cortical neurons and their reciprocal projections to and
from subcortical structures [1]. These functions include
sensation and perception, attention, memory, executive
function, and motivation. Awareness requires wakefulness, but wakefulness may be observed in the absence of
awareness, as in the case of vegetative state.
The vegetative state describes a state of wakefulness
without awareness. The vegetative patient exhibits sleep–
wake cycles, evident by “eyes-open” periods, without evidence of awareness of self or environment. The term
persistent vegetative state is reserved for patients who
remain in the vegetative state for at least 30 days.
The minimally conscious state (MCS) is a condition
defined by severely impaired consciousness with minimal
but definite behavioral evidence of self or environmental
awareness [2]. Like the vegetative state, MCS may be
a transitional state during recovery from coma, or progression of worsening neurologic disease.
Another condition worth recognizing is the locked-in
syndrome, in which the patient has complete paralysis of
all four limbs and the lower cranial nerves. The locked-in
syndrome is not a disorder of consciousness, as the patient
retains awareness. The most common cause is a lesion in
the base and tegmentum of the midpons, interrupting the
descending cortical motor fibers responsible for limb
movement, while preserving vertical eye movement and
eye opening. A high level of suspicion on the part of the
clinician is required to make this diagnosis.
Psychogenic unresponsiveness may mimic coma, and
is characterized by normal neurologic exam, including
normal oculocephalic and oculovestibular reflexes. Patients
may sometimes forcibly close the eyelids. If psychogenic
etiology is suspected, an electroencephalogram (EEG)
may be helpful to aid in diagnosis. In more challenging
cases, Amytal interview can be used, wherein the patient is
slowly injected with an anxiolytic drug while repeated
neurologic exam is performed. Patients with psychogenic
unresponsiveness should exhibit improvement in
function.
Brain death is defined as the irreversible cessation of all
functions of the entire brain, such that the brain is no
longer capable of maintaining respiratory or cardiovascular function.
Etiology and Pathophysiology
Coma could be caused by focal or “structural” conditions
that lead to disruption of the ARAS anywhere in the brain
stem, bilateral diencephalon, or diffuse bilateral cerebral
cortex. On the other hand, systemic or “metabolic” disorders that interfere with the normal metabolism of the
brain and disturb normal neuronal activity could also
lead to coma.
Structural Etiology
Structural causes of coma can be categorized into “compressive” or “destructive” lesions. Compressive lesions
(e.g., intracranial hemorrhage or tumor) may cause
impairment of consciousness by several mechanisms:
(1) by directly distorting the ARAS or its projection,
(2) by increasing intracranial pressure (ICP) and thus impairing cerebral blood
flow, (3) by distorting and displacing normal brain tissue,
(4) by causing edema and further distortion of brain, or
(5) by causing herniation [2].
A compressive lesion such as an epidural hematoma classically results from a traumatic skull fracture that lacerates a branch of a meningeal vessel. Blood accumulates between the skull and the dura, causing brain compression and shift. Some patients exhibit a lucid interval after trauma, until the expanding hematoma grows large enough to cause displacement of the diencephalon and brain stem, leading to impaired consciousness. Subdural hematomas usually result from tearing of bridging cerebral veins. They are more commonly seen in elderly or alcoholic patients with cerebral atrophy, or in patients taking warfarin or antiplatelet agents such as clopidogrel or aspirin. Acute subdural hematoma has a high mortality rate owing to its frequent association with other injuries, such as brain contusions.
Subarachnoid hemorrhage, when due to rupture of an aneurysm, has high rates of mortality and morbidity. As the blood in the subarachnoid space breaks down, an inflammatory reaction is incited, resulting in cerebral vasospasm. Delayed brain ischemia and infarction can occur. Hydrocephalus can also complicate subarachnoid hemorrhage due to impaired cerebrospinal fluid absorption, leading to elevated ICP and impairment of consciousness.
Brain tumors often present with headaches, focal neurologic deficits, or seizures. They may present with
impaired consciousness due to compression or infiltration
of the diencephalon or due to herniation.
Destructive lesions (e.g., cerebral infarct) cause coma
by directly damaging the ARAS or its projections. Bilateral
cortical or subcortical infarcts, due to cardioembolism or
severe bilateral carotid stenosis, could result in coma.
Although impairment of consciousness rarely occurs due
to unilateral cerebral hemispheric infarct, it may occur in
delayed fashion secondary to edema of infarcted tissue
causing compression of the other hemisphere and diencephalon. Occlusion of the thalamoperforator branches
of the basilar artery can lead to infarcts of bilateral thalami,
causing coma or hypersomnolence [2]. Pontine hemorrhage, usually due to uncontrolled hypertension, is characterized by sudden onset of coma, pinpoint pupils,
breathing irregularity, and ophthalmoplegia. Cerebellar
hemorrhage, another possible consequence of
uncontrolled hypertension, presents with occipital headache, nausea and vomiting, unsteadiness, and ataxia. Early
diagnosis and treatment is crucial, as once the patient is
comatose, surgical intervention is often futile.
Traumatic brain injury (TBI) is another common
cause of coma. The loss of consciousness in TBI may be due to shearing forces applied to the ARAS.
Diffuse axonal injury (DAI) is associated with severe TBI,
and portends a poor prognosis.
Herniation Syndromes
The Monro–Kellie doctrine hypothesizes that the central
nervous system and its accompanying fluids are enclosed
in a rigid container, and the sum of the volume of the
brain, cerebrospinal fluid (CSF), and intracranial blood
remains constant. An increase in volume of one component (e.g., a growing mass lesion) can be compensated to
a degree by the displacement of an equal volume of
another component (e.g., CSF). When this compensatory
mechanism is overwhelmed, even a small increase in volume will lead to a large increase in pressure. The differential pressure gradient between adjacent intracranial
compartments leads to herniation.
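Stated schematically (a compact restatement of the doctrine summarized above, not an equation from the original entry):

\[
V_{\text{brain}} + V_{\text{CSF}} + V_{\text{blood}} + V_{\text{lesion}} \approx \text{constant}
\]

Once the displaceable CSF and venous blood have been exhausted, any further increase in the volume of one compartment produces a disproportionately large rise in intracranial pressure, which is what drives the herniation syndromes described next.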
Several herniation syndromes are commonly
described. Uncal herniation occurs when a mass lesion in
a lateral cerebral hemisphere pushes the uncus, or medial
temporal lobe, medially and inferiorly over the tentorial
edge. Uncal herniation causes stretching of the ipsilateral oculomotor nerve, which leads to a fixed and dilated pupil.
Hemiparesis, either contralateral or ipsilateral, could
result from compression of the cerebral peduncles. The
posterior cerebral artery runs along the tentorial notch,
and its compression could lead to ischemia of the ipsilateral occipital lobe, producing a visual field deficit. In central
herniation, or transtentorial herniation, pressure from
expanding supratentorial mass lesion displaces the
diencephalon caudally. In addition to distorting the
ARAS, branches of the basilar artery are also stretched,
leading to brainstem hemorrhage. Tonsillar herniation
results when the cerebellar tonsils are pushed caudally
down the foramen magnum, causing direct compression
of the medulla; fourth ventricular CSF outflow is closed
off, leading to further increase in intracranial pressure.
Subfalcine herniation, or cingulate herniation, occurs
when one cerebral hemisphere pushes medially under
the rigid falx cerebri, causing displacement of the cingulate
gyrus. Branches of the anterior cerebral artery are sometimes pushed against the falx, leading to ischemia of the medial cerebral hemispheres.
Metabolic Etiology
Diffuse, multifocal, and metabolic diseases cause stupor
and coma due to interruption in delivery of oxygen or
substrates (e.g., hypoxia, ischemia, hypoglycemia), alterations in neuronal excitability and signaling (e.g., drug
toxicity, acid–base imbalance), or changes in brain volume
(e.g., hypernatremia, hyponatremia) [3].
Hypoxia and ischemia can lead to impairment of consciousness. The brain has one of the highest metabolic
rates of any organ and requires a constant supply of
oxygen, glucose, and cofactors to generate energy, synthesize proteins, and carry out electrical and chemical reactions. The brain lacks reserves of its essential substrates,
and is therefore vulnerable to even a temporary interruption of substrate delivery or blood flow [2]. Cerebral autoregulation
maintains cerebral blood flow at a relatively constant rate
over a range of systemic blood pressure. When
autoregulation fails at the extremes of systemic blood
pressure, cerebral blood flow decreases and lactic acid
builds up, leading to a decrease in pH and impairment
in ATP generation. Neuronal cell death could occur due to
calcium influx and free radical formation [2].
Glucose is a major substrate for brain metabolism.
Profound hypoglycemia causes damage to the cerebral
hemispheres, producing laminar or pseudolaminar necrosis in severe cases [2]. Hypoglycemia could present as
delirium, stroke, or coma; therefore, finger-stick glucose
should be checked on all patients presenting with
impaired consciousness.
Wernicke’s encephalopathy is a syndrome caused by
thiamine deficiency, with classic symptoms of confusion,
ataxia, and ophthalmoplegia. If left untreated, Wernicke’s
encephalopathy could progress to Korsakoff ’s syndrome,
an irreversible syndrome characterized by amnesia and
confabulation.
Acute liver failure causes increased permeability of the blood–brain barrier, leading to cerebral edema and elevated ICP. Elevated ICP is a major cause of death in
patients with acute liver failure [2]. Elevated ammonia
level is implicated in hepatic encephalopathy, although
direct correlation between ammonia level and degree of
clinical impairment is lacking. Clinical presentation varies
from delirium to obtundation. Hyperventilation with
respiratory alkalosis is common. Nystagmus, dysconjugate
eye movement, and muscle spasticity have been described.
Decorticate or decerebrate posturing is possible with
deep coma.
Renal failure may lead to uremic encephalopathy. The
precise pathophysiology of uremic encephalopathy is not
clear. Furthermore, treatment of uremia by hemodialysis
may cause rapid change in osmolarity, leading to rapid
water shifts and cerebral edema, which could result
in coma.
Endocrinopathies such as panhypopituitarism, adrenal insufficiency, hypothyroidism, and hyperthyroidism
have all been implicated as causes of coma. Patients with
diabetes may present with nonketotic hyperglycemic
hyperosmolar coma or coma from diabetic ketoacidosis.
Many drugs can cause coma. Sedative drugs such as
benzodiazepines and barbiturates, opioids, and ethanol
can cause impairment of consciousness, as can psychotropic drugs such as tricyclic antidepressants, lithium, and
selective serotonin reuptake inhibitors; anticholinergic
drugs, amphetamines, and illicit drugs all can cause delirium and coma.
Acid–base imbalance, especially respiratory acidosis, and electrolyte derangements such as hyper- and hyponatremia, hyper- and hypocalcemia, and hypophosphatemia can result in delirium, stupor, and coma.
Infectious and inflammatory diseases of the central
nervous system, including meningitis, encephalitis,
and cerebral vasculitis could present with impaired
consciousness.
Seizures and postictal states can also present as coma.
In one series of comatose patients without overt clinical
seizure activity, EEG demonstrated nonconvulsive status
epilepticus in 8% of patients [2]. Seizure produces an
increase in cerebral metabolic demand, and sustained
seizures can lead to hypoxic-ischemic brain damage if
untreated.
Treatment
When presented with the unconscious patient, basic principles of life support apply. Check airway, ensure breathing
and oxygenation, and maintain circulation. Intubate the
patient if the GCS is 8 or less. The PaO2 should be maintained above
100 mmHg and the pCO2 kept ideally between 35 and 40
mmHg. The mean arterial pressure (MAP) should be
maintained above 70 mmHg to ensure adequate brain
perfusion. Intravascular volume depletion should be
corrected, and vasopressors may need to be used to maintain systemic pressure. Hypertension should be treated
cautiously, keeping in mind that for patients with chronic
hypertension, a sudden drop in blood pressure may lead to
relative hypoperfusion of the brain.
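The numerical targets quoted above can be gathered into a toy checklist; the sketch below is purely illustrative (a hypothetical helper, not clinical software) and simply flags values outside the stated ranges.

```python
def unmet_targets(pao2_mmhg: float, paco2_mmhg: float, map_mmhg: float) -> list:
    """Return the physiologic targets from the text above that are not met."""
    problems = []
    if pao2_mmhg <= 100:
        problems.append("PaO2 should be maintained above 100 mmHg")
    if not (35 <= paco2_mmhg <= 40):
        problems.append("pCO2 ideally kept between 35 and 40 mmHg")
    if map_mmhg <= 70:
        problems.append("MAP should be maintained above 70 mmHg to ensure brain perfusion")
    return problems

# Example: PaO2 90 mmHg, pCO2 48 mmHg, MAP 65 mmHg -> all three targets flagged
print(unmet_targets(90, 48, 65))
```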
Finger-stick glucose should be checked, and both
hyperglycemia and hypoglycemia should be treated. Glucose should be administered along with thiamine to avoid
precipitating Wernicke’s encephalopathy.
If narcotic overdose is suspected, naloxone can be
given intravenously and repeated as necessary. Keep
in mind that, while naloxone has a duration of action of 2–3 h, some narcotics have a much longer half-life.
Thus, close observation is needed for patients who
recover after naloxone administration. If benzodiazepine
overdose is suspected, flumazenil, a benzodiazepine
antagonist, is sometimes used. Gastric lavage with activated charcoal is sometimes utilized for suspected drug
ingestion.
If the stuporous or comatose patient is relatively stable, an emergency CT scan should be obtained. However, if
elevated ICP is suspected, or if impending or active herniation is suspected, intracranial hypertension needs to be
treated first. Hyperventilation to PaCO2 between 25 and
30 mmHg will transiently lower ICP while other therapeutic measures take effect. Mannitol, a hyperosmolar
agent, may be given as a bolus to draw water from the
brain, thus lowering ICP. Mannitol is also reported to
lower blood viscosity and thus improve cerebral perfusion.
Alternatively, hypertonic saline can be given either as
a bolus of 23.4% solution or as a continuous drip of 3%
solution to lower ICP.
Seizures must be quickly diagnosed and treated, as
repeated or continuous seizures (i.e., status epilepticus)
could cause secondary brain injury. Lorazepam should be
administered to stop generalized seizures, followed by
a loading dose of phenytoin or valproic acid. When these
measures fail, general anesthesia with propofol or pentobarbital may be necessary.
If meningitis or encephalitis is suspected, broad-spectrum antimicrobials should be instituted after blood cultures are obtained. A CT scan should be obtained to rule out a mass lesion prior to lumbar puncture, although treatment should not be delayed while waiting for culture results. Steroids are used as an adjunct to antibiotics in bacterial meningitis to decrease the inflammatory response.
Additional therapy should be tailored toward specific
etiology.
Evaluation and Assessment
The prompt diagnosis and treatment of the patient in coma is crucial to outcome. Coma caused by some metabolic
derangements, such as hypoglycemia, is reversible if
appropriate and timely therapy is instituted. Coma due
to compression from subdural hematoma or epidural
hematoma could be reversible if promptly diagnosed and
surgically evacuated. Therefore, the evaluation and assessment of the unconscious patient ought to proceed in
a rapid, systematic, and focused manner, sometimes
simultaneously with treatment.
When possible, history should be obtained from the patient's relatives, friends, paramedics, or police. The
onset and progression of coma sometimes could give
clues to the etiology. General physical exam looking for
signs of trauma or systemic medical illness should be
performed. Periorbital ecchymosis (raccoon eyes), drainage of clear or bloody fluid from the ears or nose, and ecchymosis over the mastoid process (Battle's sign) are all signs of trauma.
A quick neurologic exam should be performed,
assessing verbal response, eye opening, and motor
response. Brainstem reflexes such as pupillary light reflex,
oculocephalic reflex, oculovestibular reflex, and corneal
reflex should be tested. Deep tendon reflexes and skeletal
muscle tone should also be assessed.
The respiratory pattern should be noted as regular, periodic, ataxic, or a combination of these. Cheyne–Stokes
respiration is a pattern of periodic breathing with phases
of hyperpnea alternating with apnea. The depth of respiration waxes and wanes in a crescendo–decrescendo
manner. It is generally seen in patients with diffuse forebrain lesions, uremia, hepatic failure, or heart failure.
Sustained hyperventilation is sometimes seen in patients
with hepatic coma, sepsis, diabetic ketoacidosis, meningitis, or pulmonary edema. True central neurogenic
hyperventilation is rare, and may be due to midbrain or
pons lesions. Apneustic breathing is characterized by
prolonged pause at full inspiration, and it usually reflects
lesion in the pons, as seen in patients with brainstem
strokes from basilar artery occlusion. Ataxic breathing,
or irregular, gasping respiration, implies damage to the
medullary respiratory center. Cluster breathing is characterized by periods of rapid irregular respiration,
followed by apneic spells, and is indicative of a lesion in
the medulla.
Emergency laboratory tests for evaluation of coma
should include complete blood count, electrolyte panel,
coagulation studies, ammonia, arterial blood gas, cerebrospinal fluid studies, and electrocardiogram. Additional
studies, such as liver function tests, thyroid and adrenal function tests, blood culture, urine culture, and toxicology
screen should be considered.
After-care
The cost of caring for a patient in the comatose state extends beyond the acute intensive care setting. Coma is
often a transient stage; few patients remain in eyes-closed
coma for more than 10–14 days [4]. Patients ultimately
will die, recover, or transition to the vegetative state. Comatose and vegetative patients have a shortened life expectancy due to several factors, often succumbing to
respiratory or urinary tract infections, multisystem
organ failure, and respiratory failure [5].
Survival of these patients depends, to some degree,
on the quality and intensity of medical treatment and
nursing care. Proper skin care, such as frequent turning
and repositioning, helps reduce incidence of decubitus
ulcers. Daily passive range-of-motion exercises help reduce limb contractures. Tracheostomy and a percutaneous gastric feeding tube are often necessary for
maintaining airway and providing nutrition and hydration [5].
Prognosis
The prognosis for coma is variable and largely dependent
on the etiology, location, and severity of brain damage.
The Glasgow Outcome Scale (GOS) is often used to grade
the level of functional recovery from coma: Grade 5 indicates recovery to previous level of function; Grade 4
describes patients who recover with moderate disability
but remain independent; Grade 3 indicates recovery with
severe disability with dependence on others for daily support; Grade 2 indicates recovery to vegetative state, and
Grade 1 indicates no recovery.
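For quick reference, the grading just described can be written as a simple lookup (an illustrative mapping that mirrors the text above, not an official coding of the scale):

```python
# Glasgow Outcome Scale grades as described in the text above.
GOS_GRADES = {
    5: "Recovery to previous level of function",
    4: "Recovery with moderate disability but independent",
    3: "Recovery with severe disability, dependent on others for daily support",
    2: "Recovery to a vegetative state",
    1: "No recovery",
}

def independent_life(grade: int) -> bool:
    """Grades 4 and 5 correspond to an independent life, as used in outcome series."""
    return grade >= 4

print(GOS_GRADES[4], independent_life(4))
```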
In one series of 500 patients with nontraumatic
coma, 16% led an independent life at some point within
the first year (GOS grade 4 or 5), while 11% regained
consciousness but were dependent on others for activities
of daily living, 12% never improved beyond the vegetative state, and 61% died without recovery from coma
[4]. Patients who survived nontraumatic coma made
most of their recovery within the first month. Longer
duration of coma was associated with worse chance
of functional recovery. Among different disease processes, subarachnoid hemorrhage had the worst
outcome, while hepatic encephalopathy and other metabolic causes had the best. Lack of verbal response, eye
opening, motor response, pupillary light reflex, corneal
reflex, oculocephalic response, oculovestibular response,
or spontaneous eye movements were all independently
associated with lack of recovery to independent
function.
Coma arising from TBI portends a better prognosis than
nontraumatic coma [2]. A comprehensive review by
the Brain Trauma Foundation listed several factors
with class I prognostic evidence. Advanced age was predictive of poor outcome, with 56% of patients younger
than 20 and only 5% of patients older than 60 able to
achieve GOS of 4 or 5. Each lower GCS score was associated in a stepwise fashion with progressively worse
outcome. Absent pupillary light reflex or oculocephalic
response at any point in the illness predicts an outcome
of less than 4 on the GOS. Hypotension and hypoxia
were also independent predictors of poor outcome.
Abnormal neuroimaging findings such as compression
of basal cisterns or midline shift of brain structures,
indicative of elevated ICP, were predictive of poor
outcome.
While EEG is useful in identifying nonconvulsive status epilepticus in the comatose patient, it has not
been shown to be predictive of outcome. Somatosensory-evoked potentials (SSEPs), on the other hand, are
a better predictor. In several studies, bilateral absence
of cortical SSEPs predicted death or vegetative state in
almost all patients [2].
Cross-References
▶ Encephalopathy and Delirium
References
1. Young GB, Pigott SE (1999) Neurobiological basis of consciousness. Arch Neurol 56:153–157
2. Posner JB, Saper CB et al (2007) Plum and Posner's diagnosis of stupor and coma. Contemporary Neurology Series 71, 4th edn. Oxford University Press, New York
3. Stevens RD, Bhardwaj A (2006) Approach to the comatose patient. Crit Care Med 34(1):31–41
4. Levy DE, Bates D, Caronna JJ et al (1981) Prognosis of nontraumatic coma. Ann Intern Med 94:293–301
5. The Multi-Society Task Force on PVS (1994) Medical aspects of the persistent vegetative state – second of two parts. N Engl J Med 330:1572–1579
Coma dépassé
▶ Brain Death
▶ Death by Neurologic Criteria
Community-Acquired Pneumonia
(CAP)
▶ Burns, Pneumonia
▶ Pneumonia, Empiric Management

Compliance
Ratio between the change in volume determined by a change in pressure (Crs = ΔV/ΔP), depending on the elastic properties of the respiratory system.

Complicated Intra-abdominal Infections
▶ Abdominal Cavity Infections

Complicated Parapneumonic Effusion
Fluid in the pleural space that does not resolve spontaneously with treatment of the underlying infection, and requires drainage with therapeutic thoracentesis or placement of a chest tube.
▶ Empyema

Computed Tomography
▶ Imaging for Acute Abdominal Pain

Confirmed STSS
Clinical case definition + isolation of GAS from a normally sterile site.

Confusion
▶ Septic Encephalopathy

Congenital Heart Disease in Children
JONATHAN R. EGAN, MARINO S. FESTA
The Children's Hospital at Westmead, Westmead, Australia

Synonyms
Cardiac disease; Heart disease

Definition
Malformation of the heart present at birth.

Characteristics
It is estimated that 4–10 liveborn infants per 1,000 are
diagnosed with congenital heart disease (CHD), with
approximately 40% diagnosed in the first year of life and
the remainder some time in childhood or adulthood [1].
The majority of lesions are amenable to surgical repair or
palliation and it has been estimated that the prevalence of
adults with CHD in the USA is increasing by approximately 5% per annum [2].
CHD may present in the newborn period or later in
childhood, usually with heart failure, central cyanosis,
episodic collapse, or as an incidental finding of a heart
murmur:
Heart Failure
Tachypnea, worse with exertion or feeding in an infant, is a
common sign of heart failure in CHD. This is most common in conditions that allow blood to shunt from left to
right (i.e., from the systemic to pulmonary circulation), or
in conditions that obstruct flow through the heart at the level of the valves, pulmonary veins, or either ventricular outflow tract, causing pulmonary venous congestion.
In the infant, sweating with feeds and hepatomegaly
are commonly seen, and though dependent edema may
occur, pitting edema of the peripheries is much less common than in adults. Severe heart failure may manifest at
the time of spontaneous closure of the patent ductus
arteriosus at or around 1 week of age in a previously
asymptomatic neonate with an obstructive lesion of the
left heart such as critical aortic stenosis, hypoplastic left
heart syndrome (HLHS), interrupted aortic arch, or severe
coarctation of the aorta.
In later infancy and early childhood, undiagnosed
CHD leading to increased pulmonary blood flow or
obstruction to pulmonary venous drainage (e.g., ventricular septal defect, atrioventricular septal defect, patent
ductus arteriosus, anomalous pulmonary venous drainage) may cause chronic heart failure leading to impaired
growth and failure to thrive.
A chest X-ray will usually show cardiomegaly and
increased pulmonary vascular markings, with the notable
exception of obstructed total anomalous pulmonary
venous drainage of the infradiaphragmatic type where
marked pulmonary congestion is present in the absence
of cardiomegaly.
The normal fall in pulmonary vascular resistance in
the postnatal period may lead to increasing left-to-right
shunt and pulmonary blood flow in the first week of life.
Similarly, increased inspired oxygen or respiratory alkalosis may both decrease pulmonary vascular resistance and
lead to worsening heart failure.
Central Cyanosis
Cyanosis of the lips and tongue in the newborn infant is
indicative of desaturation of arterial hemoglobin and may
be readily identified by direct comparison with the
mother. Typically the baby with cyanotic congenital
heart disease looks otherwise well with little or no respiratory difficulty. An arterial blood pO2 sampled in
maximal inspired oxygen will not exceed 100 mmHg.
The most common cause of cyanotic heart disease in
the newborn period is transposition of the great arteries
where separation of the pulmonary and systemic circulation requires mixing either at the level of the atrium via the
patent foramen ovale, or via a patent ductus arteriosus or
sometimes via a ventricular septal defect, to allow oxygenated blood to reach the systemic circulation.
Other CHD resulting in decreased pulmonary blood
flow (e.g., pulmonary atresia with an intact ventricular
septum, tetralogy of Fallot, tricuspid atresia, Ebstein’s
anomaly) or anomalous pulmonary venous return to the right atrium (e.g., total anomalous pulmonary venous
drainage with or without obstruction to the pulmonary
venous blood flow) may present with central cyanosis
commonly in the newborn period, or later in infancy or
childhood.
Incidental Murmur
It remains common for less severe forms of CHD to be
diagnosed in childhood by detection of a significant murmur on routine or coincidental examination.
Significant murmurs may be caused by turbulent flow
across abnormal structures (e.g., patent ductus arteriosus,
pulmonary stenosis) or connections (e.g., ventricular septal defect) or due to increased flow across normal structures (e.g., atrial septal defect causing left-to-right shunt
and a pulmonary flow murmur).
Episodic Collapse
Infants and children may have episodes of extreme tachycardia or bradycardia associated with alteration of consciousness, pallor, and sometimes collapse. Signs of heart
failure may initially be absent, especially in infants, and
develop over several hours if the abnormal rhythm
persists.
Supraventricular tachycardia is a rapid tachyarrhythmia with a rate usually over 220 bpm that may present de
novo in infants and older children. The electrocardiograph
(ECG) is characterized by the absence of P waves and an
absolutely regular R–R interval. Up to a third of cases
have an underlying structural heart abnormality and
a proportion of the remainder have a short PR interval
with a delta wave on the ECG during normal sinus rhythm, implying pre-excitation via an accessory pathway (Wolff–Parkinson–White syndrome).
Recurrent ventricular tachycardia is a recognized cause
of sudden death in childhood and may be associated with
a family history of sudden death. Prolonged QT syndromes (e.g., Romano–Ward syndrome or, if associated with sensorineural deafness, Jervell and Lange-Nielsen syndrome), due to inherited defects in myocardial potassium channel function that delay repolarization, typically present with episodes of torsades de pointes causing sudden
collapse or death. Inherited defects of myocardial sodium
channels (e.g., Brugada syndrome) may present with sudden onset of ventricular arrhythmia in older children and
young adults [3].
Congenital complete heart block may present with
heart failure in the prenatal (hydrops fetalis) or postnatal
period and may be associated with maternal anticardiolipin
syndrome.
Management
Fetal Screening
An effective antenatal screening program may help
improve prenatal detection of life-threatening CHD. This
allows parents the choice of termination of pregnancy and offers the chance of improved outcomes through careful planning of perinatal care and avoidance of unanticipated postnatal collapse [4].
Postnatal Stabilization
Stabilization of the newborn with CHD depends on
detailed knowledge of the cardiac anatomy and assessment
of the changing physiology in the postnatal period. This
can only be achieved by a multidisciplinary approach to
care. Transthoracic echocardiography with Doppler measurement and color flow mapping is essential to confirm
diagnosis based on prenatal ultrasound or clinical examination and to rule out additional lesions. Cardiac catheterization is usually reserved for complex cases or if an
interventional procedure is indicated.
Full history and examination for associated abnormalities or syndromes should be undertaken. A systematic
approach to stabilization of the airway and breathing
is required in the initial postnatal period prior to
cardiac assessment. This may include intubation and ventilation to normalize lung volumes and reduce left ventricular wall stress in some cases. In cases of suspected
or known duct-dependent CHD (e.g., transposition of the great arteries with an intact ventricular septum, hypoplastic left heart syndrome, interrupted aortic arch), the neonate should be commenced on an intravenous infusion of epoprostenol (prostacyclin) in order to maintain ductal patency (usual dose 5–25 ng/kg/min).
Care should be taken to ensure that the baby maintains
an adequate preload after starting the infusion as systemic vascular resistance and cardiac filling pressures
are likely to fall. The self-ventilating neonate should also
be closely observed for apnea at this time as this is known
to be associated with commencement of epoprostenol
infusion.
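To make the quoted dose range concrete, the following sketch (illustrative arithmetic only, not a prescribing or infusion-rate tool; the 3.5-kg weight is an arbitrary example) converts a ng/kg/min dose into an hourly dose for a given neonatal weight.

```python
# Illustrative arithmetic only, based on the dose range quoted above (5-25 ng/kg/min).
def dose_ng_per_min(weight_kg: float, dose_ng_per_kg_min: float) -> float:
    """Total dose delivered per minute for a weight-based infusion."""
    return weight_kg * dose_ng_per_kg_min

def dose_microg_per_h(weight_kg: float, dose_ng_per_kg_min: float) -> float:
    """Convert the per-minute dose to micrograms per hour."""
    return dose_ng_per_min(weight_kg, dose_ng_per_kg_min) * 60 / 1000.0

# Example: a 3.5-kg neonate at 10 ng/kg/min receives 35 ng/min, i.e., 2.1 micrograms/h.
print(dose_microg_per_h(3.5, 10))
```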
Babies with single ventricle anatomy and physiology,
in which a single ventricle effectively supplies pulmonary
and systemic blood flow (e.g., hypoplastic left heart syndrome, pulmonary atresia) are sensitive to changes to
systemic and pulmonary vascular resistance, which will
influence the relative flow to the two circuits. Hence, in
addition to maintaining good cardiac output by attention
to adequate preload and myocardial contractility, manipulation of factors to influence the vascular resistance in the
systemic and pulmonary circulations should be used to
allow adequate systemic blood flow. Avoidance of noxious
stimuli, maintenance of normothermia and appropriate
analgesia are important, in addition to pharmacological
manipulation by systemic vasodilators, in order to avoid
a situation of increased systemic vascular resistance leading to excess pulmonary flow and decreased systemic
oxygen delivery. This situation may be exacerbated by
the normal fall in pulmonary vascular resistance in the
postnatal period, or by use of high inspired oxygen or
hyperventilation, both of which should be avoided.
Babies born with separated pulmonary and systemic
circulations (e.g., transposition of the great arteries) are
dependent on communications between the atria, ventricles, or at the level of the patent ductus arteriosus to allow
adequate mixing of oxygenated and desaturated blood.
This may need to be augmented at the level of the atrial
connection by a balloon atrial septostomy following femoral or umbilical vein catheterization soon after birth in
babies where low systemic arterial oxygen saturation of
hemoglobin (usually below 70–75%) is significantly contributing to decreased systemic oxygen delivery. Preductal pulse oximetry saturations should be monitored,
usually in the right hand, in order to monitor saturation
of blood reaching the brain and myocardium.
Cardiac Surgery and Cardiopulmonary
Bypass
Surgery, if required, may be in the neonatal period, or
early or late childhood. The aim of surgery may be corrective or palliative, and surgery may need to be
conducted on more than one occasion in a staged
approach. In a significant proportion of cases, cardiopulmonary bypass (CPB) and some degree of cooling is
required to allow adequate oxygenation of vital organs
during surgery, which requires an empty heart and usually
a period of cardiac standstill. Occasionally, a short period
of low-flow CPB or of complete hypothermic circulatory
arrest may be required to allow a relatively bloodless field
during complex surgery on the aorta. Clearly, this is a time
of high risk with potential for embolic or ischemic damage
and the prospect of disturbed physiology in the postoperative period following myocardial and end-organ reperfusion.
Advances in the care of newborns with CHD have
meant that neonatal reparative surgery is increasingly
possible. Though complex and challenging, early repair
offers significant advantages. These include early elimination of cyanosis and of congestive heart failure, optimal
circulation for growth and development, and reduced
anatomic distortion from palliative procedures.
Palliative cardiac surgery remains the only option in
infants with an anatomical single ventricle (e.g., hypoplastic left heart syndrome). This usually requires a threestaged approach. Firstly, pulmonary blood flow is secured
via a systemic to pulmonary arterial shunt. Later, in
infants without elevated pulmonary vascular resistance
and with adequate atrioventricular valve and diastolic
ventricular function, a cavopulmonary anastomosis is created so that systemic venous return is directed into
the pulmonary arteries. This is done in two stages, firstly
by directing return from the superior vena cava to the
pulmonary arteries via a bidirectional cavopulmonary
(Glenn) anastomosis and later by the additional redirection of the inferior vena cava flow, either via a lateral
tunnel through the atrium or via an extra-cardiac conduit,
to create a complete cavopulmonary (Fontan) circulation.
Rearrangement of the systemic and pulmonary circulation
to operate “in series” in this way leads to correction of
cyanosis. However, given the paucity of long-term outcome data, total cavopulmonary circulation remains
viewed as a palliative rather than a curative procedure.
Care of the postoperative cardiac surgical patient is
complex and requires knowledge of the underlying anatomy and physiology and details of the surgery and
intraoperative course. A progressive low cardiac output
state, not attributable to any residual or undiagnosed
cardiac lesion, which reaches its nadir usually by 12-h
postoperatively, occurs in a significant proportion of
patients [5]. This is managed with mechanical ventilation, pharmacological support to optimize myocardial function and pulmonary and systemic afterload, and supportive intensive care therapy.
Lesions with increased pulmonary blood flow or
increased pulmonary venous pressure may predispose to
increased postoperative pulmonary artery pressures and
increased reactivity of the pulmonary vasculature in the
postoperative period, necessitating the use of inhaled nitric
oxide as a selective pulmonary vasodilator in some cases.
A small number of patients require extracorporeal
membrane oxygenation (ECMO) support to allow myocardial rest and adequate time for recovery following
cardiac surgery.
Early postoperative extubation may be of benefit in
some patients, particularly in those with cavopulmonary
anastomosis, and should be considered in any patient
known to have had a smooth intraoperative course and
without signs of excessive bleeding, hypoxemia, or low
cardiac output state in the early postoperative period.
After-care
Medical management usually involves diuretic therapy
with or without ACE inhibitors in the weeks and months
following surgery. Regular assessment for late surgical
complications including wound infection, chylothorax,
postcardiotomy immune pericarditis (Dressler’s syndrome), and for residual lesions is undertaken before
gradual tapering of medical follow-up. In the case of
more complex lesions where further operative interventions or transplant may be required, ongoing follow-up
through to adulthood is mandatory and these patients
should be transitioned to adult congenital heart disease
programs. In addition to echocardiographic and cardiac
catheter assessments, cardiac magnetic resonance imaging
may also be useful in the assessment of cardiac function in
some patients.
In infants following Stage 1 palliation of HLHS, significant interstage mortality may be reduced by careful
monitoring of saturations and weight gain either in hospital or at home.
Orthotopic heart transplantation should be considered in infants and children with severe intractable forms
of CHD.
Prognosis
CHD is responsible for more deaths in the first year of life than any other birth defect. While most CHD occurs as
an isolated congenital malformation, CHD is more common in several genetic conditions, including Trisomy 21
(Down syndrome), Noonan syndrome, Marfan syndrome,
Trisomy 13 (Patau syndrome), and DiGeorge syndrome,
and prognosis depends on the type of CHD, as well as any
underlying condition.
Advances in perfusion practice, surgical techniques,
and postoperative care have all led to overall decreased
perioperative mortality. Long-term morbidity, including
abnormal neurodevelopmental outcomes, particularly in
patients with single ventricle physiology or following
prolonged postoperative recovery, has been noted and is
the topic of ongoing research.
References
1. Hoffman JI (1990) Congenital heart disease: incidence and inheritance. Pediatr Clin North Am 37:25–43
2. Brickner ME, Hillis LD, Lange RA (2000) Congenital heart disease in adults: first of two parts. N Engl J Med 342:256–263
3. Towbin JA (2004) Molecular genetic basis of sudden cardiac death. Pediatr Clin North Am 51(5):1229–1255
4. Khoshnood B, De Vigan C, Vodovar V, Goujard J, Lhomme A, Bonnet D, Goffinet F (2005) Trends in prenatal diagnosis, pregnancy termination, and perinatal mortality of newborns with congenital heart disease in France, 1983–2000: a population based evaluation. Pediatrics 115(1):95–101
5. Wernovsky G, Wypij D, Jonas RA, Mayer JE Jr, Hanley FL, Hickey PR, Walsh AZ, Chang AC, Castaneda AR, Newburger JW, Wessel DL (1995) Postoperative course and hemodynamic profile after the arterial switch operation in neonates and infants. Circulation 92(8):2226–2235
6. Nugent AW, Daubeney PE, Chondros P, Carlin JB, Cheung M, Wilkinson LC, Davis AM, Kahler SG, Chow CW, Wilkinson JL, Weintraub RG (2003) The epidemiology of childhood cardiomyopathy in Australia. N Engl J Med 348(17):1639–1646
Congestive Heart Failure
▶ Heart Failure, Biomarkers
▶ Heart Failure Syndromes, Treatment
Conscious Sedation
JOHN H. BURTON
Department of Emergency Medicine, Carilion Clinic
Virginia Tech Carilion School of Medicine,
Roanoke, VA, USA
Synonyms
Deep sedation; Procedural sedation; Sedation
Definition
The phrase “conscious sedation” has historically been
applied to the administration of sedative or analgesic
medications for suppression of a patient’s level of consciousness in preparation for, and during, a painful or
anxiety-provoking medical procedure.
Conscious sedation as applied to many modern procedures is a misnomer, particularly in the intensive care
unit (ICU) or emergency department (ED) setting. In
these practice environments, a depth of patient relaxation
and sedation well below “conscious” is frequently
intended. Many providers attempt to be more descriptive
in the depth of intended sedation by adding the descriptors “mild,” “moderate,” or “deep” for any encounter.
Others have been proponents for the terms “procedural
sedation” or “procedural sedation and analgesia” in an
attempt to emphasize a depth of sedation and analgesia
that will be consistent with the one best suited for the
intended procedure.
Regardless of the terminology used, the practice of
conscious sedation is an essential component of sedation
and/or analgesia for many procedural interventions. The
proper use of conscious sedation will confer significant
benefits to both the patient and the medical provider. For
patients, relief of pain, anxiety, and amnesia to the procedure event are obvious desirable outcomes. Similarly,
more relaxed and comfortable patients will translate to
an improved experience for medical providers with
enhanced patient safety, improved procedure success,
and less angst over the pain and suffering inflicted on the patient in the course of the medical procedure [1].
Depth of Conscious Sedation
The depth of intended patient sedation and relaxation can
be broadly characterized as mild, moderate, and deep
levels of suppressed consciousness. These categorizations
exist along a broad spectrum for the depth of patient
sedation intended for the procedure. A state of general
anesthesia completes the spectrum and describes a depth
of sedation characterized by unresponsiveness to all stimuli and the absence of airway protective reflexes.
Minimal sedation typically describes a patient with
a near-baseline level of alertness. This level of sedation
does not impair the ability to follow commands or
respond to verbal stimuli. Under a state of minimal sedation, cardiovascular and ventilatory functions are not
threatened or impaired.
Moderate sedation describes a depth of consciousness
characterized by many or all of the following: eyelid ptosis,
slurred speech, and delayed or altered responses to verbal
stimuli. Event amnesia will frequently occur under moderate sedation levels. The patient airway should be minimally threatened by apnea or ventilatory suppression
under moderate sedation depths. Similarly, while the likelihood of cardiovascular embarrassment is small, monitoring of cardiovascular status is appropriate for changes
in patient oxygenation, blood pressure, and heart rate.
Deep sedation renders the patient unresponsive to most verbal commands, with preservation of airway protective reflexes and of responsiveness to noxious, painful stimuli. Event amnesia is typical of deep levels of sedation.
Monitoring for deep sedation encounters should emphasize the significant potential for reduction in ventilation
and cardiovascular complications including changes to
heart rate, heart rhythm, and blood pressure. The potential for apnea should also prompt the consideration for
more sensitive ventilation monitoring techniques, including exhaled, end-tidal carbon dioxide levels.
Pre-existing Condition
Minimal, moderate, and deep sedation have all been
described in the medical literature for conditions that
invoke pain, anxiety, and complex medical procedures
that may require minimal patient movement and optimized muscle relaxation.
In the ICU setting, conscious sedation should be distinguished conceptually from continuous sedation. The
former would be employed toward procedures or events
requiring sedation or relaxation, while the latter would
imply the use of sedative agents for continuous sedation
for patient comfort during periods of mechanical ventilation or to supplement ongoing medical treatment and
stabilization. For example, an intubated ICU patient may
be treated with a propofol infusion for continuous sedation. This patient may require a procedure, such as tube
thoracostomy, that may provoke consideration of a plan
for increased sedation and/or analgesia to address the pain
associated with this procedure. In most other settings,
including emergency or gastroenterology procedures, for
example, the likelihood that the patient will be under any
form of continuous sedation is much smaller and therefore, a treatment plan for conscious sedation will be initiated from a normal level of patient consciousness.
Conscious Sedation Procedures
Common procedures in which conscious sedation will be
utilized in the ICU or emergency setting are listed in
Table 1. Procedures such as electrical cardioversion or
sedation for radiological imaging may be viewed as events
where the addition of an analgesic agent is of limited
benefit given the limited amount or complete absence of
pain prior to or following the procedure. In these events,
the conscious sedation plan may be simplified to emphasize a sedation strategy with minimal or no analgesic
considerations.
Procedures such as orthopedic fracture or dislocation
reduction are typical of encounters where both patient
relaxation and analgesia should be considered in the sedation plan. These patients will have analgesic requirements
prior to, during, and following the treatment procedure.
These patients should have a conscious sedation plan that
incorporates a baseline analgesic treatment plan in addition to the planned sedation.
Pre-sedation Considerations
Preexisting medical illnesses should be considered in the
formulation of any conscious sedation treatment plan.
Acute or chronic illnesses may place a patient at elevated risk for adverse events during conscious sedation,
specifically cardiovascular or ventilatory embarrassment.
The contemplation of the use of sedation or analgesic
agents should then incorporate these risks into both the
decision to use conscious sedation and the selection of specific treatment agents. Conditions such as hemorrhagic
shock or sepsis may confer a significant degree of cardiovascular instability or risk with conscious sedation. Similarly, traumatic facial injuries or morbid obesity may
render challenges to assisted ventilation in the case of
respiratory suppression. At a minimum, preparatory considerations prior to conscious sedation should include
a history of present illness, past medical history, and
focused physical examination directed toward airway
and cardiovascular assessment.
The oral intake of fluids or solids prior to sedation,
NPO status, remains a subject of debate among physicians
caring for conscious sedation patients [2]. The briefer periods of suppressed consciousness and lighter depths of sedation used during conscious sedation make analogies to the operating room experience and its NPO requirements of limited value. There have been
exceptionally few reports in the medical literature of
adverse outcomes related to NPO status for conscious
sedation patients. Additionally, there are many large series
of patients undergoing deep sedation with no aspiration or other complications related to ingested solids or fluids. These
observations further support the position that the application of operative patient anesthesia principles is of limited utility to the typical conscious sedation patient.
Finally, the emergent or critical nature of many procedures
in the emergency or ICU setting prompts consideration of
a risk/benefit paradigm for any patient requiring a medical
procedure and conscious sedation. In summary, the
risks of aspiration or obstruction from recent solid or fluid
intake must be balanced with the benefits derived from an
immediate or timelier sedation intervention [3].
Conscious Sedation. Table 1 Common procedures in the ICU or emergency setting where conscious sedation should be considered

ICU
Chest tube thoracostomy
Abscess incision and debridement
Ventriculostomy placement
Central venous or arterial line placement
Complex wound management, e.g., burn wound care

Emergency
Orthopedic fracture or dislocation reduction
Complex laceration repair
Abscess incision and debridement
Foreign body removal
Central venous line placement
Electrical cardioversion
Lumbar puncture
Radiological imaging
Application
Planned Depth of Sedation and Procedure
Minimal or light conscious sedation is usually performed
for procedures that are less painful, particularly with the use
of local anesthesia, and require light levels of patient relaxation. Typical light sedation encounters include procedures
such as lumbar puncture, radiological studies, simple fracture reductions in combination with local anesthesia, and
abscess incision and drainage. Agents and combinations
typically utilized for light sedation include fentanyl,
midazolam, and low-dose ketamine (Tables 2 and 3).
Moderate and deep conscious sedation is usually
performed for procedures that require greater degrees of
patient relaxation. These procedures often have greater associated levels of pain and anxiety. Common moderate or deep sedation encounters include procedures such as complex orthopedic fracture or dislocation reductions, tube thoracostomy, and more complex wound and debridement procedures including burn dressing changes or large abscess incision and drainage. Agents utilized for moderate or deep sedation include higher dose ketamine, etomidate, methohexital, and propofol as single agents or in combination with an analgesic agent (Table 3).

Conscious Sedation. Table 2 Agents commonly utilized for conscious sedation

Analgesia agents
Fentanyl
Morphine sulfate
Hydromorphone

Sedation agents
Benzodiazepines, e.g., midazolam
Barbiturates, e.g., methohexital
Propofol
Etomidate
Ketamine (a)

(a) Ketamine has both analgesic and sedation properties

Conscious Sedation. Table 3 Common agents, dosing, and depth of sedation associated with each agent for patient conscious sedation

Agent        | Initial dose (mg/kg) | Repeat dose (a) (mg/kg) | Depth of sedation
Midazolam    | 0.03                 | 0.03                    | Titrate to desired depth
Etomidate    | 0.15–0.20            | 0.1                     | Deep sedation only
Propofol     | 0.5–1.0              | 0.5                     | Deep sedation only
Methohexital | 1.0                  | 0.5                     | Deep sedation only
Ketamine     | 1.0                  | 0.5                     | Moderate and deep sedation

(a) Some providers may prefer a continuous drip infusion to a repeat-bolus dosing strategy
Monitoring the depth of conscious sedation is best
performed with the use and documentation of
a standardized sedation assessment scale. Examples of
this include the Ramsay Scale (Table 4) or the modified
Aldrete-Parr Scale. Each of these scales, and other similar
patient assessment tools, utilizes a standard set of predicted impairment assessments in a number of body systems or
categories. Given that the most clinically relevant complications associated with conscious sedation encounters are
adverse respiratory events, patient depth of sedation monitoring should emphasize respiratory assessment in addition to depth of awareness.

Conscious Sedation. Table 4 Ramsay sedation scale

Score | Responsiveness
1 | Patient is anxious and agitated or restless, or both
2 | Patient is cooperative, oriented, and tranquil
3 | Patient responds to commands only
4 | Patient exhibits brisk response to light glabellar tap or loud auditory stimulus
5 | Patient exhibits a sluggish response to light glabellar tap or loud auditory stimulus
6 | Patient exhibits no response
Selection of Conscious Sedation Agents
With the exception of ketamine, the most substantial
pharmacologic effects of sedation medications impact
patient levels of consciousness with minimal to no analgesic effects [4]. Given that the majority of sedation procedures will involve pain, most conscious sedation
encounters should incorporate an analgesic approach to
augment the planned sedation depth.
The dosing of analgesic and sedative agents should be
standardized in a weight-based fashion. Selection of
a specific analgesic, sedative, or combination should take
into consideration the patient’s prior experience with
sedation as well as the desired duration of clinical effects.
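As a simple illustration of the weight-based approach described above (illustrative only, not a dosing reference; the mg/kg ranges are those listed in Table 3 and the 80-kg weight is an arbitrary example):

```python
# Illustrative sketch only -- not for clinical use.
# Initial bolus ranges (mg/kg) as listed in Table 3 of this entry.
INITIAL_DOSE_MG_PER_KG = {
    "midazolam": (0.03, 0.03),
    "etomidate": (0.15, 0.20),
    "propofol": (0.5, 1.0),
    "methohexital": (1.0, 1.0),
    "ketamine": (1.0, 1.0),
}

def initial_bolus_mg(agent: str, weight_kg: float):
    """Return the (low, high) initial bolus in mg for a weight-based regimen."""
    low, high = INITIAL_DOSE_MG_PER_KG[agent.lower()]
    return low * weight_kg, high * weight_kg

# Example: propofol for an 80-kg adult -> (40.0, 80.0) mg initial bolus
print(initial_bolus_mg("propofol", 80.0))
```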
The use of short-acting agents such as propofol and
etomidate has gained widespread acceptance. Brief-acting sedative agents confer shorter periods of impaired
levels of consciousness and subsequently less risk for
adverse respiratory events. An additional benefit to
shorter periods of impaired consciousness is a reduced monitoring time, which lessens the demand for intensive patient monitoring by medical staff.
Conscious sedation agents are typically dosed in
weight-based bolus increments in the emergency setting
(Table 3). In the ICU setting, the use of continuous infusions following an initial bolus is more commonplace
given the frequent use of continuous drip infusions by
these providers. Patients who require longer periods of
analgesia, such as those with fractures, will benefit from
strategies emphasizing longer-acting analgesic agents,
such as morphine or hydromorphone, coordinated with
sedative dosing.
A combination of agents is a common practice for
conscious sedation agent selection. The combination of
midazolam and fentanyl has historically been a strategy
used in many settings. Recently, the combination of ketamine and propofol (“ketofol”) has gained a degree of
interest. This combination, typically with bolus dosages
less than those employed with the use of propofol or
ketamine alone, 0.5–0.75 mg/kg for each agent, has been
argued to ameliorate the adverse risks associated with
ketamine or propofol alone while also capitalizing on the
benefits of each drug: a risk/benefit balance for each agent
in combination.
There remains a great deal of variation in the selection
of sedation and dosing regimens for conscious sedation
between medical providers and medical settings. Provider
experience as well as institution or medical consultant
preferences may substantially influence individual
approaches. A great deal of research has been performed
addressing comparative considerations for agent selection,
dosing, and patient procedures in conscious sedation. Any institutional or medical provider approach
toward conscious sedation should be built upon
a foundation derived from the extensive findings in the
medical literature.
References
1. Miner JR, Burton JH (2007) Clinical practice advisory: emergency department procedural sedation with propofol. Ann Emerg Med 50:182–187
2. Green SM, Roback MG, Miner JR, Burton JH, Krauss B (2007) Fasting and emergency department procedural sedation and analgesia: a consensus-based clinical practice advisory. Ann Emerg Med 49:454–461
3. Miner JR, Martel ML, Meyer M, Reardon R, Biros MH (2005) Procedural sedation of critically ill patients in the emergency department. Acad Emerg Med 12(2):124–128
4. American Society of Anesthesiologists (2002) Task force on sedation and analgesia by non-anesthesiologists: practice guidelines for sedation and analgesia by non-anesthesiologists. Anesthesiology 96:1004–1017
Consumption Coagulopathy
▶ Disseminated Intravascular Coagulation
Contact Precautions
A set of practices used to prevent patient-to-patient transmission of infectious agents that are spread by direct or
indirect contact with the patient. Health-care workers
caring for patients on contact precautions wear a gown
and gloves for interactions that may involve contact with
the patient or patient’s environment. In addition, patients
are placed in a single room or shared room with other
patients on contact precautions for the same indication.
Continuous Arterio-venous
Hemofiltration (CAVHF)
▶ Hemofiltration in the ICU
Continuous Cardiac Output (CCO)
▶ Cardiac Output, Measurements
Continuous Hemodialysis
(CVVHD)
▶ Hemofiltration in the ICU
Continuous Positive Airway
Pressure (CPAP)
▶ Noninvasive Ventilation
Continuous Renal Replacement
Therapy (CRRT)
▶ Hemofiltration in the ICU
Continuous Veno-venous
Hemodiafiltration (CVVHDF)
▶ Hemofiltration in the ICU
Recently the Acute Kidney Injury Network (AKIN)
proposed a consensus definition where AKI is defined as
an increase of 0.3 mg/dL or 50% or greater occurring
within a 48 h time period [4].
Treatment
Continuous Veno-venous
Hemofiltration (CVVHF)
▶ Hemofiltration in the ICU
The treatment of established CI-AKI is not different from
other types of AKI and consists of prevention of hypotension and hypovolemia and stop administration of potential nephrotoxic agents. For a more detailed discussion on
the treatment of AKI we refer to the specific chapters
on this in this textbook.
Prevention
Contrast Medium-Induced
Nephropathy
▶ Contrast Nephropathy
Contrast Nephropathy
ERIC A. J. HOSTE
Department of Internal Medicine, Ghent University
Hospital, Ghent, Belgium
Established risk factors for development of CI-AKI
include an estimated glomerular filtration rate (eGFR)
<60 mL/min/1.73 m2, diabetes mellitus, volume depletion, nephrotoxic drugs, anemia, and hemodynamic instability [5]. ICU patients have often one or more of these
risk factors, and are therefore at greater risk for development of CI-AKI. Also intra-arterial administration of
radio contrast medium, high volume of contrast medium,
and contrast medium with high osmolality are associated
with higher risk for CI-AKI.
Preventive measures for CI-AKI can be categorized
into four groups: withdrawal of nephrotoxic drugs, volume
expansion, pharmacologic therapies, and hemofiltration
or hemodialysis. We will discuss these in detail.
Withdrawal of Nephrotoxic Drugs
Synonyms
Contrast medium-induced nephropathy; Contrastassociated acute kidney injury; Contrast-induced
nephropathy
All nephrotoxic drugs should be withdrawn >24 h before
contrast administration in patients at risk for CI-AKI
(GFR<60 mL/min) [5].
Volume Expansion
Definition
Several definitions for contrast-induced acute kidney
injury (CI-AKI) have been used in medical literature. CIAKI is typically defined as an increase of serum creatinine
of 0.5 mg/dL or 25% or more within 2 days following
contrast medium administration [3]. Multiple variations
on this definition are used: some use only the absolute
increase and others only the relative increase of serum
creatinine, the observation period may be increased up
to 5 days, and some use the more specific cut off of an
absolute increase of 1 mg/dL. The European Society of
Urogenital Radiology defines CI-AKI by an increase of
serum creatinine of 0.5 mg/dL or 25% or greater within
3 days following intravascular administration of radio
contrast medium, without an alternative etiology.
Volume expansion with crystalloids at a rate of 1–1.5 mL/kg/h, started 1–12 h before the procedure and continued for 6–12 h afterwards, has an established role in reducing the risk for CI-AKI. In one trial, isotonic saline 0.9% was superior to half-isotonic saline 0.45% for prevention of CI-AKI. Isotonic sodium bicarbonate (3 mL/kg/h for 1 h before the procedure and 1 mL/kg/h for 6 h after the procedure) was superior to isotonic saline for prevention of CI-AKI in a number of smaller studies and in meta-analyses [1], although the number of patients studied and the heterogeneity of the studies preclude a firm conclusion.
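Purely to illustrate the arithmetic implied by these regimens, a minimal sketch is given below; the choice of 1 mL/kg/h saline for 12 h before and 12 h after the procedure is an assumption within the quoted ranges, and the helper name is hypothetical.

```python
# Illustrative only: approximate total infusion volumes implied by the regimens
# quoted above. Assumes saline at 1 mL/kg/h for 12 h pre- and 12 h post-procedure,
# and bicarbonate at 3 mL/kg/h for 1 h pre- and 1 mL/kg/h for 6 h post-procedure.

def hydration_volumes_ml(weight_kg: float) -> dict:
    """Return approximate total crystalloid volumes (mL) for the two regimens."""
    saline_ml = 1.0 * weight_kg * 12 + 1.0 * weight_kg * 12        # pre + post
    bicarbonate_ml = 3.0 * weight_kg * 1 + 1.0 * weight_kg * 6     # pre + post
    return {"isotonic_saline_mL": saline_ml, "isotonic_bicarbonate_mL": bicarbonate_ml}

print(hydration_volumes_ml(70))
# {'isotonic_saline_mL': 1680.0, 'isotonic_bicarbonate_mL': 630.0}
```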
Pharmacological Therapy
No adjunct pharmacological therapy to date has been
proven efficacious for reducing the risk for CI-AKI [5].
The CIN Consensus Working Panel has divided the
drugs that have been evaluated into three categories based
on their results [5].
Positive Results
These drugs are potentially beneficial, but need further
evaluation.
● Theophylline/aminophylline
These adenosine antagonists block the potent intrarenal vasoconstrictor adenosine, which is also a mediator of tubulo-glomerular feedback. A meta-analysis including seven trials and 480 patients demonstrated a significant decline in serum creatinine after contrast administration.
● Statins
Retrospective data from large databases demonstrated that patients who were treated with statins
had a lower incidence of CI-AKI. This may be explained by the beneficial effects of statins on endothelial function, maintenance of nitric oxide production, and reduction of oxidative stress. A prospective randomized study published after the recommendations,
in 304 patients undergoing coronary angiography,
could not demonstrate a beneficial effect when 80 mg
atorvastatin was administered daily, 48 h before and
after the contrast procedure.
● Ascorbic acid
A small prospective randomized study in 231
patients undergoing cardiac catheterization demonstrated a lower incidence for patients treated with
oral ascorbic acid (3 g before and two times 2 g after
the procedure).
● Prostaglandin E1
Two small studies including 130 and 125 patients
found that the vasodilator prostaglandin E1 and its
synthetic analogue misoprostol were effective in
reducing the risk for CI-AKI.
Neutral
● N-acetylcysteine (NAC)
Although NAC is often administered for prevention
of CI-AKI, the evidence supporting its use is weak. More than 27 prospective randomized studies and meta-analyses have found conflicting results regarding the potential beneficial effects of NAC on CI-AKI. The majority of studies were in patients undergoing non-coronary or coronary angiography with intra-arterial administration of contrast medium. The studies were heterogeneous: several dosing regimens were evaluated, in different cohorts, and different outcomes were assessed.
A study in volunteers suggested that the beneficial
effects of NAC could be attributed to an effect on serum
creatinine concentration, and not on glomerular filtration rate. However, recent data could not confirm this.
● Fenoldopam/dopamine
Three small studies and one uncontrolled study
suggested that renal dose dopamine could prevent
CI-AKI. This could not be confirmed in a prospective
randomized study.
Fenoldopam, a selective dopamine-A1 receptor
agonist, was beneficial in several uncontrolled studies,
but not in two prospective randomized studies.
● Calcium channel blockers
Several small studies evaluated the effects of
amlodipine, nifedipine, nitrendipine, and felodipine
on risk for CI-AKI, but found no consistent effect.
● Atrial natriuretic peptide (ANP)
Two small studies could not demonstrate
a beneficial effect of ANP on the occurrence of CI-AKI.
Negative Effects
● Furosemide, mannitol, and dual endothelin receptor
antagonist
These drugs were evaluated in small studies with
conflicting and negative results on prevention of CI-AKI.
Hemofiltration or Hemodialysis
Hemodialysis can effectively remove contrast media.
However, even when administered within 1 h after contrast administration, hemodialysis was not effective in
reducing the incidence of CI-AKI.
The CIN Consensus Working Panel agreed that in
patients with severe renal impairment (GFR <20 mL/min), hemodialysis should be planned in case CI-AKI
occurs [5].
Hemofiltration was beneficial in preventing CI-AKI in
two studies, when administered 4–6 h before the procedure, and continued for 18–24 h afterwards. These studies
were flawed as the primary endpoint CI-AKI, defined by
a 25% increase of serum creatinine, is affected by
hemofiltration. Secondary endpoints, such as in-hospital
and 1-year mortality, were also positively affected by the
intervention. Further data are therefore needed.
Evaluation/Assessment
It is important to identify risk factors for CI-AKI in
patients who will undergo a contrast procedure. After
the procedure it is recommended to monitor serum creatinine concentration for 3–5 days in order to diagnose
occurrence of CI-AKI.
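To show how the serum creatinine thresholds quoted in the Definition section translate into a simple monitoring rule, a minimal sketch follows; the function name and the choice to ignore the exact observation window are illustrative assumptions.

```python
# Illustrative check of the common CI-AKI criterion quoted above: a rise in serum
# creatinine of >=0.5 mg/dL or >=25% after contrast. The AKIN-based variant uses
# >=0.3 mg/dL or >=50% within 48 h; pass those thresholds in explicitly if desired.

def meets_ci_aki(baseline_mg_dl: float, followup_mg_dl: float,
                 abs_thresh: float = 0.5, rel_thresh: float = 0.25) -> bool:
    """Return True if the creatinine rise meets the absolute or relative threshold."""
    rise = followup_mg_dl - baseline_mg_dl
    return rise >= abs_thresh or rise >= rel_thresh * baseline_mg_dl

print(meets_ci_aki(1.0, 1.4))            # True: 0.4 mg/dL rise is a 40% increase
print(meets_ci_aki(1.0, 1.4, 0.3, 0.5))  # True by the 0.3 mg/dL absolute criterion
```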
After-care
The therapy for CI-AKI is similar to that for other forms of AKI in ICU patients and consists of optimization of volume status and withdrawal of nephrotoxic drugs. Further, one needs to monitor and treat for consequences of CI-AKI such as hyperkalemia and other electrolyte abnormalities, volume overload, and acidosis.
Prognosis
Patients who develop CI-AKI are at greater risk for in-hospital mortality and 1-year mortality [3]. Levy et al. found a 5.5-fold increased risk of hospital death, even after correction for other comorbidities [2]. The risk of death is greater in patients who need treatment with renal replacement therapy and in patients with chronic kidney disease before the procedure.
The risk of developing a need for dialysis is currently estimated at <1% in low-risk patients with CI-AKI. Data in ICU patients are scarce; one study found that 3.5% of 486 ICU patients needed treatment with dialysis after contrast administration. Another study in 139 ICU patients found a nonsignificantly higher incidence of dialysis in patients with CI-AKI (19% versus 6%, p = 0.091).
CI-AKI is also associated with other adverse cardiovascular outcomes such as myocardial infarction, bypass surgery, pulmonary edema, cardiogenic shock, bleeding requiring transfusion, and vascular complications. Also, length of hospital stay is longer in patients who have CI-AKI.
References
1. Hoste EA, De Waele JJ, Gevaert SA, Uchino S, Kellum JA (2010) Sodium bicarbonate for prevention of contrast-induced acute kidney injury: a systematic review and meta-analysis. Nephrol Dial Transplant 25:747–758
2. Levy EM, Viscoli CM, Horwitz RI (1996) The effect of acute renal failure on mortality. A cohort analysis. JAMA 275:1489–1494
3. McCullough PA, Adam A, Becker CR et al (2006) Epidemiology and prognostic implications of contrast-induced nephropathy. Am J Cardiol 98:5–13
4. Mehta RL, Kellum JA, Shah SV et al (2007) Acute kidney injury network: report of an initiative to improve outcomes in acute kidney injury. Crit Care 11:R31
5. Stacul F, Adam A, Becker CR et al (2006) Strategies to reduce the risk of contrast-induced nephropathy. Am J Cardiol 98:59–77
Contrast-Associated Acute Kidney Injury
▶ Contrast Nephropathy
Contrast-Induced Nephropathy
▶ Contrast Nephropathy
Conus Medullaris Syndrome
SCOTT E. BELL1, KATHRYN M. BEAUCHAMP2
1 Department of Neurosurgery, School of Medicine, University of Colorado Health Sciences Center, Denver, CO, USA
2 Department of Neurosurgery, Denver Health Medical Center, University of Colorado School of Medicine, Denver, CO, USA
Definition
Conus medullaris syndrome (CMS) arises from
a spectrum of clinicopathologic entities representing dysfunction of the lowest level of the spinal cord, termed the
conus medullaris, which consists of the sacral segments.
There is a subset of spinal cord injuries referred to as spinal
cord injury syndromes, to which conus medullaris
syndrome belongs, that are grouped by their respective
symptomatology, including central cord syndrome,
Brown-Sequard syndrome, anterior cord syndrome, posterior cord syndrome, and cauda equina syndrome. While
CMS is classically associated with pathophysiologic disruption isolated to the conus medullaris, it may also be
associated with a widespread spinal cord process that
includes the conus medullaris, which leads to the generalized syndromic symptoms. By nature of its anatomy, this
is an illness characterized by both upper motor and lower
motor neuron signs and symptoms that manifest in the
perineal region and lower extremities.
The spinal cord ends at the level of the last thoracic
to second lumbar vertebrae in a normal adult, with the
remainder of the spinal canal being occupied by the cauda
equina. This corresponds to the level of the thoracolumbar
junction. It is an important concept to recall that the
vertebral column level deviates from the spinal cord level
starting in the cervical spine. A depiction of this relationship is seen in Fig. 1. In general, the spinal cord level is
considered to reside roughly one to two levels above its
corresponding vertebral level (at which the nerve root
exits) for most of the cervical and upper thoracic spinal
cord, three to four levels above for the lower thoracic and
lumbar spinal cord, and five or more levels above for the
sacral spinal cord. With this relationship in mind, it is to
say that a conus medullaris lesion occurs at approximately the L1 vertebral level but affects the lower sacral segments of the spinal cord.
Conus Medullaris Syndrome. Figure 1 Relationship of spinal cord and nerve roots to vertebral level (Adapted from Drake et al. 2008)
Spinal cord injury occurs at a reported annual incidence of 40 per million population, with 11,000 new cases each year in the United States. The epidemiology of nontraumatic causes of spinal cord disease is more difficult to establish owing to their rarity and the lack of consensus and consistency in reporting. Conus medullaris syndrome as a whole is quite a rare process, with a diverse array of etiologies (Table 1). Definitive epidemiologic information about CMS is sparse. In a retrospective series of 839 spinal cord injury (SCI) rehabilitation admissions from 1992 to 2004 at an urban tertiary care center, 1.7% had CMS [1]. A European study reported an average annual incidence of conus medullaris syndrome of 1.5 per million population and a prevalence of 4.5 per 100,000 population, from etiologies of all types, over the study period 1996–2004 [2].
Conus Medullaris Syndrome. Table 1 Reported etiologies of conus medullaris syndrome
Inflammatory: Transverse myelitis; Longitudinal myelitis; Neuromyelitis optica; Lupus erythematosus; Parainfectious myelitis
Tumor: Ependymoma; Astrocytoma; GBM; Ganglioglioma; Meningioma; PNET; Teratoma; Hemangioblastoma; Metastases; Chordoma; Peripheral nerve sheath tumor
Infection: Staphylococcus; Tuberculosis; Schistosomiasis; Cysticercosis
Non-tumor: Sarcoidosis; Cavernoma; AVF/AVM; Amyloid angiopathy; Ventriculus terminalis cyst; Tethered cord; Infarct; Dermoid cyst; Epidermoid cyst
Trauma: HNP; Burst fracture; Fracture dislocation; Spinal stenosis
GBM glioblastoma multiforme, PNET primitive neuroectodermal tumor, AVF arteriovenous fistula, AVM arteriovenous malformation, HNP herniated nucleus pulposus
Etiology
The most common causes of CMS are reported to be compression from a herniated intervertebral disc and vertebral fracture at the thoracolumbar junction [2]. The mechanisms underlying these etiologies are multimodal. The acute or primary mechanism involves ischemia and direct injury to neuropil and neuronal cells at that location by compression, traction, contusion, and/or laceration. The secondary mechanism involves a complex cascade of chemical signals and inflammatory mediators, ion conduction and matrix derangements, cellular respiratory insults, and cytotoxic neurotransmitters that results in the propagation of irreversible injury. Other etiologies of conus medullaris syndrome include any lesion that disrupts the grey and/or white matter of the spinal cord at that level. Such lesions may include infiltrative, compressive, demyelinating, ischemic, or inflammatory processes produced by tumors, trauma, infections, or autoimmune and metabolic diseases. Varying combinations of primary and secondary mechanisms of spinal cord injury are responsible for the spectrum of conus medullaris syndrome seen in all etiologies of this disease.
Tumors of the spine cause damage to the conus medullaris by compressive and infiltrative mechanisms. The most common intramedullary tumor of the conus medullaris is ependymoma [3]. Ependymomas develop from the ependymal cells lining the filum terminale and, less often, the ependymal cells of the ventriculus terminalis. This structure is an ependymal-lined termination of the central canal, residing at the transition from conus medullaris to filum terminale. Less frequently encountered intramedullary tumors at the conus include low-grade astrocytomas and, rarely, glioblastoma multiforme. This latter form has been shown to occur at the conus either as a primary occurrence or as part of holocord disease. The most common extramedullary tumors at the conus medullaris are peripheral nerve sheath tumors,
meningioma, and metastases. In contrast to the brain,
intramedullary metastases occur much less commonly,
likely owing to the difference in blood flow between the
brain and spine [3].
Epidermoid and dermoid cysts may be congenital or
acquired, and can occur at the conus medullaris. They
arise from retained integument within the spinal canal,
with or without a sinus tract to the surface. These lesions
are due to one of two mechanisms [3]. They may be
associated with developmental malformative rests, as
well as being acquired by lumbar puncture or after surgery
to close myelomeningocele early in life. They can be the
source of recurrent infections, and may expand to compress the conus, or cause local vascular derangement producing CMS. Teratomas are congenital, and likewise arise
from rests of misplaced tissue. There is debate whether
these arise from a migratory problem during development,
or from a dysembryogenic-type mechanism. At the conus,
they are frequently associated with dysembryogenic defects
such as split cord and myelomeningocele.
Inflammatory diseases are rarely associated with
conus medullaris syndrome, precluding analysis as
a series. However, reports of inflammatory demyelinating
diseases that are either isolated to the conus or involving
the conus in holocord-type fashion are reported as case
reports in the literature. The most common entities showing CMS symptoms include transverse myelitis, NMO,
and longitudinal myelitis. Their occurrences have been
described in cases where the mechanism is likely autoimmune response after systemic infection or vaccine [4].
Likewise, other systemic inflammatory diseases, such as
lupus erythematosus, have been identified in cases where
initial presentation of the disease was by way of CMS [5].
Thus far, due to the rarity of these entities, no known
specific pathophysiologic process has been described for
these causes of CMS.
While tethered cord syndrome is considered as
a distinct entity, it may be considered within the spectrum
of CMS. This process exerts its deleterious effect on the
medullary conus by placing tension on the spinal vessels,
and cord itself. This ultimately leads to ischemia by
a variety of pathophysiologic mechanisms. One such
mechanism of dysfunction as a result of tethered cord
includes metabolic derangements leading to increased
reduction states of certain oxidase systems in the mitochondria of nerve cells, which appears to be related
to ischemia in this area. Mild-to-moderate redox derangements have been reversed with surgical untethering; however, severe cases are more refractory.
There have also been reports of conus medullaris syndrome after spinal meningitis. Other infectious processes that have been reported to affect the conus medullaris include epidural or intramedullary abscesses from staphylococcus and tuberculosis, as well as schistosomiasis and neurocysticercosis. It is generally accepted that
these infections seed via hematogenous route through the
valveless system of vascular plexuses around the
thoracolumbar junction, or by local phlegmon formation.
Holocord processes may present with conus medullaris
syndrome signs and symptoms, in conjunction with other
neurologic deficits related to the associated spinal cord pathophysiology.
Conus Medullaris Syndrome. Table 2 Conus medullaris syndrome vs cauda equina syndrome
Presentation. Conus medullaris syndrome: sudden (inflammatory lesions) or insidious (tumors); bilateral. Cauda equina syndrome: acute (trauma) or gradual (stenosis); may be unilateral.
Reflexes. Conus medullaris syndrome: hyperreflexia; knee jerk preserved, ankle jerk affected. Cauda equina syndrome: hyporeflexia; knee jerk and ankle jerk both affected.
Radicular pain. Conus medullaris syndrome: less severe. Cauda equina syndrome: more severe.
Low back pain. Conus medullaris syndrome: local low back pain only; rarely radiation to the perineum. Cauda equina syndrome: low back pain with dermatomal radiation.
Sensory symptoms/signs. Conus medullaris syndrome: perianal localization of sensory disturbance; symmetric and bilateral; sensory dissociation present. Cauda equina syndrome: stereotypic "saddle anesthesia"; asymmetric and unilateral disturbance possible; no sensory dissociation; dermatomal sensory disturbances with paresthesias possible.
Motor strength. Conus medullaris syndrome: usually symmetric; spastic paraparesis, less pronounced; fasciculations possible. Cauda equina syndrome: asymmetric and unilateral motor weakness possible; areflexic paraparesis; atrophy common.
Impotence. Conus medullaris syndrome: frequent. Cauda equina syndrome: less frequent; erectile dysfunction including inability to maintain erection, inability to ejaculate.
Sphincter dysfunction. Conus medullaris syndrome: urinary retention and atonic anal sphincter cause overflow incontinence; presents early. Cauda equina syndrome: urinary retention; presents late.
Source: Adapted from Dawodu et al. (2009)
The term "holocord" is used to define diffuse involvement of multiple, or all, regions of the spinal cord in
a disease process. This has been seen in as diverse an array of
pathologies as there are focal disruptions of the conus itself.
Some of the more common holocord processes include
infiltrative tumors with widespread dissemination; syringomyelia from compressive pathologies; and neuromyelitis
optica, which shows “longitudinal myelitis” type imaging
and clinical findings.
Another etiology worth mention is that of ischemic
injury or infarction of the conus medullaris. This occurs
through a variety of mechanisms, and is thought to represent approximately 1% of stroke cases. Embolism has
been described from sickle cell anemia, epidural steroid
injection, antiphospholipid antibodies, and abdominal
surgical procedures. Other mechanisms include vascular
malformations producing a blood flow “steal” phenomenon whereby blood flow bypasses the arteriole and capillary level by arteriovenous shunting.
Clinical Presentation
The symptoms of conus medullaris syndrome may present
acutely or insidiously, depending upon the etiology. These
symptoms will show mixed upper motor and lower motor
neuron signs of the perineum and distal lower extremities,
with an emphasis on UMN. Lower motor neuron deficits
are due to the presence of lumbar nerve roots present
within the thecal sac prior to exit at their respective vertebral level. Due to the anatomic relationship between the
conus medullaris and the cauda equina, CMS may easily be confused with cauda equina syndrome by the unwary observer. Care must be taken to distinguish these disease processes during the evaluation of bowel, bladder, and lower extremity dysfunction. Both syndromes produce weakness and sensory dysfunction of the saddle region as well as of variable parts of the lower extremities. Some distinguishing characteristics are outlined in Table 2. Local back pain, if present, is typically
an early symptom, followed by bowel and bladder retention. The pain will be more aching in nature, rather than
the sharp, sudden pain associated with cauda equina syndrome. Motor dysfunction is typically a late sign in conus
medullaris syndrome. A common sign of severe spinal
cord dysfunction includes diminished or absent
bulbocavernosus and anal sphincter reflexes. This is likewise represented in conus medullaris syndrome.
Once distinguished from the differentials, attention
can be turned to narrowing the list of possible etiologies
of the problem. The acuity of onset and history lends to
the distinction between surgical lesions and medical
lesions. Tumors, vascular malformations, and other surgically remediable lesions tend to present with a more
insidious onset. Exceptions to this are those acute
processes, such as injury, visibly expansile lesions, hemorrhages, etc., which may require immediate decompression
to save neural tissue. Otherwise, acute onset symptomatology tends to occur in those nonsurgical processes,
such as autoimmune and inflammatory etiologies, which
are more amenable to medical treatments.
Imaging studies are an important adjunct for any diagnostic workup. Of particular importance will be an MRI
with and without gadolinium of the entire spine to examine
the neural axis for findings. Cystic lesions will be isointense
with CSF on both T1W and T2W images. Tumors may show
rim-enhancement or varying degrees of homogeneous or
heterogeneous lesion enhancement, depending upon the
type of tumor. Inflammatory or demyelinating processes
may show rim-enhancement as well. The distinguishing
characteristic is that tumors typically have an expansile
quality identifiable at the conus, while inflammatory
processes usually do not. However, this must be taken in
the context of the clinical picture. In the absence of acute
symptoms and an expansile lesion, tumor is more likely;
whereas acute symptoms and an expansile mass may be
a harbinger for hemorrhage, for example. Computed
tomography of the spine for bony involvement is important
for surgical planning and prognosticating, but would not
supplant the use of MRI. The diagnostic value of contrast-enhanced MRI outweighs that of CT and limits the usefulness of the latter.
Electrophysiologic studies, such as electromyography
and nerve conduction velocities, can be useful in
distinguishing central from peripheral nervous system
processes of duration longer than 2–4 weeks. They are
useless in providing information on the nature of symptoms of shorter duration due to the pathophysiology
underlying acute denervation, demyelination, and neuromuscular conduction defects.
Treatment
Treatment for conus medullaris syndrome varies based on
etiology. As previously mentioned, discrete lesions within
the conus identifiable on imaging should be approached
with microsurgical technique for biopsy, debulking, and
rarely radical resection if curable etiology is known. If
traumatic injuries are present with conus medullaris
compression, decompression and stabilization at the earliest possible juncture in the patient's acute illness have been argued to be important for optimal convalescence.
When to operate under these circumstances depends upon
many factors, including hemodynamic instability and associated injuries of a more critical nature. The debate of when
spinal cord injuries should be operated has led to much
discord in the surgical literature of the traumatized spine.
Whether the cause is traumatic or nontraumatic, the rationale for treading lightly in this region of the spinal cord lies in how functionally unforgiving the location is: once definitively injured, the neuronal elements responsible for bowel, bladder, and sexual function rarely recover.
This is directly opposed to those elements at other locations in the spinal cord responsible for somatic sensory
and motor function, which carry a good prognosis with
rehabilitation, after incomplete injury. However, there are
circumstances, as in the case of ependymoma, where gross
total resection will likely lead to cure. In these cases, it is
imperative to use meticulous surgical technique in order
to minimize the potential for permanent disability.
If it is determined that a nonsurgical lesion is present,
the medical treatment depends upon the nature of the
lesion. If infection is suspected, antibiotic therapy is initiated only after an organism is identified, either through
blood cultures or image-guided aspiration. Then dual-agent IV antibiotics must be initiated for long-term
therapy. This treatment for intramedullary and epidural
infections can be very successful. In those refractory cases,
or cases of acute worsening, surgical debridement may be
necessary in addition. If an inflammatory process is
suspected, high-dose steroid therapy is the gold standard.
Multiple cases of inflammatory conus medullaris syndrome have been shown to be quickly responsive to
these treatments, sometimes leading to complete remission of symptoms. More frequently, partial recovery
occurs, with gradual improvement to only some disability
in weeks to months. In some cases, it is useful or necessary
to synergize with other immune modulators, such as
IV-IG, cyclophosphamide, and azathioprine.
Prognosis
Often the prognosis for conus medullaris syndrome is
more related to the etiology than to the syndrome itself.
If the underlying cause is a malignant process, this is far
and away the more decisive factor in prognostication than
the presence of CMS. However, if CMS is due to a lesion
affecting the conus in isolation, then prognosis is related
to the degree of neuronal tissue damage. Frequently, with
today’s techniques of intensive rehabilitation and targeted
medical therapy, lesions isolated to the conus medullaris
causing CMS will improve to acceptable functional levels,
if not full recovery. In rare cases, isolated CMS leads to
permanent paraplegia and pelvic sphincter dysfunction.
References
1. McKinley W, Santos K, Meade M, Brooke K (2007) Incidence and outcomes of spinal cord injury clinical syndromes. J Spinal Cord Med 30:215–224
2. Podnar S (2007) Epidemiology of cauda equina and conus medullaris lesions. Muscle Nerve 35:529–531
3. Ebner FH, Roser F, Acioly MA, Schoeber W, Tatagiba M (2009) Intramedullary lesions of the conus medullaris: differential diagnosis and surgical management. Neurosurg Rev 32:287–301
4. Pradhan S, Gupta RK, Kapoor R, Shashank S, Kathuria MK (1998) Parainfectious conus myelitis. J Neurol Sci 161:156–162
5. Katramados AM, Rabah R, Adams MD, Huq AH, Mitsias PD (2008) Longitudinal myelitis, aseptic meningitis, and conus medullaris infarction as presenting manifestations of pediatric systemic lupus erythematosus. Lupus 17:332–336
Convection
The physical mechanism by which a solute is dragged
across a semipermeable membrane in association with
ultrafiltered plasma water. This water and solute shift is
secondary to a pressure gradient across the membrane.
Convective Clearance
ZHONGPING HUANG1, WILLIAM R. CLARK2,3, CLAUDIO RONCO4
1 Department of Mechanical Engineering, Widener University, Chester, PA, USA
2 Gambro Renal Products, Lakewood, CO, USA
3 Nephrology Division, Indiana University School of Medicine, Indianapolis, IN, USA
4 Department of Nephrology, St. Bortolo Hospital, Vicenza, Italy
Synonyms
Solvent drag; Ultrafiltration
Definition
The mechanism of ▶ convection may be described as solvent drag: if a pressure gradient exists between the two sides of a semipermeable (porous) membrane, and the molecular dimensions of a solute are such that passage through the membrane is possible, the solute is swept ("dragged") across the membrane in association with ultrafiltered plasma water.
Pre-existing Condition
Although conventional hemodialysis (HD) remains the most commonly used treatment modality for the management of patients with acute kidney injury (AKI), continuous renal replacement therapy (CRRT) is used increasingly in this setting. The removal of low-molecular-weight (MW) nitrogenous waste products is very effective with HD. However, ▶ clearance of larger molecules is limited owing to HD's primarily diffusive nature. In clinical practice, HD therapy prescription is driven largely by factors influencing urea clearance. On the other hand, convective modalities, namely ▶ hemofiltration and hemodiafiltration, are capable of removing solutes over a wider MW range than HD. In AKI, these therapies are typically provided on an extended basis as continuous venovenous hemofiltration (CVVH) and continuous venovenous hemodiafiltration (CVVHDF), both part of the CRRT spectrum.
In a study employing CVVH, Ronco and colleagues [1] reported a direct relationship between daily ultrafiltrate volume and survival in critically ill AKI patients. A normalized ultrafiltration rate of 35 mL/kg/h or more (on average) was associated with a mortality of approximately 45%, while a more standard ultrafiltration rate (mean, 20 mL/kg/h) was associated with a mortality of approximately 65%. Although subsequent studies in which convection has contributed relatively less to total solute clearance have produced mixed results [2], the Ronco study remains the "gold standard" with respect to convective solute removal in AKI.
This chapter provides a review of the determinants of convective solute removal. This is followed by an overview of the manner in which CVVH and CVVHDF are applied clinically.
Application
The determinants of convective clearance differ significantly from those of diffusion, which is primarily
a concentration gradient-driven process. On the other
hand, convective solute removal is determined primarily
by the sieving properties of the filter membrane used and
the ultrafiltration rate. The mechanism by which convection occurs is termed solvent drag. If the molecular
dimensions of a solute are such that transmembrane passage to some extent occurs, the solute is swept (“dragged”)
across the membrane in association with ultrafiltered
plasma water. Thus, the rate of convective solute removal
can be modified either by changes in the rate of solvent
(plasma water) flow or by changes in the mean effective
pore size of the membrane. As discussed below, the blood
concentration of a particular solute is an important determinant of its convective removal rate.
Both the water and solute permeability of an ultrafiltration membrane are influenced by the phenomena
of secondary membrane formation and concentration
polarization. The exposure of an artificial surface to
plasma results in the nonspecific, instantaneous adsorption of a layer of proteins, the composition of which
generally reflects that of the plasma itself. This layer of
proteins, by serving as an additional resistance to mass
transfer, effectively reduces both the water and solute
permeability of an extracorporeal membrane. Evidence
of this is found in comparisons of solute sieving coefficients determined before and after exposure of
a membrane to plasma or other protein-containing
solution.
Although concentration polarization primarily pertains to plasma proteins, it is distinct from secondary
membrane formation. Concentration polarization specifically relates to ultrafiltration-based processes and applies
to the kinetic behavior of an individual solute. Accumulation of a solute that is predominantly or completely
rejected by a membrane used for ultrafiltration of plasma
occurs at the blood compartment membrane surface.
This surface accumulation causes the solute concentration
just adjacent to the membrane surface (i.e., the
submembranous concentration) to be higher than the
bulk (plasma) concentration. By definition, concentration
polarization is applicable in clinical situations in which
relatively high ultrafiltration rates are used. Conditions
that promote the process are high ultrafiltration rate
(high rate of convective transport), low blood flow rate
(low shear rate or membrane “sweeping” effect), and
the use of ▶ post-dilution (rather than ▶ pre-dilution)
replacement fluids (increased local solute concentrations).
Post-dilution CRRT
The location of replacement fluid delivery in the extracorporeal circuit during CRRT has a significant impact on
solute removal and therapy requirements. (For the purpose of the rest of this chapter, CRRT refers either to
CVVH or CVVHDF.) Replacement fluid can be delivered
to the arterial blood line prior to the hemofilter (pre-dilution mode) or to the venous line after the hemofilter
(post-dilution mode). In post-dilution CRRT, the relationship between solute clearance and ultrafiltration rate
is relatively straightforward. In this situation, solute clearance is determined primarily by and related directly to the
solute’s sieving coefficient and the ultrafiltration rate.
(Sieving coefficient is defined as the ratio of the solute
concentration in the filtrate to the simultaneous plasma
concentration.) For a given solute, the extent to which it
partitions from the plasma water into the red blood cell
mass and the rate at which it is transported across red
blood cell membranes also influences clearance. For example, the volume of distribution of both urea and creatinine
includes the red blood cell water. However, while urea
movement across red blood cell membranes is very fast,
the movement of creatinine is significantly less rapid.
Furthermore, red blood cell membranes are completely
impermeable to many uremic toxins. A prominent example of this is the low MW protein toxin class, for which the
volume of distribution is the extracellular fluid. These
observations lead to the obvious conclusion that hematocrit also influences solute clearance in CRRT. Finally,
through its effect on secondary membrane formation
and concentration polarization (see above), plasma total
protein concentration is also a determinant of solute
clearance in CRRT.
For a given volume of replacement fluid over the entire
MW spectrum of uremic toxins, post-dilution CRRT provides higher solute clearance than does pre-dilution
CRRT. As discussed below, the relative inefficiency of the
latter mode is related to the dilution-related reduction in
solute concentrations, which decreases the driving force
for convective mass transfer. Despite its superior efficiency
with respect to replacement fluid utilization, post-dilution
CRRT is limited inherently by the attainable blood flow
rate. More specifically, the ratio of the ultrafiltration rate
to the plasma flow rate delivered to the filter, termed the
filtration fraction, is the limiting factor. In general,
a maximal filtration fraction of approximately 25%
usually guides prescription in post-dilution CRRT. At
filtration fractions beyond these values, concentration
polarization and secondary membrane effects become
prominent and may impair hemofilter performance.
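A minimal sketch of that filtration-fraction check follows; estimating plasma flow as blood flow multiplied by (1 minus hematocrit) is an added assumption for illustration, not a statement from this entry.

```python
# Illustrative filtration-fraction calculation for post-dilution CRRT, against the
# ~25% ceiling discussed above. Plasma flow is approximated here as
# blood flow x (1 - hematocrit), an assumption made for this sketch.

def filtration_fraction(qf_ml_min: float, qb_ml_min: float, hematocrit: float) -> float:
    """Ultrafiltration rate divided by the plasma flow delivered to the filter."""
    plasma_flow = qb_ml_min * (1.0 - hematocrit)
    return qf_ml_min / plasma_flow

ff = filtration_fraction(qf_ml_min=35.0, qb_ml_min=180.0, hematocrit=0.30)
print(f"filtration fraction = {ff:.2f}")  # ~0.28, above the ~0.25 guideline
```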
The blood flow limitations imposed by the use of
temporary catheters for CRRT accentuate the filtration
fraction-related constraints on maximally attainable ultrafiltration rate in the post-dilution mode. Therefore, the
ultrafiltrate volumes shown by Ronco and colleagues to
improve survival can usually be achieved only in the pre-dilution mode. As discussed below, efficient utilization of
replacement fluid in acute pre-dilution CRRT is an important consideration.
Pre-dilution HF
From a mass transfer perspective, the use of pre-dilution
has several potential advantages over post-dilution. First,
both hematocrit and blood total protein concentration are
reduced significantly prior to the entry of blood into the
hemofilter. This effective reduction in the red cell and
protein content of the blood attenuates the secondary
membrane and concentration polarization phenomena
described above, resulting in improved mass transfer.
Pre-dilution also favorably impacts mass transfer due
to augmented flow in the blood compartment, because
Convective Clearance
pre-filter mixing of blood and replacement fluid occurs.
This achieves a relatively high membrane shear rate,
which also reduces solute-membrane interactions. Finally,
pre-dilution may also enhance mass transfer for some
compounds by creating concentration gradients that
induce solute movement out of red blood cells.
The above mass transfer benefits must be weighed
against the predictable dilution-induced reduction in
plasma solute concentrations, one of the driving forces
for convective solute removal. The extent to which this
reduction occurs is determined mainly by the ratio of the
replacement fluid rate to the blood flow rate. Indeed,
a frequently overlooked consideration is the important
influence of blood flow rate on solute clearance. For
small solutes, which are distributed in the blood water
(BW) component within the blood passing through the
hemofilter, the operative clearance equation in pre-dilution CRRT is:
K = QF × S × [QBW / (QBW + QS)]    (1)
where K is solute clearance, QBW is blood water flow rate,
QF is ultrafiltration rate, S is sieving coefficient, and QS is
the substitution (replacement) fluid rate. At a given QF
value, pre-dilution CVVH is always less efficient than
post-dilution CVVH with respect to fluid utilization, as
discussed above. A sieving coefficient of 1.0 implies equivalence of blood water and ultrafiltrate concentrations,
resulting in small solute clearances that are effectively
equal to QF in post-dilution CVVH. As Eq. 1 indicates,
the larger QS is relative to QBW, the smaller is the entire
fraction represented by the third term on the right-hand
side. In turn, the smaller is this term, the greater is the loss
of efficiency (relative to post-dilution) due to dilution.
Since employing a relatively low QS is not an option in
high-dose CVVH due to the direct relationship that exists
between QF and QS, attention needs to be focused on
achieving blood flow rates that are significantly higher
than what have been used traditionally in CRRT (i.e.,
150 mL/min or less). In fact, widespread attainment of
doses consistent with the intermediate and high-dose
arms in the study performed by Ronco and colleagues
(35–45 mL/h/kg) cannot occur unless blood flow rates of
approximately 250 mL/min or more become routine in
pre-dilution CVVH.
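To make Eq. 1 concrete, a minimal sketch is given below; the example flow rates are illustrative and the blood water flow QBW is taken as given rather than derived.

```python
# Pre-dilution clearance per Eq. 1, K = QF * S * [QBW / (QBW + QS)], compared with
# the post-dilution case, where clearance for a freely sieved solute is simply QF * S.

def clearance_predilution(qf, s, qbw, qs):
    """Solute clearance (mL/min) in pre-dilution CRRT (Eq. 1)."""
    return qf * s * qbw / (qbw + qs)

def clearance_postdilution(qf, s):
    """Solute clearance (mL/min) in post-dilution CRRT."""
    return qf * s

# Example: QF = QS = 40 mL/min, S = 1.0 (small solute), QBW = 160 mL/min
print(clearance_predilution(40, 1.0, 160, 40))  # 32.0 mL/min
print(clearance_postdilution(40, 1.0))          # 40.0 mL/min
```

The difference between the two results in this example is the dilution penalty described above.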
Evidence supporting the critical importance of QB in
pre-dilution CVVH appears in Fig. 1 [3]. For this single-pool modeling analysis, a dose equivalent to 35 mL/h/kg
in post-dilution is targeted. In addition, a filter operation
of 20 h per day is assumed to account for differences in
prescribed versus delivered therapy time. For patients of
varying body weight, the substitution fluid requirements
to attain the above dose are shown as a function of QB. For
low blood flow rates (≤150 mL/min), these data suggest
that substitution fluid rates required to achieve this dose
are impractically high in the majority of patients (>70 kg)
due to a “chasing the tail” phenomenon.
Convective Clearance. Figure 1 Substitution fluid requirements as a function of blood flow rate in pre-dilution CVVH, for patient sizes of 40–100 kg (Reprinted from [3]. With permission from Elsevier)
To achieve the dose target, a high ultrafiltration rate is required. However, the concomitant requirement of a similarly high
substitution fluid rate has a relatively substantial dilutive
effect on solute concentrations at low QB. On the other
hand, for QB values greater than 250 mL/min, the dilutive
effect of the substitution fluid is attenuated significantly
and with the resultant improvement in fluid efficiency, the
target dose can be delivered practically to a broad range of
patients.
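The "chasing the tail" behavior behind Figure 1 can be sketched roughly as below. This is not the authors' model: it assumes QS = QF, a sieving coefficient of 1, a 20 h/day filter run, and blood water flow approximated as a fixed fraction of blood flow, all of which are simplifying assumptions for illustration.

```python
# Rough sketch: substitution fluid rate needed to deliver a target dose in
# pre-dilution CVVH, solving Eq. 1 with QS = QF and S = 1. The prescribed clearance
# is scaled by 24/20 to allow for ~20 h/day of actual filter operation.

def required_qs_ml_min(target_dose_ml_kg_h, weight_kg, qb_ml_min,
                       blood_water_fraction=0.8, hours_per_day=20.0):
    qbw = qb_ml_min * blood_water_fraction                   # assumed approximation
    k_needed = target_dose_ml_kg_h * weight_kg / 60.0 * (24.0 / hours_per_day)
    if k_needed >= qbw:
        return float("inf")                                  # unattainable at this blood flow
    # From K = QF * QBW / (QBW + QF) with QF = QS:
    return k_needed * qbw / (qbw - k_needed)

for qb in (150, 250, 350):                                   # blood flow, mL/min
    print(qb, round(required_qs_ml_min(35, 100, qb), 1))     # 100 kg patient, 35 mL/kg/h
```

Under these assumptions the required substitution fluid rate falls steeply as blood flow rises, which is the qualitative behavior shown in Figure 1.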
The operating principle of CVVHDF is that total
clearance can be augmented by combining diffusion and
convection. Due to the relatively low flow rates used for
these therapies, changes in solute concentrations within
the filter are also relatively small. This allows total solute
clearance to be estimated by simply adding the diffusive
and convective components. In other words, no interaction between the two mass transfer processes occurs.
Convective Clearance. Figure 2 Solute clearance (mL/min) for urea, creatinine, vancomycin, and inulin as a function of ultrafiltration rate (mL/min) in pre-dilution CVVH (Reprinted from [4]. With permission from Elsevier)
Practical Considerations
At least until recently, the ultrafiltration rate (QF) in
CVVH has typically been in the 1–2 L/h range. However,
in response to outcome data published by Ronco and
colleagues, prescription of significantly higher QF values
is occurring. In post-dilution CVVH, the mode employed
in the Ronco study, the relationship between solute clearance and QF is quite straightforward, as mentioned previously. For reasons also described above, the relationship
between clearance and QF may not be as predictable in
pre-dilution, relative to the case of post-dilution. Consequently, the claim that QF is a dose surrogate in pre-dilution CVVH needs to be demonstrated. To this end,
Huang and colleagues have investigated the effect of QF on
solute removal parameters in pre-dilution CVVH [4]. For
a blood flow rate of 200 mL/min, removal parameters
were measured at QF values of 20, 40, and 60 mL/min,
corresponding to 17, 34, and 51 mL/h/kg for a 70 kg
patient. These parameters were measured for solutes of
varying MW.
The relationship between solute clearance and QF for
urea, creatinine, vancomycin, and inulin appears in Fig. 2.
Overall, these data are consistent with a convective therapy
for two reasons. First, for each solute, the clearance-QF
relationship is linear, confirming a direct relationship
between these two parameters. Second, for a given QF
over the solute MW range investigated, clearance is not
strongly dependent on molecular weight, at least in comparison to hemodialysis. Specifically, very little difference
in clearance is observed between the two small solutes and
between the two middle molecule surrogates as a function
of QF. On the other hand, reflecting its diffusive basis, HD
is associated with much larger differences in clearance over
the same MW range. The authors concluded that, because
an orderly relationship exists between QF and solute clearance, QF is a reasonable dose surrogate in pre-dilution
CVVH, as has been suggested for post-dilution CVVH
and for CVVHDF. Overall, these data seem to validate
the use of effluent-based dosing, which has been employed
in two recent international trials evaluating the relationship between CRRT dose and outcome [5].
References
1. Ronco C, Bellomo R, Hommel P, Brendolan A, Dan M, Piccinni P, LaGreca G (2000) Effects of different doses in continuous veno-venous hemofiltration on outcomes in acute renal failure: a prospective, randomized trial. Lancet 355:26–30
2. Saudan P, Niederberger M, De Seigneux S et al (2006) Adding a dialysis dose to continuous hemofiltration increases survival in patients with acute renal failure. Kidney Int 70:1312–1317
3. Clark WR, Turk JE, Kraus MA, Gao D (2003) Dose determinants in continuous renal replacement therapy. Artif Organs 27:815–820
4. Huang ZP, Letteri JJ, Clark WR, Zhang W, Gao D, Ronco C (2007) Ultrafiltration rate as a dose surrogate in pre-dilution hemofiltration. Int J Artif Organs 30:124–132
5. Huang Z, Letteri JJ, Clark WR, Ronco C (2008) Operational characteristics of continuous renal replacement therapy modalities used for critically ill patients with acute kidney injury. Int J Artif Organs 31:525–534
Corlopam®
▶ Fenoldopam
Coronary Computerized
Tomographic Angiography
JUDD E. HOLLANDER1, HAROLD LITT2
1 Department of Emergency Medicine, University of Pennsylvania, Philadelphia, PA, USA
2 Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
Synonyms
Coronary CTA; CT coronary angiography
Definition
A computed tomography examination of the heart
acquired with ECG-synchronization during the arterial
phase of intravenous contrast enhancement designed to
visualize native coronary arteries and/or bypass grafts.
Pre-existing Condition
Coronary Artery Disease
Coronary CTA is primarily used to evaluate the presence
or absence of coronary artery disease. It has a high degree
of diagnostic accuracy when compared to cardiac catheterization as the criterion standard. In a meta-analysis of
2,515 patients from 41 studies, the subset imaged on a 64-slice scanner had a per-patient sensitivity of 98% with
a specificity of 92% for detection of significant coronary
artery disease [1]. Newer generation scanners have even
greater diagnostic performance.
Potential Acute Coronary Syndrome
Of the nearly eight million patients presenting annually to
US emergency departments for evaluation of chest pain,
80–85% are not ultimately found to have a cardiac cause
for their symptoms. However, given the prevalence and
clinical significance of coronary artery disease, excluding
a cardiac cause of chest pain remains a challenging clinical
problem and often mandates extensive testing. Although
clinical algorithms can successfully risk stratify patients,
they have not typically been considered useful in identifying the group of patients who can be discharged safely
from the emergency department without requiring an
inpatient evaluation.
It is well established that patients without coronary
artery disease are at very low risk for adverse cardiovascular events, even when they have symptoms that would
otherwise be consistent with a potential acute coronary
C
syndrome. Recent cardiac catheterization with normal or
minimally diseased vessels is known to be useful to “rule
out” an acute coronary syndrome in such patients. Coronary CTA, as a noninvasive surrogate for catheterization,
can be used to risk stratify patients with respect to coronary artery disease and subsequently ACS immediately
after onset of symptoms, thus avoiding hospitalization.
Application
Coronary CTA has several promising clinical applications
at the present time, including: (1) identifying patients who
present with a potential acute coronary syndrome (often
in the ED) who may safely be discharged; (2) to evaluate
patients for coronary artery disease, either as a first test or
after indeterminate or suspected false positive stress test,
avoiding unnecessary invasive cardiac catheterization; and
(3) to evaluate stent or bypass graft patency and location
in patients with symptoms after percutaneous coronary
intervention (PCI) or bypass graft surgery (CABG).
Coronary CTA to “Rule Out” Acute Coronary
Syndrome
Coronary CTA has high diagnostic accuracy (Fig. 1). Janne
d’Othee et al. [1] found a sensitivity of 98% and specificity
of 92% relative to cardiac catheterization using 64-slice
scanners. Based upon this high diagnostic accuracy, centers with experience in coronary CTA have developed
clinical pathways that allow for rapid disposition of
patients who present with potential acute coronary syndromes found not to have coronary artery disease. This
strategy is based upon the observation noted above that
patients without coronary disease at cardiac catheterization are considered to be at low risk for adverse cardiovascular events.
Coronary CTA performs at least as well as myocardial
perfusion imaging in identifying patients at low risk
for cardiovascular events. Observational studies of symptomatic patients presenting to the ED have found that
patients with normal coronary CTA results are at low
risk for adverse events over varying time periods of up to
one year.
Many small studies (35–103 subjects) have followed
patients up to 15 months and have uniformly found that
low- to intermediate-risk patients without coronary disease do well during this time period. One study of 568
patients in which coronary CTA was used for clinical
decision making demonstrated that patients discharged
from the ED following a negative study were at very low
risk of 30-day cardiovascular events [2]. In a group of 481
patients with a TIMI score of less than or equal to two
without a stenosis of 50% or more who were followed for
up to 1 year, there were no patients who had definite cardiovascular events [3].
Coronary Computerized Tomographic Angiography. Figure 1 Forty-three-year-old male who presented to the ED with chest pain and was found to have an 80% stenosis in the proximal LAD. (a) CCTA demonstrates that the lesion is caused by non-calcified plaque and (b) corresponding catheter angiography performed prior to stenting the lesion
A coronary CTA based strategy to evaluate low- to
intermediate-risk patients in the ED is cost effective.
Chang et al. [4] found that immediate coronary CTA was
more cost effective in the short term and was associated
with a shorter length of stay than observation unit management with coronary CTA, observation unit management with stress testing, and admission with hospitalist-directed care in a similar cohort of patients.
Short term benefits occur due to the reduced length of stay
and lower cost of coronary CTA relative to single photon
emission computed tomography (SPECT) imaging. In other studies, coronary CTA has also been associated with reduced utilization of coronary angiography and reduced revisit and readmission rates.
Clinical Utility to Diagnose or “Rule Out”
Coronary Artery Disease
Given the high diagnostic accuracy of coronary CTA compared to cardiac catheterization, several groups have evaluated whether coronary CTA can reduce equivocal test
results from stress nuclear imaging as well as the likelihood
of receiving an invasive diagnostic procedure like cardiac
catheterization.
Weustink et al. [5] compared the accuracy and clinical
utility of stress testing and coronary CTA for identifying
patients who require invasive coronary angiography (cardiac catheterization). They found that stress testing was
not as accurate as coronary CTA. In low-risk patients
(<20% pretest probability of disease), a negative stress
test or a negative coronary CTA confirmed no need for
invasive angiography. On the other hand, a positive stress
test only yielded a positive predictive value of 50%, meaning half the tests were false positive. In patients with an
intermediate (20–80%) pretest probability of disease,
a positive coronary CTA predicted need for invasive angiography (93% post-test probability of disease) and
a negative result confirmed lack of need for further testing
(<1% post test probability).
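The pretest-to-post-test shift described in that study follows directly from Bayes' rule; a minimal sketch using the sensitivity and specificity quoted above for 64-slice scanners is given below (the exact figures will vary with scanner generation and population).

```python
# Post-test probability of coronary artery disease after a positive or a negative
# coronary CTA, given pretest probability, sensitivity, and specificity (Bayes' rule).

def post_test_probability(pretest: float, sensitivity: float, specificity: float):
    """Return (probability after a positive test, probability after a negative test)."""
    tp = sensitivity * pretest
    fp = (1.0 - specificity) * (1.0 - pretest)
    fn = (1.0 - sensitivity) * pretest
    tn = specificity * (1.0 - pretest)
    return tp / (tp + fp), fn / (fn + tn)

pos, neg = post_test_probability(pretest=0.50, sensitivity=0.98, specificity=0.92)
print(f"after positive CTA: {pos:.2f}, after negative CTA: {neg:.3f}")  # ~0.92 and ~0.021
```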
Population-based data from Canada have shown that the rate of normal invasive coronary angiograms fell by 15% in relative terms (an absolute reduction of 5%) in an institution that implemented coronary CTA.
Thus, it appears that use of coronary CTA can reduce
confusion from false positive and false negative stress tests
and lead to more appropriate use of invasive coronary
angiography.
Evaluation of Symptomatic Patients after PCI
or CABG Surgery
In patients who have previously undergone revascularization, recurrent chest pain may be caused by a variety of
factors including progression of native vessel disease, stent
or bypass graft stenosis or occlusion, and sternotomy or
pericardiotomy complications (Fig. 2). For those patients
in whom the chest pain is not clearly anginal, coronary CT
may allow discrimination among these conditions. If
repeat sternotomy is contemplated, whether for repeat
CABG, valve replacement, or other reason, CT can
demonstrate the course and position of bypass grafts relative to the sternum, decreasing operative complications. Coronary CT can also identify the course of the internal mammary arteries and the location of target vessels for minimally invasive “keyhole” CABG surgery.
Coronary Computerized Tomographic Angiography. Figure 2 Seventy-two-year-old male with recurrent chest pain one year after PCI. (a) CCTA shows a patent stent in the circumflex artery but progression of disease in the LAD (b), with up to 70% stenosis caused by calcified and non-calcified plaque
Additional Uses
Selection for CT coronary arteriography may also include
patients with unexplained or atypical chest pain when an
aberrant origin of the coronary artery is considered possible; concerns such as pulmonary embolism or aortic
dissection; evaluation of an ischemic etiology for a newly
diagnosed cardiomyopathy and/or heart failure; preoperative or preprocedural evaluation of the coronary arteries,
cardiac structures, and thoracic anatomy; and evaluation
of cardiac and/or coronary artery anomalies.
Indications in patients who have previously undergone CABG and/or percutaneous coronary intervention
(PCI) include patients with new or recurrent symptoms of
chest pain to confirm graft/stent patency or detect graft/
stent stenoses or other complications; and for patients
who are scheduled for additional cardiac surgery (e.g.,
aortic valve replacement or bypass graft revision) when
preoperative definition of anatomic detail, including the
bypass grafts, is critical.
Difficulties with Interpretation
Anatomy Versus Function
Although coronary CTA is usually performed to evaluate the coronary arteries, the interpretation of the results or findings can sometimes be difficult. Coronary CTA provides information regarding anatomy. Anatomical abnormalities will not always be the explanation for clinical complaints. In some patients, a “noncritical” stenosis of 60% might be impeding flow and explain the symptoms, while in another patient a typically “critical” stenosis of 90% may not be causing the symptoms. In some situations, a functional test will be required to determine whether the anatomical abnormality explains the symptoms. A wall motion abnormality in the distribution of the stenosis may confirm that the stenosis is clinically relevant. Decreased myocardial perfusion, as evidenced by nuclear imaging, magnetic resonance imaging, or a contrast perfusion study, will similarly demonstrate the clinical relevance of the stenosis.
Myocardial Bridging
Myocardial bridging is a congenital abnormality in which a portion of a major coronary artery has an intramyocardial segment. Although usually not clinically significant, myocardial bridging has also been linked to clinical complications such as ischemia, spasm, dysrhythmias, and sudden death. In some series, coronary CTA has identified myocardial bridging in as many as 50% of patients, although dynamic compression occurs in only about a quarter of these patients. Whether or not myocardial bridging detected on coronary CTA is associated with adverse events in patients who are otherwise at low risk is not known.
Incidental Findings
Coronary CTA will often include images of the thorax and the upper abdomen. As a result, abnormalities of other
structures within these spaces can be observed. The rate of
incidental findings reported in the literature is near 40%.
Some incidental findings are clinically relevant and might
explain the symptoms leading to the test (e.g., pulmonary
embolism, aortic dissection, and malignancies). Others
are incidental findings that can lead to further diagnostic
evaluation, which may or may not have been otherwise
necessary, potentially increasing costs. The most cost-effective approach to incidental findings remains to be
determined.
References
1.
2.
3.
Artifacts and Study Quality
Janne d’Othee B, Siebert W, Cury R, Jadvar H, Dunn EJ, Hoffman U
(2008) A systematic review on diagnostic accuracy of CT based
detection of significant coronary artery disease. Eur J Radiol
65:449–461
Hollander JE, Chang AM, Shofer FS, McCusker CM, Baxt WG, Litt
HI (2009) Coronary computerized tomographic angiography for
rapid discharge of low risk chest patients with potential acute coronary syndromes. Ann Emerg Med 53:295–304
Hollander JE, Chang AM, Shofer FS, Collin MJ, Walsh KM,
McCusker CM, Baxt WG, Litt HI (2009) One year outcomes following coronary computerized tomographic angiography for evaluation
of emergency department patients with potential acute coronary
syndrome. Acad Emerg Med 16:693–698
Chang AM, Shofer FS, Weiner MG, Synnestvedt MB, Litt HI, Baxt
WG, Hollander JE (2008) Actual financial comparison of four strategies to evaluate patients with potential acute coronary syndromes.
Acad Emerg Med 15:649–655
Weustink AC, Mollet NR, Neefjes LA et al (2010) Diagnostic accuracy
and clinical utility of noninvasive testing for coronary artery disease.
Ann Intern Med 152:630–639
Factors that result in decreased study quality include
patient obesity, elevated heart rate, dysrhythmia, and coronary artery calcification. A heart rate of less than 70 beats/min is generally desirable for coronary CTA, although newer
technologies are loosening this restriction. Oral or intravenous beta blockers are most commonly used for heart
rate control when necessary. Sublingual nitroglycerin,
administered at the time of the scan, may improve coronary visualization through vasodilation. Coronary CTA
study quality may be compromised in patients with atrial
fibrillation, as well as those with frequent premature
ectopic beats. The presence of a large amount of coronary
calcium may obscure the adjacent coronary lumen, and
result in overestimation of the degree of stenosis, though
this issue is ameliorated by recent technological advances
in image acquisition, reconstruction, and post-processing.
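The factors above amount to a short pre-scan checklist. The following sketch, in Python, is illustrative only and is not part of the original entry; the 70 beats/min threshold and the listed factors come from the text, while the function and argument names are hypothetical.

# Illustrative sketch: flag factors that may degrade coronary CTA study
# quality, using the factors and heart-rate threshold quoted in the text.
def cta_quality_concerns(heart_rate_bpm, obese, irregular_rhythm, heavy_calcium):
    """Return a list of potential threats to coronary CTA image quality."""
    concerns = []
    if heart_rate_bpm >= 70:
        concerns.append("heart rate >= 70/min: consider oral or IV beta blockade")
    if obese:
        concerns.append("obesity: expect reduced image quality")
    if irregular_rhythm:
        concerns.append("atrial fibrillation or frequent ectopy: gating may fail")
    if heavy_calcium:
        concerns.append("heavy calcification: stenosis severity may be overestimated")
    return concerns

# Example: sinus rhythm at 78/min in an obese patient with calcified vessels
print(cta_quality_concerns(78, obese=True, irregular_rhythm=False, heavy_calcium=True))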
Radiation Risk
As with all x-ray imaging studies, there is radiation exposure. For coronary CTA the radiation exposure varies
widely between institutions and patients. It is dependent
upon patient-related factors such as the weight of the
patient (larger patients have more exposure) and the
rhythm (sinus rhythm has less exposure). With respect to institutional and scanner-related factors, shorter scan lengths, electrocardiographically controlled tube modulation, 100-kV tube voltage, sequential ECG-triggered scanning techniques, and experience in cardiac CTA are all
associated with lower radiation doses without associated
decreases in image quality. The long-term consequences of radiation exposure from medical imaging are not well known; risk estimates are based upon modeling rather than actual outcome data, but it seems prudent to limit the radiation exposure
when possible. Although dependent upon institutional
protocol, myocardial perfusion imaging often has more
radiation exposure than coronary CTA, and newer CT
techniques result in doses similar to or lower than cardiac
catheterization. CT may also decrease dose by reducing the
need for additional testing.
Coronary CTA
▶ Coronary Computerized Tomographic Angiography
Coronary Syndromes, Acute
JEREMY CORDINGLEY
Adult Intensive Care Unit, Royal Brompton Hospital, London, UK
Synonyms
Acute myocardial infarction (MI); Non-ST elevation
myocardial infarction (NSTEMI); ST elevation myocardial
infarction (STEMI); Unstable angina (UA)
Definition
Acute coronary syndromes (ACS) are a spectrum of illness
caused by reduction in blood flow to the myocardium
because of atherosclerotic disease of one or more coronary
arteries and defined by clinical presentation, ECG findings,
and biochemical markers of myocardial cell damage. An
ACS occurs when blood supply to an area of myocardium is acutely reduced by sudden narrowing or obstruction of the vascular lumen by acute intravascular thrombosis forming on an atherosclerotic plaque through damaged endothelium.
Blood supply becomes insufficient to meet metabolic
demands resulting in myocardial ischemia.
The main clinical ACS syndromes that occur are
unstable angina, non-ST elevation myocardial infarction
(NSTEMI), and ST elevation myocardial infarction
(STEMI) [1].
Unstable angina – Clinical presentation is of ischemic
chest pain that does not resolve rapidly with sublingual
glyceryl trinitrate and is not associated with ECG changes
of ST elevation or increased serum concentration of biochemical markers of myocardial necrosis.
NSTEMI – Clinical presentation of myocardial ischemia associated with increased serum concentration of
biochemical markers of myocardial necrosis but no ECG
ST elevation.
STEMI – Clinical presentation of myocardial ischemia with ECG changes that include at least one of the following: greater than 2 mm ST elevation in two adjacent chest leads, greater than 1 mm ST elevation in two adjacent limb leads, or new bundle branch block, together with an increased serum concentration of biochemical markers of myocardial necrosis.
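To make the working definitions above concrete, the short Python sketch below classifies an ischemic presentation using the ECG thresholds and the marker criterion quoted in the text. It is a deliberate simplification for illustration only, and the function and argument names are hypothetical rather than taken from any guideline.

# Illustrative sketch: apply the working definitions quoted above.
# Reperfusion decisions rest on ST elevation alone, so ST elevation is
# checked first; markers then separate NSTEMI from unstable angina.
def classify_acs(st_elev_chest_mm, st_elev_limb_mm, new_bbb, markers_elevated):
    """Return a working label for an ischemic presentation."""
    st_elevation = (st_elev_chest_mm > 2      # >2 mm in two adjacent chest leads
                    or st_elev_limb_mm > 1    # >1 mm in two adjacent limb leads
                    or new_bbb)               # or new bundle branch block
    if st_elevation:
        return "STEMI (STE-ACS)"
    if markers_elevated:
        return "NSTEMI"
    return "unstable angina"

# Example: 3 mm ST elevation in two adjacent chest leads with raised markers
print(classify_acs(3.0, 0.0, new_bbb=False, markers_elevated=True))  # STEMI (STE-ACS)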
Since the widespread availability of highly sensitive
biochemical markers of myocardial cell necrosis
(troponin I and T), many patients previously classified as
having unstable angina now fall into the category of
NSTEMI. In practice, decision making about the need
for emergency myocardial reperfusion therapy (thrombolytic drugs or percutaneous coronary intervention (PCI))
is based on the presence or absence of new ST elevation.
The 2007 ESC (European Society of Cardiology) guidelines therefore classify patients into two categories based
on the implications for patient management:
● ST elevation ACS (STE-ACS): Chest pain and ST elevation (STE) for greater than 20 min – the immediate goal is to rapidly reestablish coronary flow by primary percutaneous coronary intervention (PCI) or pharmacological thrombolysis.
● Non-ST elevation ACS (NSTE-ACS): Chest pain without ST elevation for greater than 20 min – immediate
management is to treat myocardial ischemia. Serial
ECG monitoring and measurements of biochemical
markers of myocardial necrosis will guide further
management.
Evaluation/Assessment
History
Most patients have chest pain that typically feels like
pressure on the chest and may radiate to the left arm,
neck, or jaw. The pain may be intermittent or continuous,
and there may be associated symptoms including nausea
and abdominal pain. Chest pain may be atypical or may be
absent (more common in patients with diabetes mellitus).
Some patients may have had increasing frequency and
severity of chest pain over days or weeks (crescendo
angina). Symptoms are not helpful in differentiating
STE- and NSTE-ACS.
There may be a history or family history of coronary
artery disease or conditions known to be associated with
increased incidence such as diabetes mellitus, hyperlipidemia, peripheral or cerebrovascular disease, hypertension,
and smoking.
Physical Examination
Full physical examination should be carried out but is
often normal. There may be evidence of previous cardiovascular interventions, or signs of heart failure. Specific
complications of myocardial infarction with physical signs
include pericarditis, mitral regurgitation, and ventricular
septal rupture. Physical signs of other potential diagnoses,
for example, pneumothorax, should be sought.
Investigations
ECG – 12-lead ECG should be carried out as soon as
possible after presentation and repeated after 6 and 24 h
and compared, if possible, to previous recordings. Presence of new ST elevation as defined in STEMI (above)
leads to a diagnosis of STE-ACS and immediate revascularization therapy. ST depression of at least 0.5 mm in two
adjacent leads is seen in patients with NSTE-ACS, with
a poorer prognosis associated with deeper ST segment
depression. T wave inversion may also occur, but in
a small proportion of patients with NSTE-ACS the ECG
is normal. Use of right and extended left chest electrode
positions may be helpful in identifying right and posterior
ischemia. Stress ECG testing is indicated for risk assessment in asymptomatic patients, without diagnostic resting ECG changes or elevated troponin concentrations,
prior to hospital discharge.
Biochemical markers of myocardial cell necrosis –
Elevated serum troponin T or I concentrations are the
most sensitive and specific markers of myocardial cell
death and useful as prognostic markers and therefore
used to determine management of patients with NSTE-ACS. However, troponin concentrations may not start to rise above the reference concentration for at least 3 h after the ACS has started, and in NSTE-ACS this time may be
considerably longer. Further troponin measurements
should be carried out 6–12 h after episodes of chest pain.
Elevated troponin concentrations are found in conditions
unrelated to ACS including sepsis, renal failure, cardiac
failure, and acute aortic dissection, and therefore troponin
concentrations need to be interpreted in the clinical context and in conjunction with other investigations.
Chest X-ray – There may be evidence of an enlarged heart or pulmonary edema, and a chest radiograph may be required to exclude differential diagnoses.
Echocardiography – Transthoracic echocardiography
(TTE) should ideally be carried out in the assessment of ACS
and may be useful in assessing ischemia-induced regional
myocardial motion abnormalities, assessing potential
complications of myocardial infarction such as ischemic
mitral regurgitation and excluding differential diagnoses.
Rapid, focused TTE assessment of severely ill patients by
non-cardiologists is being promoted by courses such as
FEER-Germany and FEEL-UK in order to increase patient
access to emergency echocardiography and allow faster
recognition of life-threatening complications.
Coronary angiography – The current gold standard for imaging the coronary arteries; it carries rare but serious risks of CVA, arrhythmia, pericardial hemorrhage, arterial dissection and/or obstruction, renal failure, and anaphylaxis. This technique also facilitates reperfusion therapy
using angioplasty with or without coronary artery stenting.
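The serial troponin strategy described above under Biochemical markers can be restated as a simple timing rule. The sketch below is illustrative only; the 3 h and 6–12 h figures are those quoted in the text, and the function name is hypothetical rather than part of any protocol.

# Illustrative sketch: troponin sampling relative to the onset of chest pain,
# using the intervals quoted in the text (a rise may take at least 3 h;
# repeat sampling 6-12 h after episodes of pain). Not a clinical protocol.
def troponin_sampling_advice(hours_since_last_pain):
    """Return sampling advice for a patient with suspected NSTE-ACS."""
    advice = ["measure troponin on presentation"]
    if hours_since_last_pain < 3:
        advice.append("a negative early result does not exclude ACS (rise may take >= 3 h)")
    if hours_since_last_pain < 12:
        advice.append("repeat the measurement 6-12 h after the last episode of chest pain")
    advice.append("interpret results in clinical context; sepsis, renal failure, "
                  "cardiac failure, and aortic dissection also raise troponin")
    return advice

for line in troponin_sampling_advice(2):
    print(line)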
Differential Diagnosis of Chest Pain/ACS
Aortic dissection – May present with chest pain and can
cause ACS if dissection involves coronary arteries. Other
vascular diagnoses that can mimic ACS include aortic
aneurysm and aortic coarctation.
Esophageal spasm
Gastric ulceration or perforation, cholecystitis, pancreatitis
Chest wall pain
Pleural pain, pneumonia, pulmonary embolism, and
infarction
Pericardial disease
Other types of heart disease, e.g., myocarditis and valvular disease (e.g., aortic stenosis)
Treatment
The treatment of ACS is an area in which evidence relating
to new physical and drug treatments becomes available
frequently and there are often changes to best practice.
Readers should consult the latest guidelines from the ESC
and ACC/AHA (American College of Cardiology/American
Heart Association) [2, 3, 4, 5].
General
Presentation
All patients presenting with suspected ACS should be assessed rapidly using a standard ABCDE approach.
Patients with an oxygen saturation <90% should receive
supplemental oxygen and an intravenous cannula placed,
with blood simultaneously sampled for measurement of
troponin, creatinine, glucose, and full blood count. Pain
not responding to sublingual nitrate should be managed
with intravenous nitrate infusion and morphine with an
antiemetic. Basic observations should be recorded, continuous ECG monitoring attached, and a 12-lead ECG recorded. Cardiac arrest should be managed with standard ALS protocols. Management will be determined by
classification into NSTE-ACS, STE-ACS, or low likelihood
of ACS.
NSTE-ACS
Management is based on risk assessment of the likelihood of further coronary events and death, with high-risk patients managed with earlier coronary angiography and revascularization, and lower-risk patients managed with medical treatment alone.
Risk Stratification
Factors associated with an increased risk of death in
patients with ACS include previous coronary artery disease, main stem or three-vessel disease, persistent chest pain, diabetes mellitus, increasing age, higher heart rate, higher creatinine, higher Killip class [6] (Table 1), higher concentrations of biomarkers of myocardial necrosis, ST segment changes, decreasing systolic blood pressure, and occurrence of cardiac arrest.
A number of risk scoring systems have been developed to
calculate both risks of in-hospital death and mortality over
longer time periods. Examples of risk scoring systems
include GRACE [7] (Global Registry of Acute Coronary
Events), TIMI, FRISC, and PURSUIT.
Coronary Syndromes, Acute. Table 1 Killip classification
Class 1 – No evidence of heart failure
Class 2 – Elevated JVP/crackles on lung auscultation
Class 3 – Acute pulmonary edema
Class 4 – Cardiogenic shock
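Risk scores such as those listed above combine simple clinical variables into a total that predicts adverse events. The Python sketch below illustrates the idea using the commonly quoted items of the TIMI score for UA/NSTEMI; it shows how an additive score is computed, is not a tool for clinical use, and readers should consult the original publications.

# Illustrative sketch: an additive risk score. The items below are the commonly
# quoted TIMI UA/NSTEMI variables, each worth one point; shown for illustration.
TIMI_ITEMS = (
    "age 65 or older",
    "three or more coronary risk factors",
    "known coronary stenosis of 50% or more",
    "aspirin use within the last 7 days",
    "two or more anginal episodes in the last 24 h",
    "ST segment deviation of 0.5 mm or more",
    "elevated cardiac markers",
)

def additive_score(present_items):
    """Sum one point for each recognized item that is present."""
    return sum(1 for item in present_items if item in TIMI_ITEMS)

example = ["age 65 or older", "ST segment deviation of 0.5 mm or more",
           "elevated cardiac markers"]
print(additive_score(example))  # 3; higher totals indicate higher risk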
Medical Therapy
Antiplatelet drugs
● Aspirin
All patients, without contraindications, should receive standard uncoated oral aspirin 160–325 mg (chewed) on presentation with an ACS, continued at 75–100 mg daily (the doses quoted in this list are collected in the sketch that follows the list).
● Clopidogrel
All patients, without contraindications, should
receive oral clopidogrel 300 mg, followed by 75 mg
daily, and continued for 12 months. A dose of 600 mg
should be considered in patients about to undergo
PCI. Patients receiving clopidogrel and requiring
urgent CABG should stop it 5 days prior to surgery if
this is clinically possible.
● Glycoprotein IIb/IIIa inhibitors
Tirofiban or eptifibatide treatment, in addition to
aspirin and clopidogrel, is indicated for patients who
are at high risk of continued coronary events, and
should be used in combination with an anticoagulant.
For patients who have not received either agent and who undergo PCI, abciximab should be used.
● Careful assessment of the relative risks of hemorrhagic
complications versus further coronary thrombosis
should be made prior to administering these drugs.
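For convenience, the antiplatelet doses quoted in this list can be held in a simple lookup structure, as in the hedged Python sketch below; the doses restate the text above, the structure itself is illustrative only, and current guidelines should always be checked before use.

# Illustrative sketch: the antiplatelet regimens quoted in the text, stored as
# a lookup table. Values restate the surrounding text; not prescribing advice.
ANTIPLATELET_REGIMENS = {
    "aspirin": {
        "loading": "160-325 mg uncoated, chewed, on presentation",
        "maintenance": "75-100 mg daily",
    },
    "clopidogrel": {
        "loading": "300 mg (consider 600 mg if PCI is planned)",
        "maintenance": "75 mg daily for 12 months",
        "caution": "stop 5 days before urgent CABG where clinically possible",
    },
    "GP IIb/IIIa inhibitors": {
        "use": "tirofiban or eptifibatide for high-risk patients, with an "
               "anticoagulant; abciximab at PCI if neither has been given",
    },
}

for agent, details in ANTIPLATELET_REGIMENS.items():
    print(agent, details)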
Anticoagulants
All patients presenting with NSTE-ACS should receive
anticoagulants in addition to antiplatelet drugs. Choice
of agent will depend on the clinical scenario, risk assessment of further coronary events, and potential hemorrhagic complications.
● Heparin
Low molecular weight heparins (LMWH) have advantages over unfractionated heparin (UH): they are easier to administer, require less monitoring, and are associated with a lower incidence of heparin-induced thrombocytopenia (HIT).
● Fondaparinux (Factor Xa inhibitor)
An alternative to heparin for patients not undergoing urgent angiography and PCI, associated with a lower incidence of hemorrhagic complications.
● Bivalirudin and other direct thrombin inhibitors
An alternative to heparin, with fewer hemorrhagic complications.
Antianginal Agents
● Nitrates – If sublingual GTN is ineffective in relieving
ischemic chest pain, intravenous infusion should be
used, but may cause hypotension and is
contraindicated in patients taking PDE-5 inhibitors
(e.g., sildenafil).
● Beta-blockers – If there are no contraindications, beta-blocking drugs should be administered with a target
heart rate of 50–60 bpm. Care should be taken in
patients with evidence of AV conduction block or
significant left ventricular dysfunction.
● Calcium channel blockers – Indicated for the treatment of angina secondary to coronary vasospasm, particularly the dihydropyridines (e.g., nifedipine). In other situations, calcium channel antagonists may be used as alternatives in patients who are unable to take beta-blockers, or in addition to beta-blockers. Dihydropyridines should not be used without a beta-blocker in patients with non-vasospastic angina.
Revascularization
High-risk patients should have urgent coronary angiography followed by revascularization, particularly when there
is continuing or unresolving chest pain with dynamic ST
segment changes, hemodynamic instability, heart failure,
or life-threatening arrhythmias.
Patients in a medium- to high-risk group, without life-threatening complications, should have coronary angiography performed within 72 h and revascularization (PCI
or CABG) if indicated.
Low-risk patients should undergo a noninvasive test of inducible ischemia while in hospital, proceeding to coronary angiography if the test is positive.
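The triage just described maps onto a small decision rule. The following sketch restates it for illustration only; the risk group is assumed to come from one of the scoring systems listed earlier, and the function and argument names are hypothetical.

# Illustrative sketch: restate the NSTE-ACS revascularization triage described
# above. Risk group is assumed to come from a formal score (e.g., GRACE).
def nste_acs_strategy(risk_group, life_threatening_features=False):
    """Suggest an invasive strategy for NSTE-ACS per the text above."""
    if risk_group == "high" and life_threatening_features:
        return "urgent coronary angiography and revascularization"
    if risk_group in ("medium", "high"):
        return "coronary angiography within 72 h, revascularization if indicated"
    return ("noninvasive test of inducible ischemia in hospital; "
            "coronary angiography if positive")

print(nste_acs_strategy("high", life_threatening_features=True))
print(nste_acs_strategy("low"))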
STE-ACS
Following initial assessment and management (as above),
patients with STE-ACS presenting within 12 h of symptom onset require urgent coronary reperfusion therapy using
either PCI or thrombolytic drugs. Risk assessment can be
carried out using one of the established systems (e.g.,
TIMI risk score for STEMI).
Aspirin should be given to all patients without contraindications (as for NSTE-ACS). Patients undergoing PCI should receive a clopidogrel loading dose (300 or 600 mg). During PCI, heparin (UH) is given to reduce thrombotic complications; bivalirudin may be used as an alternative.
The GP IIb/IIIa inhibitor abciximab has been shown to
improve outcome post PCI and may be commenced during the procedure and infused intravenously for 12 h
afterward. An ACE inhibitor (ACEI) should be started in the first 24 h in
high-risk patients and continued. Beta-blockers are useful
in decreasing further ischemia but should be avoided in
patients with unstable hemodynamics, AV conduction
block, or asthma.
Reperfusion Therapy
● PCI – Is indicated urgently for patients with STE-ACS
within 12 h of onset of symptoms. Patients presenting
after 12 h from the onset of symptoms with continuing
evidence of ischemia should also be managed with
urgent angiography and PCI.
Longer time to coronary reperfusion is associated with increased mortality. ESC guidelines recommend that the time from first medical contact
to intracoronary balloon inflation should be less
than 2 h in all patients and less than 90 min in
those with a large area of myocardial infarction and
low risk of hemorrhage. Primary PCI should be used, where available, in preference to pharmacological thrombolysis in all patients, but particularly in
patients with cardiogenic shock or heart failure and
in patients with contraindications to fibrinolytic
drugs.
● Fibrinolytic therapy – Is indicated in circumstances
when PCI cannot be performed within recommended
times or is contraindicated. Pre-hospital administration is associated with improved outcomes compared with in-hospital administration. Fibrinolysis carries an approximately 1% risk of intracranial hemorrhage, which is more common in women, patients with hypertension, older patients, and patients with known cerebrovascular disease. In addition, there is approximately a 10% risk of other serious hemorrhage. Because of these risks, fibrinolytic therapy is contraindicated in patients with a previous hemorrhagic stroke (or stroke of unknown etiology) and within 6 months of an ischemic stroke. Other absolute
contraindications are: known bleeding disorder, central nervous system tumors or trauma, head injury,
major trauma or surgery within the last 3 weeks,
gastrointestinal hemorrhage within the previous
month, aortic dissection, and puncture sites that are
not compressible. Relative contraindications to
thrombolytic therapy are: oral anticoagulants, TIA in
the last 6 months, severe hypertension, pregnancy
including up to 1 week postpartum, active peptic
ulceration, advanced liver disease, infective endocarditis, and failure to respond to cardiopulmonary
resuscitation.
Streptokinase should not be readministered because
antibody generation reduces its activity and can increase
the risk of allergic reactions.
If there is evidence of failure of pharmacological thrombolysis (approximately 20% of patients) or reinfarction (approximately 10% of patients), urgent coronary angiography and PCI are indicated. If this is not possible, a second dose of a fibrinolytic agent may be given (not streptokinase if already administered).
Patients presenting after 12 h from initial symptoms
should be treated with aspirin, clopidogrel, and an antithrombin drug.
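The timing targets and contraindications in this section amount to a choice between primary PCI and fibrinolysis. The Python sketch below restates that choice for illustration only; the 90 and 120 min targets and the contraindication list are taken from the text, and the functions are hypothetical, not a treatment algorithm.

# Illustrative sketch: choose a reperfusion strategy for STE-ACS using the
# targets and absolute contraindications quoted above. Not a clinical tool.
ABSOLUTE_CONTRAINDICATIONS = (
    "previous hemorrhagic stroke or stroke of unknown etiology",
    "ischemic stroke within 6 months",
    "known bleeding disorder",
    "central nervous system tumor or trauma",
    "head injury, major trauma or surgery within 3 weeks",
    "gastrointestinal hemorrhage within 1 month",
    "aortic dissection",
    "non-compressible puncture sites",
)

def fibrinolysis_contraindicated(history):
    """True if the history contains any absolute contraindication listed above."""
    return any(item in ABSOLUTE_CONTRAINDICATIONS for item in history)

def reperfusion_strategy(minutes_to_balloon, history, large_mi_low_bleed_risk=False):
    """Suggest primary PCI or fibrinolysis using the quoted time targets."""
    # First medical contact to balloon: <90 min for a large infarct with low
    # bleeding risk, otherwise <120 min; PCI is also preferred when
    # fibrinolysis is contraindicated.
    target = 90 if large_mi_low_bleed_risk else 120
    if minutes_to_balloon <= target or fibrinolysis_contraindicated(history):
        return "primary PCI"
    return "fibrinolysis now; rescue angiography and PCI if reperfusion fails"

print(reperfusion_strategy(75, history=[]))
print(reperfusion_strategy(180, history=["aortic dissection"]))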
Complications of ACS
Acute right ventricular failure may present with the findings of low cardiac output, ST elevation in the inferior and right-sided chest leads, and an elevated JVP without evidence of pulmonary edema. Right ventricular failure is
difficult to manage, and it is important to ensure adequate
left ventricular preload and maintain coronary perfusion
pressure. Early coronary reperfusion should be
undertaken.
Arrhythmia
Arrhythmias are common following STE-ACS and are managed
using standard algorithms.
Cardiogenic Shock
Patients have a low cardiac output state usually associated
with hypotension and elevated left atrial pressure. Early
revascularization is indicated with appropriate supportive
therapy which may include mechanical circulatory support with an intra-aortic balloon pump or ventricular
assist device.
Mitral Regurgitation
Mitral regurgitation may occur because of annular dilatation or papillary muscle dysfunction or rupture. It may be severe and require support with an intra-aortic balloon pump and afterload reduction. Treatment is early surgical valve repair or
replacement.
Ventricular Rupture
● Ventricular septal rupture – Diagnosis is suspected because of a deteriorating clinical condition with a new systolic murmur, a step-up in oxygen saturation as a catheter is moved from the right atrium to the right ventricle, and the appearances on echocardiography. Management
is stabilization followed by surgical repair, though
small lesions have been closed with percutaneous
devices.
● Free wall rupture – Diagnosis suspected because of
rapidly