Pulse Oximetry
Author: C Crawford Mechem, MD, FACEP
Section Editor: Polly E Parsons, MD
Deputy Editor: Helen Hollingsworth, MD
Literature review current through: Oct 2012. | This topic last updated: Apr 23, 2012.
INTRODUCTION
Hypoxemia is an important and potentially avoidable cause of morbidity
and mortality in many hospital settings, including the intensive care unit (ICU), emergency
department, procedure suite, and operating room. Rapid, accurate detection of hypoxemia is
critical to prevent serious complications; however, oxygenation is difficult to assess on the basis
of physical examination alone. Frank cyanosis does not develop until the level of
deoxyhemoglobin reaches 5 g/dL, which corresponds to an arterial oxygen saturation (SaO2) of
around 67 percent [1]. Furthermore, the threshold at which cyanosis becomes apparent is
affected by multiple variables including peripheral perfusion, skin pigmentation, and hemoglobin
concentration [2].
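As a rough check of that figure (a back-of-the-envelope estimate assuming a typical total hemoglobin concentration of 15 g/dL, which is not specified above):
\[
\mathrm{SaO_2} \approx \frac{\text{total Hb} - \text{deoxyHb}}{\text{total Hb}} = \frac{15 - 5}{15} \approx 0.67 \quad (\text{about 67 percent})
\]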
Blood gas analysis was for many years the only available method of detecting hypoxemia in
critically ill patients, but this technique is painful, has potential complications, and does not
provide immediate or continuous data [3]. (See "Arterial blood gases" and "Arterial
catheterization techniques for invasive monitoring".)
Pulse oximetry allows noninvasive measurement of arterial hemoglobin saturation, without the
risks associated with arterial puncture. Over the past 30 years, pulse oximetry has become the
standard for continuous, noninvasive assessment of arterial oxygen saturation [4]. Pulse oximetry
is now so widely used that it has been called the "fifth vital sign" [5].
Despite the widespread use of pulse oximetry in clinical monitoring, many practitioners are
unaware of the potential limitations of this technology. Theoretical and clinical aspects of pulse
oximetry will be reviewed here. The interpretation of arterial oxygen tension is discussed
separately. (See "Oxygenation and mechanisms of hypoxemia".)
HISTORY
Carl Matthes invented the first noninvasive oximeter employing an ear probe in
1935 [6]. The device used two wavelengths of light to compensate for variations in tissue
thickness and blood content, but did not account for pulsatile flow. The development of oximetry
intensified during World War II, when a method was sought to monitor oxygenation in pilots
flying at high altitudes in pressurized cockpits [3,6,7]. These early nonpulsatile devices did not
measure true arterial saturation because of interference from capillary and venous blood, and
they were unwieldy to use and transport [8].
In 1970, scientists at Hewlett-Packard developed the first widely used, commercial ear oximeter
that preferentially measured arterial saturation by heating the tissue to 41°C to increase local
cutaneous blood flow [7]. In 1974, Takuo Aoyagi found that arterial oxygen saturation could be
measured by quantifying pulsations in the light signals coming through tissue. This eliminated
the need for heating the tissue, and his device is the ancestor of the current generation of pulse
oximeters [9].
PRINCIPLES
The theoretical basis for pulse oximetry derives from the Beer-Lambert law.
This law states that the absorption of light of a given wavelength passing through a non-
absorbing solvent, which contains an absorbing solute, is proportional to the product of the solute
concentration, the light path length, and an extinction coefficient. The Beer-Lambert law can
readily be applied to co-oximeters used for blood gas analysis, in which a sample of arterial
blood can be placed within a cuvette, and factors such as light path length and solute
concentration can be controlled [10].
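Written out (a standard statement of the law, with symbols chosen here for illustration rather than taken from the text above), the absorbance A of light at a given wavelength is:
\[
A = \varepsilon \, c \, l
\]
where ε is the extinction coefficient of the absorbing solute at that wavelength, c is its concentration, and l is the optical path length. In a co-oximeter cuvette, ε and l are known and fixed, so measuring A yields the concentration of each hemoglobin species.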