BS2010

Friday, April 26, 2024 8:56 PM

Lecture 1: Introduction

1. Brief History of Light Microscopy
• Magnifying Glass: Earliest evidence from 424 BC, inspired by water droplets.
• Eyeglasses: Invented around 1300 AD, significant for later microscope development.
• First Compound Microscope: By Hans and Zacharias Jansen in 1595.

2. Pioneers in Microscopy
• Robert Hooke (1635–1703): Birth of cell biology, inventor of the compound microscope, author of "Micrographia."
• Antonie van Leeuwenhoek (1632–1723): Known for his simple microscope and discoveries of protists, bacteria, and sperm.

3. Comparison of Microscopes
• Hooke's Compound Microscope: Plagued by chromatic and spherical aberrations.
  1. Chromatic Aberration:
    ○ Chromatic aberration occurs when different wavelengths (colors) of light are focused at different points along the optical axis. This is due to the dispersion of light: the refractive index of the lens material varies with the wavelength of light passing through it.
    ○ Types:
      ▪ Axial Chromatic Aberration: Longitudinal shift of focus for different colors, resulting in a blur along the axis.
      ▪ Lateral Chromatic Aberration: Transverse shift, where different colors come into focus at different lateral positions.
    ○ Effects: Chromatic aberration can lead to colored fringes around objects in the image and a reduction in the overall sharpness and clarity of the image.
    ○ Correction: This aberration is often corrected by using a combination of lenses made from different types of glass with varying dispersion properties, a technique known as achromatic or apochromatic lens design.
  2. Spherical Aberration:
    ○ Spherical aberration is an optical defect that occurs when light rays passing through a spherical lens near its edge are not focused as sharply as those passing through the central region. This is because spherical lenses bend light paths differently depending on the distance from the optical axis.
    ○ Effects: The result is a blurred image with a hazy, out-of-focus appearance, particularly in the periphery of the field of view.
    ○ Correction: Spherical aberration can be reduced by using lenses with an aspherical shape, which are designed to correct the path of light rays and bring them to a common focus. In some high-quality microscopes, special lens designs and coatings are used to minimize these aberrations.
• Leeuwenhoek's Simple Microscope: Superior in the 17th century due to technological limitations.

Collimated, Coherent, and Incoherent Light
1. Collimated Light:
  ○ Collimated light refers to a set of light rays that are parallel to each other. In other words, they are not converging or diverging as they travel. This property is important in optical systems where parallel rays are needed for proper focusing or to maintain a consistent intensity over a given area.
  ○ Collimation is typically achieved using devices like collimating lenses or mirrors, which adjust the light rays to be parallel.
  ○ Lasers are often described as collimated because they naturally produce light that is both intense and parallel over considerable distances.
2. Coherent Light:
  ○ Coherence refers to the phase relationship between different waves of light. Coherent light has a constant phase difference between waves, which means they can interfere constructively or destructively when they overlap.
  ○ A light source is considered coherent if it has a single, well-defined frequency (or a very narrow range of frequencies) and if the light waves maintain a fixed phase relationship with each other.
  ○ Lasers are a prime example of coherent light sources because they emit light that is not only monochromatic (single wavelength) but also has a fixed phase relationship across the entire beam.
Relationship Between Collimation and Coherence:
• While all lasers produce coherent light, not all coherent light sources are collimated. For example, some light-emitting diodes (LEDs) can emit light that is coherent over a small area but is not collimated.
• Conversely, light can be collimated without being coherent. For instance, sunlight can be made collimated using a lens or mirror, but it is not coherent because the light waves do not have a fixed phase relationship.
In summary, collimated light has parallel rays, and coherent light has a fixed phase relationship between its waves. A laser can be both collimated and coherent, but these terms describe distinct optical properties, and one does not necessarily imply the other.

Incoherence
Incoherent light is light in which the phase relationship between different light waves is random or constantly changing. This means that the light waves do not have a fixed or consistent phase difference, and as a result, they do not exhibit constructive or destructive interference over a significant period or distance. Here are some key points about incoherent light:
1. Phase Variation: In incoherent light, the phase of the light waves varies randomly, leading to a lack of a stable interference pattern.
2. Spectrum: Incoherent light sources often have a broad spectrum, meaning they emit light over a range of wavelengths. However, it's important to note that having a broad spectrum does not necessarily mean the light is incoherent; it's the lack of a fixed phase relationship that defines incoherence.
3. Natural Light: Most natural light sources, such as the sun or a light bulb, emit incoherent light because they emit light from many different atoms or molecules that are not phase-locked.
4. Speckle Pattern: Unlike coherent laser light, which produces a speckled pattern when reflected or scattered off a rough surface, incoherent light does not form a stable speckle pattern, because the random phase relationships average out the interference.
5. Applications: Incoherent light is used in many everyday applications because it is generally easier to produce and handle. For instance, incandescent bulbs, fluorescent lamps, and LEDs are all incoherent light sources.
6. Microscopy: In microscopy, incoherent light is often preferred because it provides a more even illumination of the sample, reducing the risk of damaging the sample and producing a clearer image.
7. Interference and Diffraction: Incoherent light does not produce sharp interference patterns or diffraction effects, which can be an advantage in applications where a uniform light distribution is desired.
8. Thermal Light: Light emitted by a hot object, such as a glowing filament in an incandescent light bulb, is typically incoherent. This is because the light is emitted by a large number of atoms or molecules that are not synchronized.
In summary, incoherent light is characterized by the random phase relationship between its constituent waves, which leads to a lack of stable interference effects. This property makes incoherent light suitable for many applications where a uniform and non-damaging light source is needed.

4. Development of the Compound Microscope
• 18th Century: Reduction of chromatic aberration.
• 1830: Introduction of methods to reduce spherical aberration.
• 1873: Ernst Abbe's theory of the microscope and the formula for resolution, d = 0.61λ/NA.

5. Advancements in the Last Century
• Phase Contrast Microscope: Allows for the study of live cells.
• 1961: Introduction of the confocal microscope.
• 1990s: Single-molecule imaging techniques.

Lecture 2: Bright-Field Microscopy

Objective Lens (Objective)
• Definition: The "eye" of a microscope, responsible for the primary image formation.
• Features: Anti-reflective coatings; various specifications including magnification, aberration correction, infinity-corrected system, cover glass thickness, working distance (WD), and immersion medium.
• Mounting: Objectives are mounted on a nosepiece.

Image Quality and Magnification
• Common Magnifications: 5X, 10X, 20X, 40X, 60X, 100X.
• Eyepiece (Ocular): Used in conjunction with the objective to provide total magnification.
• Total Magnification: Calculated as Objective Magnification × Eyepiece Magnification.
• Useful Total Magnification: Limited by the numerical aperture (NA), often around 500 × NA, to avoid empty magnification.

In the context of microscopy, there are two key terms related to magnification: total magnification and useful magnification. Here's the distinction between the two:

Total Magnification
• Definition: The overall magnification achieved by a microscope, which is the product of the magnification of the objective lens and the eyepiece (ocular lens).
• Calculation: Total Magnification = Objective Magnification × Eyepiece Magnification.
• Example: If you have an objective with a 60X magnification and an eyepiece with a 10X magnification, the total magnification would be 600X (60X × 10X).

Useful Magnification
• Definition: The magnification at which the microscope can effectively resolve details in a specimen. Beyond this point, increasing magnification does not improve the clarity or resolution of the image; it may actually make it more blurry or less detailed.
• Limitation: Useful magnification is limited by the numerical aperture (NA) of the microscope's objective lens and the properties of the light used (its wavelength, λ).
• Calculation: A common rule of thumb is that the useful total magnification is about 500 times the numerical aperture (500 × NA). This is a general guideline and can vary depending on the specific microscope and the quality of its components.
• Example: If the NA of an objective lens is 1.4, the useful magnification would be around 500 × 1.4 = 700X.

Key Differences
1. Purpose: Total magnification is a straightforward multiplication of the magnifications of the objective and eyepiece, while useful magnification is about the practical limit of resolving power for a given microscope setup.
2. Resolution: Total magnification doesn't consider the resolving power of the microscope, whereas useful magnification is directly related to the resolving power determined by the NA and the wavelength of light.
3. Image Quality: Beyond the useful magnification, the image quality deteriorates because the microscope cannot resolve finer details, resulting in a blurred or less distinct image even though the total magnification is higher.

Optical Aberrations
• Types: Chromatic, spherical, and field curvature aberrations.
• Corrections: Different objective types correct for these aberrations to varying degrees (Achromat, Plan Achromat, Fluorite, Plan Fluorite, Plan Apochromat).

Field Curvature
• Definition: Field curvature is an optical aberration in which the image plane is not flat but curved. This means that a flat object in the field of view will not be in perfect focus at all points simultaneously. Instead, the best focus is typically achieved at the center of the field, with the focus deteriorating as you move towards the edges.
• Effect on Image: Due to field curvature, objects at the edges of the field may appear blurred or distorted even when the center is in sharp focus. This can be particularly problematic when observing large specimens or when high-quality imaging across the entire field of view is required.
• Correction: High-quality objectives often have corrected or reduced field curvature to ensure that the entire field of view is in focus, or at least that a larger, flatter area is in focus. This is especially important in applications like photography or digital scanning of microscopy slides.

Lecture 3: Wide-Field Microscopy

Wide-field fluorescence microscopy and bright-field microscopy are two different imaging techniques used in microscopy. Here's a comparison of the two:

Bright-Field Microscopy:
1. Principle: Bright-field microscopy is the most basic form of microscopy. A specimen is illuminated by a light source, and the light that is not absorbed by the specimen is collected by the objective lens to form an image.
2. Contrast: It relies on the absorption of light by the specimen to create contrast. Denser or thicker areas of the specimen absorb more light and appear darker.
3. Resolution: It typically has a lower resolution compared to other, more advanced microscopy techniques.
4. Setup: It is relatively simple and does not require specialized equipment or dyes.
5. Applications: It is widely used for observing transparent or opaque samples, such as cells, tissues, and microorganisms.
6. Limitations: The main limitation is that living cells or unstained samples may not provide enough contrast to be clearly visualized (exception: phase contrast microscopy).

Wide-Field Fluorescence Microscopy:
1. Principle: In wide-field fluorescence microscopy, the specimen is stained with fluorescent dyes or tagged with fluorescent proteins. Excitation light of a specific wavelength is used to illuminate the specimen, causing the fluorophores to emit light at a different wavelength.
2. Contrast: It provides high contrast by using the specific emission from fluorophores, which can be detected separately from the excitation light.
3. Resolution: Offers higher resolution and can be enhanced by using objectives with higher numerical apertures (NA).
4. Setup: Requires specialized filters (excitation and emission filters) and a light source that provides the specific excitation wavelength needed by the fluorophores.
5. Applications: Ideal for studying cellular processes, co-localization of proteins, gene expression, and other applications requiring specific molecular labeling.
6. Advantages: It allows for the detection of specific biomolecules and multi-color imaging, and it can be used for live-cell imaging with minimal phototoxicity.
7. Limitations: The need for fluorescent labeling can be a limitation, as it may require genetic manipulation or the use of antibodies. Additionally, photobleaching can be an issue, where the fluorophores lose their ability to fluoresce over time.

In summary, while both techniques allow for the visualization of microscopic samples, wide-field fluorescence microscopy offers higher contrast and specificity through molecular labeling, whereas bright-field microscopy is a more straightforward technique that relies on the absorption of light by the sample to create an image.

CCD (Charge-Coupled Device)
CCD stands for Charge-Coupled Device. It is a type of semiconductor device that is widely used in various applications for capturing and processing digital images, as well as for detecting and measuring light intensities. Here's a detailed explanation of what a CCD is and its functions:
1. Image Capture: A CCD is used in digital cameras, including those found in microscopes, to capture images. It functions as an electronic light detector, converting light that passes through the camera lens into electrical charges.
2. Pixel Array: The CCD consists of an array of light-sensitive elements called pixels. Each pixel corresponds to a point in the captured image and accumulates a charge proportional to the light intensity that it receives.
3. Charge Transfer: The key feature of a CCD is its ability to transfer the electrical charge generated in each pixel in a sequential manner to adjacent pixels. This is how the device gets its name, as it "couples" together the charges from each pixel.
4. Readout Register: The accumulated charges are eventually transferred to a readout register, where they are converted into a voltage signal. This signal is then processed and digitized to create the final image.
5. High Sensitivity and Low Noise: CCDs are known for their high sensitivity to light and low noise levels, making them ideal for capturing clear and detailed images even in low-light conditions.
6. Resolution: The resolution of a CCD is determined by the number of pixels in the array. More pixels allow for higher-resolution images, capturing more details of the scene.
7. Monochromatic vs. Color CCDs: CCDs can be monochromatic (black and white) or color. Color CCDs use a color filter array (CFA), typically a Bayer filter pattern, to capture color information, which is then processed to create a full-color image.
8. Applications: Beyond digital cameras and microscopes, CCDs are used in a wide range of applications, including astronomy, where they are mounted on telescopes to capture images of celestial objects, and in scientific research for detecting and measuring light in various contexts.
9. Advantages: CCDs offer several advantages, such as high quantum efficiency, good linearity, and excellent image quality. They are also capable of capturing images with a wide dynamic range.
10. Limitations: One limitation of CCDs is that they can be more expensive than alternative image sensors, such as CMOS (Complementary Metal-Oxide-Semiconductor) sensors. Additionally, CCDs require more power to operate and can be slower in readout times compared to CMOS sensors.
In summary, a CCD is a sophisticated light detection device that is central to the operation of many digital imaging systems. Its ability to capture light as electrical charges and transfer these charges to create a high-quality image makes it a critical component in various imaging applications, including microscopy.

1. Introduction to Fluorescence Microscopy
• Fluorescence is a process where a substance absorbs light and re-emits it at a different wavelength.
• Key concepts: Photoluminescence, Stokes shift, Fluorophore, Phosphorescence.
a. Photoluminescence:
  ▪ Photoluminescence is a collective term that describes the phenomenon where a substance absorbs light (photons) and then re-emits the light. This process involves the absorption of energy from a light source, which excites the electrons in the substance to a higher energy state, followed by the release of this energy as light as the electrons return to their original (ground) state.
b. Stokes Shift:
  ▪ The Stokes shift refers to the difference in wavelength between the absorbed light and the emitted light in a photoluminescent process. Typically, the emitted light has a longer wavelength (and thus lower energy) than the absorbed light. This shift occurs because some of the absorbed energy is lost in the form of heat or through other non-radiative processes before the light is re-emitted.
c. Fluorophore (Fluorescent Molecule):
  ▪ A fluorophore is a specific type of chromophore that can undergo fluorescence. It is a molecule that can absorb light at a certain wavelength and then re-emit it at a longer wavelength. Fluorophores are used extensively as markers or labels in various scientific applications, including fluorescence microscopy, to visualize and track specific structures or molecules within a sample.
d. Phosphorescence:
  ▪ Phosphorescence is a type of photoluminescence where the emitted light is delayed after the cessation of the excitation light. Unlike fluorescence, which lasts only nanoseconds, phosphorescence can last for minutes or even hours. This extended emission is due to a different mechanism involving the transition of electrons to a 'triplet state,' which is longer-lived than the 'singlet state' involved in fluorescence. As a result, phosphorescent materials can store energy for a longer period before releasing it as light.

2. Molecular Explanation of Fluorescence
• Involves electron transition from the ground state to an excited state within a molecule.
• Quantum yield and Stokes shift are critical factors.
• Photobleaching is a concern, where fluorophores lose their ability to fluoresce.
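The Stokes shift described in these notes (emitted photons carry less energy than absorbed photons, with the difference lost non-radiatively) can be sketched numerically. A minimal sketch, assuming approximate literature excitation/emission peaks for fluorescein (~494 nm / ~521 nm), which are not figures from the lecture:

```python
# Toy calculation of photon energies and the Stokes shift for a fluorophore.
# The peak wavelengths are approximate literature values for fluorescein,
# used purely for illustration.
H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a photon of the given wavelength, in electron-volts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

excitation_nm, emission_nm = 494.0, 521.0
stokes_shift_nm = emission_nm - excitation_nm
# Energy lost per photon to heat / non-radiative processes:
energy_lost_ev = photon_energy_ev(excitation_nm) - photon_energy_ev(emission_nm)

print(f"Stokes shift: {stokes_shift_nm:.0f} nm")
print(f"Energy lost per photon: {energy_lost_ev:.3f} eV")
```

Because the emitted wavelength is longer, `energy_lost_ev` comes out positive, which is exactly the "energy lost as heat" step in the photoluminescence description above.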
Confocal microscope
A confocal microscope is an advanced type of optical microscope that provides high-resolution, three-
dimensional images of a specimen by using a technique known as confocal imaging. Here's how it works
and some of its key features:
Principle of Confocal Microscopy:
• In a traditional widefield microscope, the entire specimen is illuminated at once, and the resulting
image can be blurred due to out-of-focus light.
• A confocal microscope overcomes this issue by using a pinhole aperture to allow only in-focus light
from a single plane of the specimen to reach the detector.
Key Components and Features:
1. Laser Scanning: The confocal microscope uses a laser to scan the specimen point by point,
building up an image by only collecting light from a thin plane of focus.
2. Pinhole: A pinhole in the detection pathway ensures that only the light from the plane of focus
(the point where the laser is scanning) passes through, while out-of-focus light is blocked.
3. Depth Resolution: By sequentially scanning different planes of the specimen, a confocal
microscope can generate a series of two-dimensional images that, when combined, create a highly
detailed three-dimensional representation of the sample.
4. Fluorescence: Confocal microscopes are often used with fluorescent dyes or proteins, which are
illuminated by the laser and emit light at a different wavelength, allowing the collection of specific
signals from different structures or molecules within the specimen.
5. Optical Sectioning: The ability to collect data from a single plane at a time enables optical sectioning, which is the creation of virtual, thin sections of the specimen, reducing the out-of-focus information and increasing image clarity.
6. Image Processing: The collected data can be processed to generate a series of two-dimensional images or compiled into a three-dimensional model.
7. Applications: Confocal microscopy is widely used in biological and medical research for the study of cell structures, tissue architecture, and the distribution of specific proteins or other biomolecules within cells.

3. Jablonski Energy Diagram
• Illustrates energy levels and the transitions between them.
• Includes the concepts of energy loss as heat and the generation of oxygen radicals.
• Non-radiative transition: very small energy loss; no photons emitted; the energy is lost as heat.
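The optical-sectioning idea from the confocal discussion can be illustrated with a toy model: a widefield detector sums light from every z-plane, while the confocal pinhole passes only light from the focal plane. All numbers here are hypothetical and the "specimen" is just a stack of intensity grids:

```python
# Toy illustration of confocal optical sectioning (hypothetical numbers).
# The "specimen" is three z-planes of a 3x3 intensity grid; z = 1 is the
# focal plane, the other planes contribute only out-of-focus light.
specimen = [
    [[0, 0, 0], [0, 9, 0], [0, 0, 0]],   # z = 0 (out of focus)
    [[1, 2, 1], [2, 8, 2], [1, 2, 1]],   # z = 1 (focal plane)
    [[0, 0, 3], [0, 0, 0], [3, 0, 0]],   # z = 2 (out of focus)
]

def widefield(planes):
    """Widefield image: sums contributions from all z-planes (blur included)."""
    return [[sum(p[r][c] for p in planes) for c in range(3)] for r in range(3)]

def confocal(planes, focal_z):
    """Confocal image: the pinhole rejects all but the focal plane."""
    return [row[:] for row in planes[focal_z]]

print("widefield:", widefield(specimen))
print("confocal :", confocal(specimen, focal_z=1))
```

Scanning `focal_z` over 0, 1, 2 and stacking the results is the toy analogue of building the three-dimensional representation described in point 3 above.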

4. Excitation and Emission Spectra (Visible Spectrum)
• Describes how absorption and emission spectra are related.
• Mentions normalization of spectra and the mirror-image property.
• Absorption spectrum: absorption vs. wavelength.
• Emission spectrum: fluorescence intensity vs. wavelength.
• Small-molecule fluorophores: Fluorescein and X-Rhodamine-5-(and-6)-Isothiocyanate.

Other Optical Aberrations Mentioned
• Chromatic Aberration: Occurs when different colors (wavelengths) of light are focused at different points, causing a colored fringe around the image.
• Spherical Aberration: Happens when light rays at the periphery of the lens are focused at different points than those at the center, leading to a blurred image.
• Correction Approach: These aberrations are typically corrected by using lenses made from different types of glass or by using specific lens shapes, whereas field curvature is addressed by designing lenses that create a plane of focus that is flatter or more uniform across the field of view.

Additional Considerations
• Material and Design: Higher-grade objectives like Plan Fluorites and Plan Apochromats often use specialized materials like fluorite crystals or have complex designs to achieve superior correction of aberrations.
• Cost: The level of correction increases with the complexity of the design and the quality of the materials, which typically results in a higher cost for the objective.
• Resolution and Image Quality: Better correction of aberrations usually leads to higher resolution and better image quality, which is crucial for demanding applications like confocal microscopy or high-magnification work.
When selecting an objective lens, it's important to consider the specific requirements of the microscopy work. For routine work, an Achromat may be sufficient, while for more advanced research or where high-definition imaging is required, a Plan Apochromat might be necessary. Each type of objective has its own set of trade-offs between cost, image quality, and correction of optical aberrations.

6. Recent Breakthroughs
• 2006: Super-resolution microscopy; Nobel Prize in Chemistry 2014 awarded to Stefan Hell, Eric Betzig, and William Moerner.

Super-resolution microscopy
Super-resolution microscopy is a group of cutting-edge imaging techniques that overcome the diffraction limit of light, which traditionally has imposed a resolution barrier on optical microscopy. This breakthrough allows scientists to visualize cellular structures and molecular interactions with a resolution far below the Abbe diffraction limit, which is about 200 nanometers in the lateral (X-Y) plane and about 500 nanometers in the axial (Z) plane for visible light.

Here's an overview of super-resolution microscopy and some of the key techniques:
1. Breaking the Diffraction Limit:
  ○ The diffraction limit is a physical property of light that prevents conventional microscopes from resolving features smaller than a certain size.
  ○ Super-resolution microscopy uses various methods to localize fluorophores (light-emitting markers) with precision far better than the diffraction limit.
2. Key Super-Resolution Techniques:
  ○ Stimulated Emission Depletion (STED): Utilizes two laser beams to confine the area of fluorescence to a nanoscale region, effectively 'turning off' the fluorescence outside the region of interest.
  ○ Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM): These techniques involve the stochastic (random) activation of individual fluorophores in a field of densely labeled molecules. The exact position of each activated fluorophore is determined with high precision, and a super-resolved image is built up from the localized points.
  ○ Structured Illumination Microscopy (SIM): Uses patterned illumination to extract higher spatial frequencies that are otherwise not resolvable by a conventional microscope.
  ○ Minimal Localization Microscopy: An approach that combines widefield microscopy with advanced image analysis to achieve super-resolution.
3. Nobel Prize in Chemistry 2014:
  ○ The development of super-resolution microscopy was recognized with the Nobel Prize in Chemistry in 2014. The prize was awarded to Stefan Hell for STED microscopy, and to Eric Betzig and William E. Moerner for the development of single-molecule super-resolution imaging (PALM/STORM).
4. Applications:
  ○ Super-resolution microscopy has a wide range of applications in cell biology, neurobiology, and developmental biology, allowing researchers to study the dynamics of cellular structures and molecular interactions with unprecedented detail.
  ○ It is particularly useful for visualizing protein complexes, organelles, and the cytoskeleton, as well as for tracking the movement of individual molecules within cells.
5. Advantages:
  ○ The ability to resolve cellular structures at the nanoscale opens new possibilities for understanding biological processes.
  ○ Non-invasive for live-cell imaging, although some techniques may require specialized labeling or conditions.

7. The Nature of Light
• Duality of Light: Particle (photon) and wave (electromagnetic wave).
• Visible Light: Wavelength range from 400 nm to 700 nm.
• Light Sources: He-Ne laser (monochromatic) vs. tungsten/fluorescent lamps (continuous spectrum).

Helium-Neon (He-Ne) lasers and tungsten or fluorescent lamps are two different types of light sources that are used in various applications, including microscopy. Here are the main differences between them and their typical uses:

He-Ne Laser
1. Monochromatic: He-Ne lasers emit light at a single, well-defined wavelength (commonly around 632.8 nm for red light) or within a very narrow range of wavelengths.
2. Coherence: Because of their single wavelength, lasers produce coherent light, which means the light waves are in phase with each other.
3. Collimation: Laser light is highly collimated, meaning the light rays are parallel and do not diverge significantly over distance.
4. Intensity: Lasers can produce a high-intensity beam of light, which is useful for applications requiring a strong light source.
5. Applications in Microscopy:
  ○ Confocal Microscopy: He-Ne lasers are often used as a light source in confocal microscopy due to their monochromatic and coherent properties.
  ○ Spectroscopic Analysis: The narrow wavelength output is ideal for spectroscopic techniques that require specific wavelengths for analysis.
  ○ Laser Tweezers and Microdissection: The high intensity and collimation of lasers allow for precise manipulation of small particles or cells.

Tungsten/Fluorescent Lamps
1. Continuous Spectrum: Tungsten halogen lamps and fluorescent lamps emit a broad spectrum of light, which covers a wide range of wavelengths.
2. Incoherence: The light emitted by these sources is incoherent, meaning the light waves are not in phase with each other.
3. Diversity: Tungsten and fluorescent lamps are versatile and can be used in various lighting applications, not just microscopy.
4. Applications in Microscopy:
  ○ Brightfield and Phase Contrast Microscopy: These lamps are suitable for conventional microscopy techniques like brightfield and phase contrast, which do not require a specific wavelength.
  ○ Fluorescence Microscopy: Some fluorescence microscopy applications use broad-spectrum light sources.

5. Fluorescent Proteins
• Structure and function of Green Fluorescent Protein (GFP) from Aequorea victoria.
• Applications in cell biology through genetic tagging and live-cell imaging.
a. Beta-Can Structure: The fluorophore is located within a central beta-can, a barrel-like structure formed by eleven beta-strands. This beta-can encloses and protects the fluorophore from the external environment, which is crucial for its stability and fluorescence.
b. N and C Termini: GFP has distinct N-terminal and C-terminal regions. The N-terminus is often where the protein is fused to other proteins of interest for tagging and visualization purposes. The C-terminus is important for the stability of the protein.
c. Chromophore: The term "chromophore" specifically refers to the part of the fluorophore that actually absorbs light and undergoes electronic transitions to emit light. In GFP, the chromophore is the result of an autocatalytic reaction that occurs after the protein has been synthesized.

6. Fluorescence Labeling Techniques
• Indirect immunofluorescence labeling steps: cell growth, fixation, permeabilization, primary antibody binding, washing, secondary antibody labeling, and mounting.

Indirect immunofluorescence labeling is a widely used technique in cell biology and pathology to detect and localize specific antigens within cells or tissues. Here are the typical steps involved in this process:
1. Cell Culture or Tissue Preparation:
  ○ Cells are grown on a glass coverslip, or tissue sections are prepared on a slide.
2. Fixation:
  ○ The cells or tissue sections are fixed to the slide using a fixative such as formaldehyde or methanol. Fixation stabilizes cellular structures and prevents loss of antigenicity.
3. Permeabilization (if necessary):
  ○ If intracellular antigens are to be detected, the cells are permeabilized using a detergent or enzyme to allow antibodies to penetrate the cell membrane.
4. Blocking:
  ○ Non-specific binding sites are blocked with a protein solution (e.g., bovine serum albumin, normal goat serum) to reduce background staining.
5. Primary Antibody Incubation:
  ○ The primary antibody, which is specific to the antigen of interest, is applied to the slide and incubated for a period of time to allow it to bind to the antigen.
6. Washing:
  ○ Unbound primary antibody is washed away with a buffer solution to reduce background noise.
7. Secondary Antibody Incubation:
  ○ A secondary antibody that is specific to the species in which the primary antibody was raised is applied. This secondary antibody is conjugated with a fluorophore, enabling the detection of the antigen.
8. Washing:
  ○ After incubation with the secondary antibody, the slide is washed again to remove any unbound secondary antibody.
9. Mounting:
  ○ The slide is mounted using a medium that preserves the fluorescence and matches the refractive index of the immersion medium to prevent light scattering.
10. Imaging:
  ○ The slide is then examined under a fluorescence microscope. The fluorophore attached to the secondary antibody will emit light when excited by the appropriate wavelength, allowing the visualization of the antigen.
11. Analysis:
  ○ The resulting fluorescence image is analyzed to determine the location and distribution of the antigen.

Numerical Aperture (NA)
• Importance: Determines the resolution of a microscope system.
• Formula: NA = n × sin α, where n is the refractive index of the medium and α is the half-cone angle. The half-cone angle can never reach 90 degrees.
• Influences: Brightness of the image and spatial resolution.
• Immersion Media: Different media increase NA, enhancing image quality (air, water, glycerol, immersion oil).
• As the focal length decreases (i.e., at higher magnifications), the half-cone angle increases; objectives with high magnifications are therefore made to have high NAs.
• The brightness of the image increases when NA increases and decreases when magnification increases.

High-magnification objectives usually have higher NAs to maintain or improve the resolution and light-gathering ability at higher magnifications. However, the two factors, NA and magnification, still affect image brightness in different ways, even though they often change together when you switch objectives. To clarify the effects on image brightness:
1. Effect of Increasing NA: When you switch to an objective with a higher NA (and typically higher magnification), the light-gathering ability increases because the objective can now collect light over a larger angle. This results in a brighter image, assuming all other factors remain constant.
2. Effect of Increasing Magnification: At the same time, as magnification increases, you're looking at a smaller area of the sample. If the total amount of light collected remains the same, that light is now spread over a larger image area, which can reduce the perceived brightness.
The statement from the lecture notes reflects these considerations:
• As NA increases: the brightness of the image can increase because more light is being collected over a larger solid angle.
• As magnification increases: the brightness of the image can decrease because the collected light is spread over a larger image area while a smaller area of the sample is viewed.
In practice, when you switch to a higher-magnification objective with a higher NA, you often get a brighter image due to the increased NA, but the increase in brightness might not be as large as you'd expect, because the higher magnification reduces the perceived brightness.
lamps to excite a range of fluorophores, although more specific light sources are often within the cells or tissues.
preferred for this purpose.
• Multi-color labeling to differentiate biomolecules using different fluorophores.
○ Illumination for General Observation:They provide a broad and even light source for
general observation and documentation.
Summary
• He-Ne lasers are preferred for applications that require a specific, monochromatic light source
with high coherence and collimation, such as confocal microscopy and certain types of
spectroscopy.
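The NA-versus-magnification trade-off above can be put in rough numbers. A common rule of thumb for epi-fluorescence, not stated in the notes and used here only as an assumption, is that image brightness scales as NA⁴/M² (NA enters twice, for excitation and collection, while magnification spreads the light over M² more area):

```python
# Rule-of-thumb relative brightness for epi-fluorescence (assumed
# scaling, not from the lecture notes): B is proportional to NA**4 / M**2.
def relative_brightness(na, mag):
    return na**4 / mag**2

b40 = relative_brightness(0.75, 40)    # hypothetical 40x / NA 0.75 dry objective
b100 = relative_brightness(1.40, 100)  # hypothetical 100x / NA 1.40 oil objective

# Higher NA wins, but the gain is tempered by the higher magnification.
print(b100 / b40)  # ~1.94x brighter, not the ~3.5x that NA**4 alone suggests
```

The objective specifications are invented for illustration; real brightness also depends on transmission, coatings, and the camera.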
Quick Notes Page 1
Lecture 5: Super-resolution and F-techniques / Lecture 6: Image acquisition and manipulation / Lecture 7: Basic Image Analysis
Super-resolution Microscopy / 1. Analog and Digital Signal / Dynamic Range and Pixel Saturation
Types of Microscopy
1. Bright Field Microscopy: This is a traditional form of microscopy where a specimen is illuminated by a light source, and the image is formed by the light that is scattered or reflected by the specimen into the objective lens.
2. Wide Field Microscopy: This technique involves the use of a light source to illuminate the entire field of view at once. It is a common method for general fluorescence microscopy, where the specimen is viewed in a single exposure.
3. Confocal Microscopy: This is an advanced form of fluorescence microscopy that uses a pinhole to eliminate out-of-focus light or "blur," resulting in a series of optical sections that can be stacked to create a three-dimensional image. It is not considered a wide field technique because it scans the specimen point by point, rather than illuminating the entire field at once.
4. TIRF Microscopy: This is a specialized form of fluorescence microscopy that uses total internal reflection to create an evanescent field, which excites fluorophores only within a very thin (typically 100-200 nm) region near the interface. It is used for studying cellular structures close to the plasma membrane and is also not a wide field technique.
Detectors for Wide-Field Epi-Fluorescence Microscopy
Eyes
Film (Obsolete)
CCD or CMOS (superior to photographic film due to):
• High sensitivity (single photon detection)
○ CCDs and CMOS sensors are capable of detecting even a single photon of light, which means they can capture images in very low light conditions. This is much more sensitive than traditional film, which requires a certain amount of light to expose the film properly.
• Wide spectrum and linear response
○ These sensors can respond to a wide range of light wavelengths (from ultraviolet to infrared) and provide a linear response, meaning that the output signal is directly proportional to the input light intensity over a broad dynamic range.
• Instant results
○ Unlike film, which must be developed in a darkroom before the image can be seen, digital sensors provide instant results. As soon as the image is captured, it can be viewed on a screen or processed by a computer.
• Digital image production, suitable for computer processing
○ CCD and CMOS sensors produce images in a digital format. This means the images can be easily manipulated, enhanced, and analyzed using computer software, which is not possible with traditional film without digitizing it first.
High-performance CCDs and CMOS sensors are monochromatic (black and white) but capable of responding to multiple wavelengths; colored images are generated through filtering (use a filter per channel to generate a colour image).
The Charge-Coupled Device (CCD)
Super-resolution Approaches
1. Engineered PSF (Point Spread Function): Techniques like STED (Stimulated Emission Depletion) microscopy are used to make the PSF smaller.
2. Single Molecule Localization: Techniques like PALM (Photoactivated Localization Microscopy) and STORM (Stochastic Optical Reconstruction Microscopy) increase localization precision without changing the PSF.
Super-resolution microscopy is a collection of advanced microscopy techniques that overcome the resolution limits of conventional light microscopy. These techniques allow scientists to visualize biological structures and cellular processes at a much higher resolution than what is possible with traditional light microscopes.
Here are the key points about super-resolution microscopy:
1. Resolution Limit: Traditional light microscopy is limited by the diffraction of light, which sets a resolution limit of about 200 nanometers (200 nm) for visible light. This is not sufficient to resolve many cellular structures and processes.
2. Overcoming the Limit: Super-resolution microscopy techniques circumvent this diffraction limit by using various strategies to achieve higher spatial resolution, often reaching the range of tens of nanometers.
Dynamic Range and Pixel Saturation
• Dynamic Range: The range of light intensities that a camera can measure (e.g., CCD or PMT).
• Avoiding Pixel Saturation: Adjusting exposure time to prevent saturation, not brightness.
Image Interpretation and 3D Reconstruction
• 2D vs. 3D: It is challenging to infer the 3D organization of cellular structures from 2D images.
• Confocal Microscopes: Generate thin optical sections, aiding 3D reconstruction.
• Example: Live HeLa cell with GFP-Sec61β expression, visualized with a confocal microscope.
Sectioning is not always required for confocal microscopy, but it is needed when obtaining the detailed three-dimensional (3D) information that confocal imaging can provide. Here's a more detailed explanation:
1. Thin Sections for 3D Reconstruction: When the goal is to reconstruct a 3D volume from a series of 2D images, sectioning is necessary. This is common in cell biology, where researchers want to understand the spatial organization of structures in three dimensions.
2. Optical Sectioning: Confocal microscopy itself performs optical sectioning. It uses a pinhole to eliminate out-of-focus light, producing an image at a specific plane within the sample. By taking images at successive planes, a 3D image can be constructed.
3. Live Cell Imaging: In live cell imaging, where cells are maintained without fixation or embedding, physical sectioning is not performed. Instead, cells are imaged in their native state, and any sectioning is purely optical.
4. Thick Samples: For thick, non-sectioned samples, confocal microscopy provides better depth resolution and contrast compared to traditional microscopy. To cover the sample's depth, multiple optical sections would be taken.
5. Type of Sample: The need for sectioning also depends on the sample; with some single cells or thin tissue cultures, physical sectioning is unnecessary because the entire sample can be in focus at once.
In summary, while physical sectioning is a common practice in some applications of confocal microscopy, it is not a mandatory step in every application. The need for sectioning depends on the research question and the type of sample.
Reliability of 2D Image Interpretation
• Limitations:
○ A 3D sheet may appear as a sheet or a line in 2D.
○ A 3D line may appear as a line or a dot in 2D.
▪ A dot could represent a vesicle or tubule.
• Analog Signal: Continuous signal over time (e.g., sine wave).
• Digital Signal: Discontinuous, represented by discrete values.
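The analog/digital distinction above can be sketched in a few lines. This is an illustrative example of sampling and 8-bit quantization; the sine-wave "signal" and the sample count are my assumptions, not something from the notes:

```python
import math

# Digitization sketch: sample a continuous "analog" signal at discrete
# intervals, then quantize each sample to an 8-bit integer (0-255),
# roughly what a camera's ADC does.
def digitize(signal, n_samples, bits=8):
    levels = 2**bits - 1
    samples = []
    for i in range(n_samples):
        t = i / n_samples
        v = signal(t)                      # continuous value in [0, 1]
        samples.append(round(v * levels))  # discrete integer level
    return samples

def analog(t):
    # A sine wave rescaled into [0, 1] stands in for light intensity.
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

pixels = digitize(analog, 16)
print(pixels)  # 16 integers between 0 and 255: no decimals, no negatives
```

Raising `bits` to 12 or 16 gives 4096 or 65536 levels, which connects directly to the dynamic range discussion later in the notes.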
Analog and digital signals are two different ways of representing information, particularly in the context of electronic communication and data transmission.
Analog Signal:
• An analog signal is a continuous signal that can take on any value within a given range. It is called "analog" because it is analogous to some other quantity, such as sound or light waves.
• In the context of sound, an analog signal might represent the continuous variations in air pressure that we perceive as sound.
• Analog signals are subject to degradation over distance and are more susceptible to interference, which can lead to a loss of quality.
Digital Signal:
• A digital signal, on the other hand, is a discrete signal that consists of distinct values or states. It is typically represented by a series of digits, which is why it is called "digital."
• Digital signals are often used in computing and telecommunications because they can be more easily processed and manipulated by digital electronics.
• In digital imaging, for example, an image is represented by a series of pixels, where each pixel has a specific integer value that corresponds to its intensity or color information.
• Digital signals are less prone to degradation over distance and are more resistant to interference, making them more reliable for long-distance transmission and storage.
The transition from an analog signal to a digital signal is known as digitization. This process involves sampling the analog signal at discrete intervals and converting these samples into digital values. This is a fundamental concept in digital imaging, where continuous images (like the real-world scene) are captured and represented as a series of discrete pixels in a digital image.
2. Digital Imaging
• Microtubule intensity can vary either continuously (analog) or discontinuously (digital) in space.
• Conversion from analog to digital involves pixels with integer values: no decimal points, and positivity.
Obtain more than 100 sections (xy, yz, xz) of images.
Colocalization Studies
• Importance: To identify the nature of cellular structures, colocalization analysis is necessary.
• Color Rules:
○ Yellow for colocalization in red and green images.
○ White for colocalization in red, green, and blue images.
• Markers: Using different markers like GalT-mCherry and Rab6-GFP to assess colocalization.
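The color rules above can be made concrete with a toy example. The arrays and the threshold are invented for illustration; "yellow" pixels are simply those above threshold in both the red and green channels:

```python
import numpy as np

# Toy colocalization check on two tiny 2x2 "channels" (values made up).
red   = np.array([[200,  10], [180,  20]])
green = np.array([[190, 210], [ 15,  25]])
thr = 100

red_mask, green_mask = red > thr, green > thr
coloc = red_mask & green_mask          # "yellow" pixels in a merged image

# Manders-style overlap: fraction of red-positive pixels that are
# also green-positive.
overlap = coloc.sum() / red_mask.sum()
print(coloc)
print(overlap)  # 0.5: one of the two red-positive pixels is also green
```

Real colocalization analysis would use intensity-weighted coefficients (e.g., Manders or Pearson) rather than a single hard threshold, but the masking logic is the same.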
3. Dynamic Range in Imaging
• Dynamic Range of Sensor: Number of possible light intensities an image sensor can measure (e.g., CCD or PMT).
• Dynamic Range of Image: Number of possible pixel values.
• Higher dynamic range allows for more precise intensity measurement.
• Expressed in bits (e.g., 8-bit = 256 levels, 12-bit = 4096 levels, 16-bit = 65536 levels).
Basic Morphometric Analysis
• Length Measurement: Using stage rulers or optical rulers to calibrate distances.
• Pixel Size Calculation: Fixed in wide-field and varies with scan settings in confocal.
Working Principle of the CCD: Analogous to a grid of buckets collecting rainwater (photons converted to photoelectrons).
Parameters of the CCD
• Quantum Efficiency (QE): Ratio of photoelectrons to incident photons. Lower than one.
• Full Well Capacity: Maximum photoelectrons a sensor can store.
• Dynamic Range: Range of values a pixel can capture, often 8-bit, 12-bit, or 16-bit.
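The CCD parameters above fit together in a simple way; a minimal sketch, with typical-looking but assumed values for QE and full well capacity:

```python
# Sketch relating QE, full well capacity, and bit depth (numbers assumed).
def detected_electrons(photons, qe=0.7, full_well=18000):
    e = photons * qe             # QE < 1: not every photon converts
    return min(e, full_well)     # saturation clips at the full well

print(detected_electrons(1000))   # 700.0 photoelectrons
print(detected_electrons(50000))  # 18000: saturated, extra photons are lost

# Dynamic range of the digitized image depends on bit depth:
for bits in (8, 12, 16):
    print(bits, 2**bits)          # 256, 4096, 65536 gray levels
```

Once the well is full, two different light levels produce the same pixel value, which is why saturation destroys quantifiable information.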
• Area and Volume Calculation:
○ 2D Image: Area = (pixel number) × (area of one pixel).
○ 3D Image/Stack: Volume calculated using Zu-Geng's or …
Noises in CCD Image
• Types of Noise:
a. Photon Noise (or Shot Noise):
▪ Definition: Photon noise is the inherent statistical variation in the number of photons (light particles) that are incident on the sensor or film. It is a fundamental source of noise that is a result of the quantum nature of light.
▪ Characteristics: It is independent of the sensor technology and is a natural limit to the noise in any imaging system. Photon noise follows a Poisson distribution, which means that the standard deviation of the number of photons or photoelectrons generated in each pixel is equal to the square root of the mean number of photons (√n).
▪ Relevance: It is most significant in low-light conditions where the number of photons is small, and the statistical variation is relatively large.
b. Dark Noise:
▪ Definition: Dark noise, also known as thermal noise or dark current, is the electronic noise generated within the sensor itself, even when no light is hitting the sensor. It is caused by the thermal energy in the sensor, which can cause the generation of electron-hole pairs.
▪ Characteristics: Dark noise is present regardless of light exposure and can be reduced by cooling the sensor, as lower temperatures decrease the thermal energy and thus the noise.
▪ Relevance: It is particularly a concern in long exposure times, where the accumulation of dark noise can significantly affect the image quality.
c. Read Noise:
▪ Definition: Read noise is the electronic noise that is introduced during the process of reading out the signal from the sensor. It includes noise from the amplifiers and the analog-to-digital conversion process.
▪ Characteristics: Read noise is typically constant for a given sensor and does not change with the intensity of the incoming light. It is measured in electrons and is a key factor in determining the sensor's signal-to-noise ratio (SNR).
▪ Relevance: Read noise can limit the detectability of low-intensity signals, particularly in high-precision applications such as astrophotography or scientific microscopy.
Out of Focus Blur in Wide-Field Microscopy
Blur results from fluorophores outside the focal plane contributing to the image through PSF projections.
4. Image Quality and Sensor Dynamic Range
• Image quality is affected by the number of levels (dynamic range) that can be distinguished.
• Saturation occurs when the photoelectron count exceeds sensor capacity, leading to a loss of quantifiable information.
• To avoid saturation: adjust acquisition parameters, use less gain, reduce exposure time, and manage excitation light intensity.
5. Image Histogram
• Graphical representation with intensity or pixel value on the X-axis and the number of pixels within a bin on the Y-axis.
○ Photoactivated Localization Microscopy (PALM): Involves the stochastic (random) activation of individual fluorophores, which can then be localized with high precision. By successively imaging and localizing individual molecules, a super-resolved image is constructed.
PALM, which stands for Photoactivated Localization Microscopy, is a super-resolution microscopy technique that allows for the precise localization of individual fluorescent molecules within a sample. Here's a step-by-step explanation of how PALM works:
1. Use of Special Fluorophores: PALM utilizes photoactivatable fluorescent proteins (PA-FPs) or dyes. These fluorophores are initially in a non-fluorescent state and can be activated by a specific wavelength of light (activation beam).
2. Initial State: At the start of the imaging process, all fluorophores within the sample are in their dark, non-fluorescent state.
3. Stochastic Activation: A small subset of these fluorophores is randomly activated by the activation beam. The term "stochastic" implies that the activation is random and not all fluorophores are turned on at once.
4. Localization: Once activated, the position of each individual fluorophore is determined with high precision. This is typically done by fitting the point spread function (PSF) of the activated molecule to a Gaussian function or by calculating the center of mass of the fluorescence signal. This step allows researchers to pinpoint the location of each molecule to within tens of nanometers.
5. Photobleaching: After their localization, the activated fluorophores are then switched off or photobleached using a different wavelength of light. Photobleaching ensures that the same molecule will not be imaged again, allowing for the next round of activation and localization.
6. Repetition: Steps 3 to 5 are repeated many times, creating a time series of images. Each image captures the positions of a different random subset of fluorophores.
7. Image Reconstruction: The data from all these images is then computationally combined to create a super-resolved image. Because the same molecule is not imaged more than once, a very dense and precise map of the distribution of fluorophores within the sample can be constructed.
8. 3D Imaging: PALM can also be performed in three dimensions by using multiple focal planes and computational methods to reconstruct the 3D distribution of fluorophores.
9. Applications: PALM is particularly useful for studying the nanoscale organization of proteins within cells, protein-protein interactions, and the architecture of cellular structures like the cytoskeleton.
PALM, along with other super-resolution techniques like STORM, has significantly contributed to our ability to visualize cellular structures at the molecular level, providing insights into cellular organization and dynamics that were not possible with traditional light microscopy.
○ Stochastic Optical Reconstruction Microscopy (STORM): Similar to PALM, STORM also uses single-molecule localization but does not require the fluorophores to be photoactivatable. Instead, it uses a high concentration of non-fluorescent molecules and a low concentration of fluorescent ones, which are then activated and localized.
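The localization step described for PALM (step 4 above) can be illustrated with the center-of-mass approach: treat the PSF as an object whose "mass" at each pixel is its fluorescence intensity. The 3×3 spot below is an invented toy example:

```python
import numpy as np

# Center-of-mass localization sketch (illustrative intensity values).
spot = np.array([[1.0,  2.0, 1.0],
                 [2.0, 10.0, 4.0],
                 [1.0,  2.0, 1.0]])

ys, xs = np.indices(spot.shape)        # per-pixel row/column coordinates
total = spot.sum()
cy = (ys * spot).sum() / total         # intensity-weighted centroid row
cx = (xs * spot).sum() / total         # intensity-weighted centroid column
print(cy, cx)  # (1.0, ~1.083): the extra intensity at [1, 2] pulls it right
```

The precision of such an estimate improves with the number of collected photons (roughly as 1/√N, consistent with the σ-versus-n point made later in the notes), which is why brighter molecules localize better.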
4. Localization Precision: In both PALM and STORM, the position of individual molecules is determined with nanometer precision, often by fitting the point spread function (PSF) to a Gaussian function or by calculating the center of mass of the fluorescence signal.
5. Applications: Super-resolution microscopy has a wide range of applications in cell biology, including the study of protein interactions, cellular organelle dynamics, and the structure of biological macromolecules.
6. Technical Requirements: These techniques often require specialized equipment, software for image analysis, and sophisticated sample preparation methods.
Single Molecule Localization Techniques
• Gaussian Fitting: Fitting the Airy disk (2D) or PSF (3D) with a 2D Gaussian function to determine the position of a single molecule at nanometer resolution.
• Center of Mass: Using the concept of center of mass to locate the precise position of a single molecule, where the PSF is considered as a physical object with mass equivalent to its fluorescence intensity.
Precision in Single Molecule Localization
• Precision Representation: The precision is denoted by σ (standard deviation), which approaches zero as the number of photons (n) increases.
• Analogy: A shooting-game analogy explains why more photons result in higher precision.
What is FIONA?
FIONA, which stands for Fluorescence Imaging with One Nanometer Accuracy, is a super-resolution microscopy technique that is part of a family of methods known as localization microscopy. Like PALM and STORM, FIONA also relies on the precise localization of single fluorescent molecules to achieve a resolution far beyond the diffraction limit of light.
Here's a general overview of how FIONA works:
1. Sample Preparation: The biological sample is labeled with high-affinity fluorescent dyes or proteins that can be specifically bound or fused to the target molecules within the cell.
2. Immobilization: The sample is typically fixed and immobilized on a slide to prevent movement during imaging.
3. Activation and Localization: The fluorescent molecules are activated one by one, either stochastically or through a controlled process, and their precise location is determined. This is often achieved by exciting the fluorophores with a laser and then using advanced algorithms to determine the center of the point spread function (PSF) for each activated molecule.
4. High Precision: FIONA aims to achieve localization precision of about one nanometer, which is significantly better than the typical 20-30 nanometer resolution achieved by conventional light microscopy.
5. Data Accumulation: The process is repeated thousands or millions of times, accumulating a large dataset of molecular positions.
6. Image Reconstruction: The collected data is used to reconstruct a super-resolved image of the sample, where each point of light represents the precise location of an individual fluorescent molecule.
7. 3D Imaging: FIONA can also be extended to three dimensions by acquiring images at different focal planes and computationally reconstructing the 3D distribution of the labeled molecules.
8. Applications: FIONA is useful for studying molecular interactions, protein complexes, and the organization of cellular structures with high spatial precision.
An image histogram is a graphical representation that shows the distribution of pixel intensities or pixel values in an image. The purpose of an image histogram includes:
1. Visualizing Intensity Distribution: It provides a visual summary of the spread of pixel intensities, showing how many pixels correspond to each possible intensity value.
2. Assessing Image Contrast: By looking at the histogram, one can quickly determine if an image has high contrast (a histogram that spreads across the entire range of possible values) or low contrast (a histogram that is bunched up in a small range).
3. Identifying Overexposure and Underexposure: A histogram can indicate if parts of the image are overexposed (too much white, indicated by a spike at the right side) or underexposed (too much black, indicated by a spike at the left side).
4. Adjusting Image Settings: Photographers and image editors use histograms to adjust settings such as exposure, brightness, and contrast to achieve the desired look for an image.
5. Diagnosing Image Quality: In scientific imaging, histograms can help identify if the image has been correctly exposed or if there is a loss of detail due to saturation or extreme dark areas.
6. Post-Processing Guidance: During post-processing, histograms are used to guide adjustments that can optimize the image's appearance and ensure that the full dynamic range of the image is utilized.
7. Machine Learning and Analysis: In computer vision and image analysis, histograms can be used to extract features from images, which can be useful for tasks like object recognition or image classification.
8. Non-Destructive Editing: Since looking at a histogram doesn't alter the image, it's a non-destructive way to evaluate and plan the editing process.
9. Consistency in Batch Processing: When processing multiple images, histograms can help ensure consistency across various images by targeting similar intensity distributions.
10. Technical Assessment: In professional settings, such as in print or film production, histograms are used to make sure that images meet technical standards for brightness and contrast.
Determining whether a histogram is "good" in the context of imaging depends on the specific goals and requirements of the image. However, there are some general guidelines and characteristics that can indicate a well-exposed and well-distributed histogram:
1. Full Utilization of Dynamic Range: A good histogram will spread across the majority of the dynamic range without bunching up at either end. This suggests that the image has a good balance of highlights and shadows, and it is using the full tonal range available.
2. No Clipping: There should be no spikes touching the very right or left edges of the histogram, which would indicate that highlights or shadows are "clipped" or overexposed/underexposed. Clipped areas can result in loss of detail.
3. Balanced Distribution: The graph should be relatively balanced, with a natural distribution of tones from darks to lights. A skewed histogram might indicate an image that is too dark or too light overall.
4. Detail in Shadows and Highlights: A good histogram will show detail in the shadows.
Image Segmentation by Thresholding
• Process: Selecting certain pixels (ROI) based on an intensity threshold.
• Outcomes: Object (≥ threshold) and background (< threshold) pixels.
• Binary Image/ROI: Generated for morphometric analysis.
Morphometry is a branch of biology that involves the measurement of biological structures. In the context of image analysis and microscopy, it refers to the quantitative measurement and analysis of the geometric properties of an image. This can include cells, tissues, or other biological entities.
Here are some key aspects of morphometric analysis:
1. Measurement of Length, Area, and Volume: Morphometric analysis involves measuring the dimensions of structures, for instance, the length of cells, the area of a tissue section, or the volume of organelles.
2. Pixel-Based Calculations: In digital images, morphometric measurement is done by counting pixels. The size of each pixel in real-world units is known, and this calibration factor is used to convert pixel measurements into physical dimensions.
3. Image Segmentation: Before morphometric analysis, images are segmented, where the region of interest (ROI) is isolated from the rest of the image using various techniques, including thresholding, which separates regions based on intensity differences.
4. Feature Extraction: Morphometric analysis may involve extracting features, such as the number of cells, or the size and shape of organelles within a cell.
5. Statistical Analysis: The data obtained from morphometric measurements is subjected to statistical analysis to understand the variability and distribution of different morphological parameters.
6. Applications: Morphometric analysis is used in various fields:
○ Cell Biology: To study cell growth and differentiation.
○ Anatomy: For measuring the dimensions of organs.
○ Pathology: To diagnose diseases based on the size and shape of cells or tissues.
○ Ecology and Taxonomy: To measure the physical traits of organisms for species identification and ecological studies.
7. Tools and Software: There are numerous software tools that can automate or assist with these measurements.
Visual Explanation of Segmentation
• 2D Intensity Plot: Intensity as the Z-axis, with bright regions as objects and those below the threshold as background.
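The thresholding and area-measurement steps above can be sketched in a few lines. The toy image and the pixel size are invented for illustration:

```python
import numpy as np

# Thresholding segmentation sketch on a tiny 3x3 "image".
img = np.array([[10,  12, 200],
                [11, 220, 210],
                [ 9,  10,  11]])
threshold = 100

binary = np.where(img >= threshold, 255, 0)  # object = 255, background = 0
pixel_area_um2 = 0.01                        # assume 0.1 um x 0.1 um pixels

object_pixels = int((binary == 255).sum())
area = object_pixels * pixel_area_um2        # area = pixel count x pixel area
print(binary)
print(object_pixels, area)                   # 3 pixels -> 0.03 um^2
```

In practice, the threshold would be chosen from the image histogram (or with an automatic method such as Otsu's), and the binary mask would feed directly into the morphometric measurements described above.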
FIONA vs PALM
FIONA (Fluorescence Imaging with One Nanometer Accuracy) and PALM (Photoactivated Localization Microscopy) are both super-resolution microscopy techniques that rely on the precise localization of individual fluorescent molecules. However, there are some differences in their approaches and applications:
1. Fluorophore Activation:
○ PALM: Uses photoactivatable fluorescent proteins (PA-FPs) or dyes that can be switched from a non-fluorescent to a fluorescent state by light.
○ FIONA: Typically employs organic dyes that are directly excited to a higher energy state from where they can emit light. The specific method of fluorophore activation can vary, but it does not necessarily rely on photoactivation as in PALM.
2. Localization Precision:
○ PALM: Aims to achieve nanometer-scale precision in localizing individual molecules, but the typical precision is around 10-20 nanometers.
○ FIONA: As the name suggests, it aims for even higher precision, with the goal of achieving near 1-nanometer accuracy in localization.
3. Imaging Process:
○ PALM: Involves a cycle of activating a subset of fluorophores, localizing them, and then photobleaching them so they do not interfere with subsequent rounds of imaging.
○ FIONA: The specifics can vary, but it generally involves a similar cycle of activation, localization, and inactivation of fluorophores. The key difference lies in the potential for higher precision and possibly different strategies for achieving this.
4. Data Analysis:
○ PALM: Uses algorithms to determine the center of the PSF for each molecule, which is then used to reconstruct the super-resolved image.
○ FIONA: May employ more advanced or different algorithms to achieve the claimed 1-nanometer accuracy. The analysis techniques are critical to extracting the highest possible resolution from the data.
5. Applications:
○ Both techniques are used to study the nanoscale organization of cellular structures and molecular interactions. However, the higher precision achievable with FIONA could potentially offer even more detailed insights into biological processes.
6. Technical Requirements:
○ PALM: Requires specific photoactivatable fluorophores and a setup capable of precise control over the activation and imaging light.
○ FIONA: May require more sophisticated imaging and analysis techniques to achieve the higher precision, including potentially more advanced microscopy hardware and software.
F-techniques in Microscopy
• FRAP (Fluorescence Recovery After Photobleaching): Used for live cell imaging to measure the kinetics of molecular diffusion and to determine the continuity of cellular structures like the ER (Endoplasmic Reticulum).
6. Maximizing Dynamic Range Utilization
• Utilize the dynamic range efficiently by adjusting parameters so that the maximum image value is about 50%-70% of the dynamic range.
7. Types of Image Acquisition
• Time lapse
• Monochrome
• Multi-color
• 3D (multi-z sections)
• Single-color
• Multi-color 3D time lapse
8. Monochrome and Multi-color Imaging
• Monochrome: Single channel; example: ER in BSC1 cells expressing GFP-Sec61β.
• Multi-color: Multiple channels, each with its own color; example: Golgi complex and ER exit sites in BSC1 cells, merging the images later.
9. 3D Imaging
• Involves capturing images at different depths (z-stacks) with a consistent z-step distance.
• XY, XZ, YZ
10. Time Lapse Imaging
• Captures images at regular time intervals; requires live cell imaging and maintaining a constant time interval.
Time-lapse imaging is a photographic technique used to capture the appearance or behavior of subjects over a period of time, where a series of images are taken at regular intervals. Here's how it generally works and how photobleaching can affect it:
How Time-Lapse Imaging Works:
1. Intervalometer: A device or software feature that triggers the camera at set time intervals to capture images automatically.
2. Consistent Conditions: The subject and camera settings (focus, exposure, white balance, etc.) are kept consistent between shots to ensure continuity.
3. Capture: The camera takes a series of individual images over a set period, which can range from seconds to hours or even days.
4. Stacking: The individual images are then stacked or compiled to create an animation or video, giving the illusion of time being compressed.
5. Software: Specialized software is often used to process the sequence of images into a time-lapse video.
6. Live Cell Imaging: In scientific applications, such as cell biology, time-lapse microscopy is used to observe cellular processes over time without disturbing the cells.
Photobleaching and Its Effects on Time-Lapse Imaging:
1. Photobleaching: This is a process where the fluorescence from a specimen fades over time due to repeated exposure to light, particularly from the illumination source used in fluorescence microscopy.
ROI Generation and Manipulation
• ROI (Region of Interest): Specific areas of interest within an image.
• Methods: Image segmentation or manual drawing.
• Binary Image: Commonly uses values 0 and 255.
Analysis of Nuclei by Hoechst 33342
• Fluorescence: Hoechst 33342 labels DNA, emitting blue fluorescence.
• Intensity Contribution: Nucleus intensity comes from chromatin DNA.
• Background Subtraction: Subtracting a constant background intensity.
3D Quantification in Nuclei Analysis
• Quantifiable Features:
○ Number of nuclei (cell count).
○ Area of each nucleus (nucleus size).
○ Mean intensity of each nucleus (chromatin density).
○ Total intensity of each nucleus (total chromatin content).
GLIM: Golgi Localization Method
• Nocodazole Treatment: Induces Golgi mini-stacks formation.
• Localization Quotient (LQ): A measure used to calculate Golgi localization.
• Criteria for Selection:
○ Signal-to-noise ratio ≥ 30.
○ Axial angle criteria: d1 ≥ 70 nm.
○ Co-linear criteria: |tanα| or |tanβ| ≤ 0.3.
High-Resolution Golgi Protein Mapping
• Resolution: ~30 nm, allowing densely packed Golgi proteins to be mapped.
• Kinetics: Quantitative analysis of RUSH system reporters and …
Intra-Golgi Trafficking Model
• Golgi Complex: Linear structure with cargos entering at the cis side.
• Secretory Cargos: Exit before reaching the TGN, with TGN ta…
Confocal Microscope
Principle: Removes out-of-focus light using point illumination and point collection.
Scanning Mechanism: Point-by-point scanning, with pixel dwelling time in microseconds.
Features of the Confocal Microscope
• Epi-fluorescence illumination with a laser.
• Pinhole for confocality, with a trade-off between light collection and confocal quality.
• Photomultiplier tube (PMT) for detection, which amplifies signals but lacks spatial resolution.
Image Acquisition in a Confocal Microscope
• Pixel dwelling time is controlled by scanning speed.
• Resolution can be adjusted through zoom levels (e.g., 128x128, 256x256, 512x512). 2. Fluorescent Molecules: Photobleaching primarily affects samples that use
• Manipulation of pinhole size and laser power affects image quality. • fluorescent markers or dyes, as these molecules are more susceptible to light-
induced degradation.
PMT 3. Impact on Time-Lapse: In time-lapse imaging, especially in fluorescence
A Photomultiplier Tube (PMT) is a highly sensitive electronic device that is used to detect and microscopy, photobleaching can reduce the fluorescence signal over time, leading
amplify light signals, particularly in low-light conditions. It is commonly used in various scientific to a decrease in image quality and potentially affecting the scientific outcome of the
applications, including fluorescence microscopy, as well as in medical and industrial settings. study.
Here's how a PMT works and its key features: 4. Mitigation Strategies:
○ Minimize Light Exposure: Use the lowest light intensity necessary to obtain
Working Principle: a good image.
1. Photoelectric Effect: When light (photons) strikes the photocathode, which is a light - ○ Optimize Exposure Time: Shorter exposure times reduce the chance of
sensitive material, it can eject electrons through the photoelectric effect. photobleaching.
2. Amplification: The emitted electrons are accelerated towards a series of electrodes ○ Use Anti-Fade Reagents: These can help to protect the fluorescent
Mitotic ER is continuous *It discusses the default exit of secretory cargos before reaching t
called dynodes by an electric field. Each dynode is at a higher voltage than the previous molecules from degradation.
one. • FLIP (Fluorescence Loss In Photobleaching): Compared to FRAP, used to demonstrate the continuous connection of to the TGN being signal-dependent, which adds a layer of regulat
cellular structures. ○ Cooling the Specimen: Lower temperatures can slow down the rate of
3. Secondary Electron Emission : As the electrons strike each dynode, they cause the photobleaching. *The use of molecular markers (like VSVG, E -cadherin, TNFα, C
emission of multiple secondary electrons. This process is repeated at each dynode, ○ Use of Photo-Stable Dyes: Some fluorescent dyes are more resistant to protein trafficking within the Golgi could be a focus of the lecture
leading to a cascade effect that significantly amplifies the original signal. Dynode become photobleaching than others. different cargo molecules are sorted and transported.
more and more positively charged. 5. Non-Fluorescent Samples: If the time-lapse imaging does not involve
4. Anode Collection: The final stage collects the amplified electrons at the anode, which is Epilogue
fluorescence, photobleaching is not a concern.
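The dynode cascade in a PMT multiplies the signal geometrically: if each of n dynodes emits on average δ secondary electrons per incident electron, the overall gain is δⁿ. A minimal sketch (the δ = 4 and n = 10 values are illustrative, not from the lecture):

```python
# Illustrative PMT gain model: each dynode multiplies the electron count
# by a secondary-emission factor delta; n dynodes give gain = delta**n.
def pmt_gain(delta: float, n_dynodes: int) -> float:
    """Overall multiplication factor of a PMT dynode chain."""
    return delta ** n_dynodes

# Example: 10 dynodes emitting ~4 secondary electrons each give ~10^6 gain,
# so a single photoelectron becomes a measurable current pulse.
print(f"{pmt_gain(4, 10):.2e}")  # 1.05e+06
```

This geometric growth is why a PMT can detect single photons even though it records no spatial information.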
Quick Notes Page 2
Lecture 8 and 9: Bioimaging Electron Microscopy
Tutorial Questions:
Lecture Overview: 1. What are the two major differences between SEM and TEM?
Lecture 10: Medical Imaging
Lecture 11: SPECT and PET
1. Overview of Medical Imaging
1. Overview of Nuclear Medicine and Imaging
1. Basic principles of SEM (Scanning Electron Microscopy) and TEM (Transmission Electron Microscopy) 2. How do you prepare samples for conventional TEM, and what are the pros and cons?
• Non-invasive visualization of internal organs and tissues. • Nuclear Medicine: Medical specialty using radioactive substances for diagnosis and treatment.
2. Chemical fixation and conventional TEM 3. Name three approaches/techniques to obtain 3D information of cells or tissues. The term "non-invasive visualization of internal organs and tissues" refers to medical imaging Nuclear medicine is a branch of medical imaging that uses small amounts of radioactive materials, known
3. 3D EM volume imaging techniques (TEM- and SEM-based) 4. Explain two methods to localize a protein of interest by TEM. techniques that allow healthcare professionals to see and examine the inside of the body without as radiotracers or radiopharmaceuticals, to diagnose and treat diseases. The field harnesses the emission
4. Methods to localize/visualize proteins in TEM 5. Which EM-based approach would you use for high-resolution structure determination the need for invasive procedures. In other words, it's a way to visualize the structure and function of of gamma rays from these substances to visualize and measure the functioning of organs, tissues, or bone
5. Cryo fixation (vitrification) and freeze-substitution of a regular, homogenous, and purified protein complex or virus particle? organs and tissues by using imaging technologies that do not require making incisions or inserting in the body. Here are some key points about nuclear medicine:
6. Single Particle Analysis (SPA) and Electron Tomography (ET) 6. What is Electron Tomography, how does it work, and what are its pros and cons? instruments into the body. 1. Diagnostic Applications: Nuclear medicine is primarily used for diagnostic purposes. It can help in
g in 3D reconstruction. Here are some key points about non-invasive visualization:
7. Cryo electron tomography (cryoET) of cells and tissue 7. Describe three major steps needed to image a cellular structure by cryoET. the early detection of diseases, including various types of cancers, heart disease, endocrine
zed via laser scanning confocal 1. Safety: Because it does not involve surgery or the insertion of instruments into the body, there disorders, and neurological conditions.
8. Correlative light and electron microscopy (CLEM)
The answers to the tutorial questions based on the provided lecture content: is a lower risk of complications compared to invasive procedures. 2. Therapeutic Applications: In addition to diagnosis, nuclear medicine can also be used
it is often a critical technique for 9. Resolving power comparison between light and electron microscopes 2. Real-Time Imaging: Some non-invasive imaging techniques, such as ultrasound and certain therapeutically. For example, it can be employed to treat certain types of cancer, particularly those
1. Two major differences between SEM and TEM:
confocal microscopy is renowned for. types of MRI, can provide real-time images of the body's internal structures. that are not easily accessible by surgery, by delivering targeted radiation to tumor cells.
Resolving Power of Light and Electron Microscopes:
○ SEM examines the surface of a sample by detecting electrons that are deflected
by the sample (BSEs and SEs), providing surface topography and composition 3. Repeatability: Non-invasive imaging can be repeated as often as necessary without causing 3. Imaging Techniques: The most common imaging techniques in nuclear medicine are Single Photon
s to reconstruct a 3D structure from a • Super Resolution: Techniques to surpass the diffraction limit of light. information. harm to the patient, which is useful for monitoring changes over time, such as the progression Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). These
mmon in fields like histology and cell • Wavelength Limitation: Electron microscopes use shorter wavelengths than light, allowing higher resolution. ○ TEM analyzes electrons that have transmitted through the sample, offering of a disease or the effects of treatment. techniques provide detailed, three-dimensional images that can show the function of organs and the
ial organization of cells or tissues in • Electron microscope has wider and higher resolution range than Super resolution Microscope 4. Detail: Modern non-invasive imaging technologies can provide detailed images that are metabolic activity within the body.
detailed cross-sectional views of the internal structure of the sample.
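The wavelength argument behind the resolving-power comparison can be made quantitative. A sketch comparing the Abbe diffraction limit for visible light with the relativistic de Broglie wavelength of accelerated electrons (standard formulas; the 510 nm, NA 1.4, and 300 kV values are illustrative):

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34      # Planck constant, J*s
M0 = 9.1093837015e-31   # electron rest mass, kg
E = 1.602176634e-19     # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength(voltage: float) -> float:
    """Relativistic de Broglie wavelength (m) of electrons
    accelerated through `voltage` volts."""
    return H / math.sqrt(2 * M0 * E * voltage *
                         (1 + E * voltage / (2 * M0 * C**2)))

def abbe_limit(wavelength: float, na: float) -> float:
    """Abbe diffraction limit d = lambda / (2 * NA)."""
    return wavelength / (2 * na)

# Green light with a high-NA oil objective: ~180 nm lateral resolution limit.
print(abbe_limit(510e-9, 1.4) * 1e9)      # ~182 nm
# 300 kV TEM electrons: ~1.97 pm, about five orders of magnitude shorter.
print(electron_wavelength(300e3) * 1e12)  # ~1.97 pm
```

The picometre-scale electron wavelength is why electron microscopes out-resolve even super-resolution light microscopy (in practice, lens aberrations rather than wavelength limit EM resolution).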
2. Sample preparation for conventional TEM: comparable to or even surpass the resolution of some invasive methods. 4. Radiopharmaceuticals: These are the radioactive substances used in nuclear medicine. They are
s a form of sectioning known as optical Electron Scattering in EM 5. Wide Range of Applications: They are used for diagnosing a variety of conditions, from typically composed of a radioactive isotope attached to a compound that is known to accumulate in
○ Key steps: Chemical fixation, staining, trimming, sectioning, collection,
broken bones to internal bleeding, tumors, and more.
embedding, infiltration, and dehydration.
○ Pros: Good contrast due to heavy metal staining, suitable for studying overall 6. Radiation Exposure: Some non-invasive imaging methods, like X-rays and CT scans, use emits radiation that can be detected by imaging devices.
cell morphology, not critically dependent on humidity and temperature, and ionizing radiation. However, the amount is carefully controlled to minimize risk while still 5. Non-Invasive: Nuclear medicine procedures are generally non-invasive, with the
monitored over time without fixing or samples can be stored and imaged multiple times. providing the necessary diagnostic information. radiopharmaceuticals being administered through injection, inhalation, or ingestion.
d, the cells are imaged in their entirety ○ Cons: Chemical fixation can be toxic and harmful to cells, not in native state, 6. Physiological Imaging: Unlike other imaging modalities that primarily provide anatomical images
7. Contrast Agents: In some cases, non-invasive imaging may involve the use of contrast
slow penetration, may alter protein structure, material extraction due to agents, which are substances that are introduced into the body to enhance the visibility of (like X-rays or CT scans), nuclear medicine provides physiological images. This means it can show
dehydration, deformations/distortions, and limited resolution.
croscopy. However, to fully explore the 7. Safety: The amounts of radioactive material used in nuclear medicine are carefully controlled to
3. Three approaches/techniques to obtain 3D information of cells or tissues: type of imaging being performed.
○ Serial Block Face SEM (SBF-SEM): Involves automated sectioning and 8. Patient Comfort: Non-invasive procedures are generally more comfortable for patients and ensure they are safe for the patient and deliver the lowest possible radiation dose necessary to obtain
on the type of sample. For instance, imaging of the sample to create a 3D reconstruction. do not require anesthesia or sedation. the diagnostic information.
ectioning might not be necessary as the ○ Focused Ion Beam SEM (FIB-SEM): Uses a focused ion beam to mill away 8. Theragnostics: This is an emerging field combining the diagnostic and therapeutic aspects of nuclear
thin layers of the sample and image the freshly exposed surface. • Use of 2D (f(x,y)) or 3D (f(x,y,z)) signals for clinical analysis, medical intervention, and visual medicine, where the same radiopharmaceutical or a similar one is used for both imaging and
enhance the 3D imaging capabilities ○ Cryo-Electron Tomography (cryoET): Captures 3D information of vitrified treatment.
representation of organ/tissue function.
plication. The decision to use (rapidly frozen) samples, preserving near-native structures for high-resolution The phrase "Use of 2D (f(x,y)) or 3D (f(x,y,z)) signals for clinical analysis, medical intervention, and 9. Advantages: Nuclear medicine can detect diseases at a very early stage and can monitor the
ple, and the specific imaging goals. 3D imaging. visual representation of organ/tissue function" refers to the application of two -dimensional and effectiveness of treatments over time.
4. Two methods to localize a protein of interest by TEM: three-dimensional data representations in medical imaging to analyze and understand the structure 10. Limitations: As with any medical procedure, nuclear medicine has limitations. It may not provide
○ Immuno-gold labeling: Uses antibodies tagged with gold particles to bind to and function of organs and tissues within the body. detailed anatomical images, and there can be a waiting period between the administration of the
specific proteins, providing high contrast in TEM due to the gold particles. Here's a breakdown of the concepts involved: radiopharmaceutical and the imaging procedure to allow for uptake in the target area.
○ APEX2 or miniSOG tagging: These are enzyme tags that can be genetically 1. 2D Signals (f(x,y)): In the context of medical imaging, a 2D signal is a representation of data
fused to proteins of interest. APEX2 catalyzes the polymerization of DAB into an that is confined to two dimensions, typically length and width (x and y axes). An example of • Imaging Focus: Function over anatomy, making it a physiological imaging modality.
electron-dense product, while miniSOG generates singlet oxygen upon 2D imaging is a standard X-ray, which provides a flat image that represents a projection of the • Common Modalities: SPECT (Single-photon emission computed tomography) and PET (Positron emission
illumination, leading to DAB photo-oxidation for protein localization. body part being imaged. The signal function f(x,y) describes the intensity of the image at each tomography).
5. EM-based approach for high-resolution structure determination: point in the 2D plane.
○ Single Particle Analysis (SPA) Cryo-EM: Ideal for regular, homogenous, and 2. 3D Signals (f(x,y,z)): A 3D signal, on the other hand, adds depth (z-axis) to the data, allowing
purified protein complexes or virus particles. Key steps include vitrification of for a more comprehensive representation of the volume and spatial relationships of the body
samples, automated data collection, particle picking, alignment, and 3D part being imaged. Technologies like CT (Computed Tomography) scans and MRI (Magnetic
reconstruction. Resonance Imaging) can generate 3D data sets, where the signal function f(x,y,z) describes
6. Electron Tomography: the intensity or other properties at each point in three-dimensional space.
○ How it works: A series of 2D images are taken at different angles as the 3. Clinical Analysis: Doctors and medical professionals use these 2D and 3D signals to
sample is tilted, which are then computationally combined to reconstruct a 3D analyze the images for diagnostic purposes. They look for abnormalities, measure the size of
model of the sample. structures, and assess the function of organs.
○ Pros: Provides 3D structural analysis of rare, large, and irregularly shaped 4. Medical Intervention: The data can guide medical procedures, such as biopsies or surgeries.
objects, can be performed on isolated particles or structures inside cells, and For example, 3D imaging can help surgeons plan and navigate around critical structures
offers high-resolution imaging. during an operation.
○ Cons: Time-consuming, low throughput, limited to relatively small areas, and 5. Visual Representation of Organ/Tissue Function: Beyond structure, certain imaging
subject to the missing wedge problem, which reduces axial resolution and modalities can provide information about the function of organs and tissues. For instance,
introduces anisotropic resolution. functional MRI (fMRI) can show areas of the brain that are active during specific tasks, and
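The missing-wedge penalty in electron tomography is often quantified with Radermacher's elongation factor for a single-axis tilt series; a sketch (the ±60° tilt range is a typical but illustrative value):

```python
import math

def elongation_factor(max_tilt_deg: float) -> float:
    """Axial elongation caused by the missing wedge for a single-axis
    tilt series spanning +/- max_tilt_deg (Radermacher's estimate)."""
    a = math.radians(max_tilt_deg)
    return math.sqrt((a + math.sin(a) * math.cos(a)) /
                     (a - math.sin(a) * math.cos(a)))

# A typical +/-60 degree tilt range stretches features ~1.55x along z,
# which is why axial resolution is worse than in-plane resolution.
print(round(elongation_factor(60), 2))  # 1.55
# A full +/-90 degree range (no missing wedge) would give a factor of 1.
print(round(elongation_factor(90), 2))  # 1.0
```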
7. Three major steps to image a cellular structure by cryoET: PET (Positron Emission Tomography) scans can reveal how tissues are metabolizing
○ Cryo-fixation: Rapidly freezing the sample in liquid ethane or propane to glucose, which is an indicator of cellular activity.
preserve its native state.
○ Cryo-FIB milling: Thinning the frozen-hydrated sample to a suitable size for • Establishes a database of normal anatomy and physiology to identify abnormalities.
TEM imaging using a focused ion beam. • Part of biological imaging.
○ Cryo-tomography data collection: Acquiring a series of 2D images at different
tilt angles, followed by computational reconstruction into a 3D model. *Vacuum to minimise scattering. 2. Types of Imaging
• Anatomical: e.g., Intracranial Disease and Injury.
then reconstruct using a computer. Electron scattering in electron microscopy (EM) is a fundamental process that affects the quality and type of 1. Purpose: To visualize the structure and anatomy of the body's organs and tissues.
information that can be obtained from a sample. When an electron beam interacts with a sample, several scattering 2. Techniques: 2. Principles of Nuclear Medicine
events can occur: ▪ X-ray radiography: Uses ionizing radiation to create 2D images of bones and dense • Non-invasive: Determines physiological processes without surgery.
alization of different proteins is 1. Elastic Scattering: structures. • Tracer Principle: Radiopharmaceuticals are distributed and metabolized according to their chemical
○ Electrons are deflected by the electrostatic field of the atomic nuclei without losing energy. ▪ CT (Computed Tomography): Employs X-rays and computer processing to generate structure.
○ This type of scattering is predominant in TEM and is useful for high-resolution imaging as it provides detailed cross-sectional images, often used for diagnosing conditions related to bones, • Data Display: Images, numerical data, and time-activity curves.
information about the sample's internal structure. blood vessels, and soft tissues. • Physiological Information: Can provide insights into glucose metabolism, blood flow, tissue uptake,
2. Inelastic Scattering: ▪ MRI (Magnetic Resonance Imaging): Utilizes a strong magnetic field and radio waves to receptor binding, and oxygen utilization.
FP, and GM130 for protein ○ Electrons lose energy during interaction with the sample's atoms, often due to electron -excitation produce high-resolution 2D or 3D images of the body's internal structures without using
events. ionizing radiation. 3. Gamma Camera (uses gamma rays to image)
○ This scattering can lead to a background signal that may reduce image contrast and resolution in 3. Applications: • Components: Collimators, Scintillation detector, Electronics & computer elements.
TEM. ▪ Identifying structural abnormalities, such as tumors, fractures, or anatomical • Function: Captures gamma rays using a NaI crystal and lead collimator to image from a specific point.
calculate pixel size. 3. Secondary Electron Emission: malformations. A gamma camera is a key component in nuclear medicine imaging, particularly for Single Photon Emission
○ Occurs when the incident electron has enough energy to eject another electron from the sample's ▪ Assessing the size, shape, and position of organs. Computed Tomography (SPECT). It is designed to capture the gamma rays emitted by
atoms, typically in SEM. ▪ Guiding biopsies and surgical procedures. radiopharmaceuticals that have been administered to a patient. Here's a breakdown of the gamma camera
4. Backscattered Electrons (BSEs): 4. Strengths: High spatial resolution, excellent for detailed structural images, and can visualize and its components along with their functions:
○ Higher-energy electrons that are scattered back towards the detector in SEM, providing information dense tissues like bones and calcifications. 1. Collimators:
about the sample's composition and topography. ○ Function: Collimators are made of dense material, usually lead, and are designed with small,
• Functional: e.g., metabolic diseases, lesions, cognition.
Scanning Electron Microscopy (SEM): 1. Purpose: To evaluate the function and metabolic activity of organs and tissues. They act like a filter to ensure that the gamma rays detected come from the desired area of the
• Examines deflected electrons (BSEs and SEs) to provide surface information. 2. Techniques: body, improving image clarity.
• Can image whole organisms, tissues, and cells. ○ Importance: Without collimation, gamma rays coming from various angles would create a
○ PET (Positron Emission Tomography): Involves the use of radiotracers that are taken up
blurred image. Collimators help to localize the source of the gamma rays.
• Various SEM techniques for large tissue volumes (e.g., Serial Blockface SEM, FIB-SEM). by metabolically active cells, providing information about cellular function and glucose
○ Serial Blockface SEM metabolism. 2. Scintillation Detector:
○ fMRI (functional Magnetic Resonance Imaging): Detects changes in blood flow and ○ Function: This is the part of the gamma camera that actually detects the gamma rays. The
i. Principle: SBF-SEM involves the automated removal of thin layers (typically nanometers to tens of most common scintillation detector is made of a large area sodium iodide (NaI) crystal, often
nanometers thick) from the surface of a resin-embedded block-face sample and then imaging the oxygenation in the brain, reflecting brain activity.
○ SPECT (Single Photon Emission Computed Tomography): Similar to PET but uses a doped with thallium to increase its sensitivity to gamma rays.
exposed surface with an SEM. ○ Process: When a gamma ray passes through the collimator and strikes the crystal, it is
ii. Process: different type of radiotracer and gamma camera to image blood flow and function in
various organs. converted into a flash of light (scintillation). This light is then converted into electrical signals.
□ The sample is embedded in a resin and trimmed to create a flat surface. 3. Photomultiplier Tubes (PMTs):
3. Applications:
□ A diamond or glass knife is used to shave off a thin layer of the block face. ○ Function: These are attached to the scintillation detector. When light from the crystal is
○ Assessing how well an organ is functioning, such as the heart's pumping ability or brain
□ The newly exposed surface is imaged with an SEM, which detects secondary electrons (SEs) activity during different tasks. produced, PMTs amplify this light signal into an electrical signal through a process called
and backscattered electrons (BSEs) to generate a high-resolution 2D image. ○ Detecting areas of the brain that are active or inactive, which can be useful in photoelectric effect.
□ The process is repeated, with the SEM imaging the freshly exposed surface after each slice is neurological disorders. ○ Detail: Each PMT is positioned to correspond to a specific part of the crystal, allowing the
removed, creating a series of 2D images. ○ Identifying tumors and determining if they are metabolically active. system to determine where the gamma ray interaction occurred.
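The position estimation described for the gamma camera's PMT array (classic Anger logic) amounts to a signal-weighted centroid; a minimal sketch in which the PMT layout and signal values are hypothetical:

```python
def anger_position(pmt_signals, pmt_positions):
    """Estimate the (x, y) scintillation position as the
    signal-weighted centroid of the PMT outputs (Anger logic)."""
    total = sum(pmt_signals)
    x = sum(s * px for s, (px, py) in zip(pmt_signals, pmt_positions)) / total
    y = sum(s * py for s, (px, py) in zip(pmt_signals, pmt_positions)) / total
    return x, y

# Four hypothetical PMTs at the corners of a 10 cm square; a light flash
# nearer the right-hand tubes produces larger signals there, pulling the
# estimated position toward them.
positions = [(0, 0), (10, 0), (0, 10), (10, 10)]
signals = [1.0, 3.0, 1.0, 3.0]
print(anger_position(signals, positions))  # (7.5, 5.0)
```

Real cameras refine this with energy windowing and linearity corrections, but the weighted centroid is the core idea.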
iii. Advantages: 4. Strengths: Provides information on physiological processes, metabolism, and function, which 4. Electronics & Computer Elements:
□ Provides high-resolution 3D images of large volumes of biological samples. can be crucial for understanding diseases that may not cause structural changes. ○ Function: The electrical signals from the PMTs are processed by the electronics to determine
□ Allows for the visualization of cellular and tissue structures in their entirety. the position and energy of the incoming gamma rays. The computer then uses this information
iv. Challenges: to construct an image.
○ Role in Imaging: These systems are crucial for creating a two-dimensional image (planar
□ The process can be time-consuming due to the large number of images that need to be imaging) or, when combined with data from multiple angles, to generate a three-dimensional
acquired and aligned. image (as in SPECT).
□ The quality of the images can be affected by the evenness of the block-face and the precision 5. Gantry:
of the slicing.
○ Function: The gantry is the structure that supports the gamma camera and allows it to rotate
around the patient. This rotation is essential for acquiring multiple views of the patient's body,
which are then used to create cross-sectional images in SPECT.
○ Importance: The gantry ensures that the camera can move in a controlled manner to capture
data from different angles.
6. Data Acquisition System (DAS):
○ Function: The DAS is responsible for collecting the raw data from the gamma camera and
formatting it in a way that can be used by the computer for image reconstruction.
○ Process: It coordinates the timing of data collection and ensures that the data from different
angles are properly aligned.
7. Computer System:
○ Function: The computer system processes the data from the DAS and uses algorithms to
reconstruct a two-dimensional or three-dimensional image of the distribution of the
Key Differences radiopharmaceutical in the patient's body.
• Focus: Anatomical imaging focuses on the physical structure and morphology, while ○ Software: It often includes specialized software for image analysis, quantification, and
○ Focused ion beam SEM functional imaging concentrates on how organs and tissues are working. interpretation.
• Data: Anatomical imaging provides detailed images of body parts, whereas functional imaging
i. Principle: FIB-SEM uses a focused ion beam to mill or cut into the surface of a sample, with the
provides data about biological processes and activities. 4. Radiopharmaceuticals
material removed being imaged by an SEM to create a 3D representation of the sample.
• Applications: Anatomical imaging is often used for diagnosing structural problems, while • Definition: Radioactive materials administered to patients for imaging or therapy.
ii. Process:
tissues function. body fluid.
□ A focused ion beam (typically gallium ions) is used to sputter or mill away a thin layer of the tissues function. body fluid.
sample's surface. • Technological Requirements: Functional imaging often requires the use of radioactive • Diagnostic vs. Therapeutic: Diagnostics require minimal radiation dose; therapy aims to maximize
□ The removed material forms a small mound or "curtain" that can be imaged by the SEM, tracers or contrast agents to highlight specific biological processes.
ment of the size and shape of radiation to the diseased area while minimizing damage to healthy tissues.
revealing the structure beneath the surface.
oscopy, morphometric analysis refers □ The ion beam is then used to create a flat surface, and the process is repeated in a serial
properties of structures within an 3. Image Characteristics and Quality 5. Ideal Radionuclides for Diagnostics and Therapy
manner, with each layer being imaged before the next is milled.
es that are visible under a microscope. • Diagnostics: High photon emission(for detection), no charged particles, short half -life.
iii. Advantages: • Five major components: patient, operator, observer, imaging system, image • Therapy: Emits energetic charged particles(beta,auger,alpha), low gamma photon abundance allowing
metric analysis often involves □ Offers precise control over the milling process, allowing for the creation of highly detailed 3D activity to be traced, fairly short half-life.
images.
n cell biology, one might measure the
e of a nucleus. □ Can be used to create 3D images of large volumes of material, including thick cellular or tissue 6. Administration of Radiopharmaceuticals
samples. • Methods: Intravenous injection, inhalation, ingestion.
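The "short half-life" criterion for diagnostic radionuclides can be made concrete with simple exponential decay. A sketch using Tc-99m's roughly 6-hour half-life (the dose figures are illustrative):

```python
def activity(a0: float, t_hours: float, half_life_hours: float) -> float:
    """Remaining activity after time t for a radionuclide with the
    given half-life: A(t) = A0 * 2**(-t / T_half)."""
    return a0 * 2 ** (-t_hours / half_life_hours)

# Tc-99m (a common SPECT radionuclide) has a ~6 h half-life:
# three half-lives (18 h) leave one eighth of the administered activity.
print(activity(370, 18, 6.0))  # 46.25 (MBq, from a 370 MBq dose)
```

A half-life of hours is long enough to image uptake but short enough that the patient's radiation burden falls off within a day.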
etric measurements are typically made
nits (like micrometers) is calibrated, iv. Challenges:
urements into actual physical □ The ion beam can cause damage to the sample, particularly for beam-sensitive materials. 7. Targeting Tissue or Organ
□ The process can be complex and requires specialized equipment and expertise. • Chemical Properties: Determine the metabolism of the radiopharmaceutical in the body.
mages often undergo segmentation, • Types: Ions, particles/aggregates, radio-labelled blood cells, complex molecules.
8. Interventional Nuclear Medicine
• Unsealed Source Radiotherapy: Uses soluble radioactive substances.
• Sealed-Source Therapy (Brachytherapy): Radioisotope remains in a capsule or wire.
In nuclear medicine, the terms "sealed" and "unsealed" sources refer to two different methods of
administering radioactive material for therapeutic purposes. Each method has its own applications,
advantages, and considerations, which are outlined below:
Sealed-Source Therapy
1. Brachytherapy: Sealed-source therapy is often associated with brachytherapy, where a radioactive
source is enclosed in a sealed container or capsule that does not leak radioactive material.
2. Targeted Delivery: The sealed source is placed directly into or near the tumor or the area being
treated. It can be implanted permanently or temporarily.
3. Protection: Because the radioactive material is sealed, there is minimal risk of exposure to
healthcare workers and the patient's family, as the radiation does not escape the source's container.
• Staining: sputter-coated with thick layer of gold or platinum 4. Controlled Dose: The dose rate and total radiation dose can be carefully controlled by selecting the
appropriate radioisotope, its activity, and the duration of implantation.
Transmission Electron Microscopy (TEM): 5. Common Uses: Sealed sources are used to treat various types of cancer, including prostate,
• Observes electrons transmitted through the sample for internal views. cervical, and some brain tumors.
• Requires thin (~60-300 nm) sections for electron transparency. • Display scale: intensity and color information about physical parameters(quality factors). 6. Limited Distribution: The radioactivity is contained within the sealed source, so there is no systemic
as peaks and bases below the
• Offers higher resolution than SEM but is limited by sample size and thickness. • 3D display: distribution of recorded intensity in 3D space. distribution of the radioisotope in the body.
• Staining: osmium, uranium, lead • Quality factors: contrast, blur, noise, artifacts, and distortion. Unsealed-Source Therapy
• Contrast –different shades of gray, light intensities, or colors. 1. Systemic Administration: Unsealed sources involve administering radioactive material that is not
sealed, such as radiopharmaceuticals that can enter the body's systemic circulation.
2. Targeted Therapy: These radiopharmaceuticals are designed to be taken up preferentially by the
target cells or tissues, such as cancer cells, due to their biological properties.
3. Exposure Risk: There is a higher risk of radiation exposure to others since the radioactive material is
not sealed. This requires careful handling and disposal, as well as shielding and isolation measures
for the patient during treatment.
4. Whole-Body Radiation: The radioactive material can distribute throughout the body, which can be
beneficial for treating systemic diseases like metastatic cancer.
5. Common Uses: Unsealed sources are used in therapies such as radioiodine therapy for thyroid
cancer or non-Hodgkin's lymphoma.
6. Complex Dosimetry: Calculating the radiation dose in unsealed-source therapy is more complex due
to the variable uptake and distribution of the radiopharmaceutical in the body.
Sample Preparation for TEM:
• Chemical Fixation: Crosslinking proteins with aldehydes like glutaraldehyde (GA) or paraformaldehyde (PFA).
(fragmentary notes on fluorescence-based imaging of nuclei; only scattered phrases survive the page extraction)

Why Both Methods Exist
The choice between sealed and unsealed sources depends on several factors:
• Type of Disease: Some diseases are more effectively treated with localized radiation (sealed sources), while others may require systemic treatment (unsealed sources).
• Accessibility: Sealed sources can be used when the target tissue is easily accessible for direct implantation.
• Risk Management: Sealed sources reduce the risk of radiation exposure to medical staff and family members.
• Treatment Duration: Sealed sources may be used for either short-term or long-term implantation, while unsealed sources involve a systemic distribution that decays over time.
• Patient Considerations: Factors such as patient preference, the extent of disease, and overall health can influence the choice of therapy.
Both sealed and unsealed source therapies play crucial roles in nuclear medicine, providing targeted treatment options for a variety of conditions. The selection of the appropriate therapy depends on the specific needs of the patient and the nature of the disease being treated.
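The systemic activity from an unsealed source falls off by ordinary radioactive decay, A(t) = A0 · 2^(−t/T½). A minimal sketch using the approximate 8.02-day half-life of the 131I used in radioiodine therapy (the administered activity below is an illustrative number, not a clinical value):

```python
import math

def activity(a0_mbq: float, half_life_days: float, t_days: float) -> float:
    """Remaining activity after t days: A(t) = A0 * 2**(-t / T_half)."""
    return a0_mbq * 2.0 ** (-t_days / half_life_days)

# Illustrative radioiodine (131I) dose; half-life ~8.02 days.
a0 = 5550.0  # administered activity in MBq (example value only)
for t in (0.0, 8.02, 16.04, 24.06):
    print(f"day {t:5.2f}: {activity(a0, 8.02, t):7.1f} MBq remaining")
```

One half-life halves the activity, so after three half-lives (~24 days) only one-eighth of the administered activity remains in the body, ignoring biological clearance.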

9. SPECT (Single Photon Emission Computed Tomography)
• 3D Imaging: Uses gamma rays to provide cross-sectional slices.
• Radionuclides: Typically 99mTc, 67Ga, 111In, 123I.
• Principle: Detects gamma-ray photons emitted from a radiopharmaceutical within the body.

(fragmentary notes on Golgi protein localization and intra-Golgi trafficking; only scattered phrases survive the page extraction)
• Staining: Enhancing contrast with heavy metals like osmium, which reacts with unsaturated fatty acids and crosslinks membrane lipids.

• Blur – the primary effect of image blur is to reduce the contrast and visibility of small objects or detail.
• Noise – random variation of brightness/color information in images.
• Artifact – can create image features that do not represent a body structure.
• Distortion – inaccurate impression of size, shape, and relative positions.
• Sensitivity and Specificity: ability to show a condition as positive or negative.
• ROC Curve: relationship between sensitivity and specificity.
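The sensitivity/specificity definitions can be made concrete with a small worked example (the counts below are hypothetical screening results, not data from these notes):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of diseased cases the test flags positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of healthy cases the test flags negative."""
    return tn / (tn + fp)

# Hypothetical results: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
sens = sensitivity(90, 10)   # 0.9
spec = specificity(80, 20)   # 0.8
fpr = 1.0 - spec             # false-positive rate = 1 - specificity (~0.2)
print(sens, spec, fpr)
```

Sweeping the test's decision threshold and plotting sensitivity against this false-positive rate at each setting is exactly what produces an ROC curve.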

Differences between PET and SPECT


SPECT (Single Photon Emission Computed Tomography) and PET (Positron Emission Tomography) are
both nuclear medicine imaging techniques that provide three-dimensional information about the body's
function and physiology. However, they differ in several key aspects:
1. Principle of Imaging:
○ SPECT: Uses gamma-emitting radionuclides. When the radionuclide decays, it emits a single
gamma photon, which is detected by a gamma camera.
○ PET: Utilizes radionuclides that decay by positron emission. The emitted positron annihilates
with an electron, producing two gamma photons that travel in opposite directions. These are
detected simultaneously using coincidence circuitry.
2. Radionuclides Used:
○ SPECT: Commonly uses radiopharmaceuticals like 99mTc (Technetium), 67Ga (Gallium), 111In
(Indium), or 123I (Iodine).
○ PET: Often uses fluorine-18 labeled compounds, most notably FDG (Fluorodeoxyglucose), but
can also use other positron-emitting isotopes like carbon-11, nitrogen-13, oxygen-15, etc.
3. Detection Mechanism:
○ SPECT: Detects single photons, and the data is acquired from multiple angles around the
patient to create a tomographic image.
○ PET: Requires the simultaneous detection of two photons (coincidence detection), which helps
to precisely locate the source of emission within the body.
4. Image Resolution and Sensitivity:
○ SPECT: Generally has lower resolution compared to PET and may be more prone to artifacts due to the collimator's limiting effect on spatial resolution.
○ PET: Offers higher resolution and sensitivity due to the direct detection of two annihilation photons and advanced image reconstruction techniques.
5. Quantification:
○ SPECT: Quantification of tracer concentration can be challenging due to the effects of collimator penetration and scatter.
○ PET: Allows for more accurate quantification of tracer concentration and provides standardized uptake values (SUVs), which are important in oncology for assessing tumor metabolism.
*False positive rate = 1 − specificity

Sample Preparation for TEM (continued):
• Dehydration: Replacing water with organic solvents (e.g., ethanol series).
• Embedding: Infusing and polymerizing plastic resins into cells.
• TEM is limited by sample thickness (no more than 80 nm for 2D and 300 nm for 3D).
It's important to note that while many of these steps are performed at room temperature, some variations such as
(fragmentary notes on using reporters such as CD59 and furin to study trafficking kinetics; only scattered phrases survive the page extraction)
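The standardized uptake value mentioned under Quantification above is, in its common body-weight form, the tissue activity concentration divided by the injected dose per unit body weight. A sketch with illustrative numbers, assuming a tissue density of ~1 g/mL so that mL and g cancel:

```python
def suv_bw(conc_kbq_per_ml: float, dose_mbq: float, weight_kg: float) -> float:
    """Body-weight SUV = tissue concentration / (injected dose / body weight).
    Assumes ~1 g/mL tissue density; decay correction is ignored here."""
    dose_kbq = dose_mbq * 1000.0
    weight_g = weight_kg * 1000.0
    return conc_kbq_per_ml * weight_g / dose_kbq

# Illustrative: 370 MBq FDG injected into a 70 kg patient;
# lesion concentration measured as 18.5 kBq/mL at scan time.
print(suv_bw(18.5, 370.0, 70.0))  # 3.5
```

An SUV of 1 corresponds to the tracer being spread uniformly through the body; values well above 1 indicate focal uptake, which is why SUVs are used to grade tumor metabolism.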

Quick Notes Page 3


Lecture 12: MRI Lecture 13: fMRI
Magnetic Resonance Imaging (MRI) 1. Introduction to fMRI:
• Functional neuroimaging using MRI technology.
• Definition: A medical imaging technique used to produce high-quality images of the soft tissues in the human
body. • Measures brain activity by detecting changes in blood flow associated with brain activity.
Soft tissues are the non-mineralized, flexible components of the body that contrast with the hard, mineralized • Cerebral blood flow increases to active brain regions.
tissues like bones and teeth. Here are some examples of soft tissues:
1. Muscles: Comprised of muscle fibers, muscles are responsible for movement and are one of the most 2. Evolution of fMRI:
abundant soft tissues in the body. • 1973: Lauterbur suggests using NMR for imaging.
2. Ligaments: These are tough, fibrous connective tissues that connect bones to other bones, providing • 1977: Clinical MRI scanner patented; Mansfield proposes echo-planar imaging (EPI) for faster image
stability to joints. acquisition.
3. Tendons: Similar to ligaments, tendons are strong, inelastic bands of connective tissue that connect • 1990: Ogawa observes the BOLD (blood-oxygen-level dependent) effect, where blood vessels become more
muscles to bones. visible as oxygenation decreases.
4. Fat: A soft tissue that stores energy, provides insulation, and serves as a cushion to protect organs. • 1991: Belliveau observes functional images using a contrast agent.
5. Blood Vessels: Including arteries, veins, and capillaries, these are the flexible tubes that transport blood • 1992: Ogawa & Kwong publish the first functional images using the BOLD signal.
throughout the body.
6. Nerves: Comprised of nerve fibers, nerves are part of the nervous system and are responsible for 3. fMRI Basics:
transmitting signals between the brain and the rest of the body. • fMRI produces many images at lower resolution compared to MRI, which produces a single high-resolution
7. Skin: The body's largest organ, skin is a flexible, protective barrier that covers the body and includes the image.
epidermis, dermis, and subcutaneous tissue layers. • The BOLD signal is an indirect measure of neural activity, with a delay of about 2 seconds post-activation.
8. Internal Organs: Such as the brain, heart, lungs, liver, kidneys, and intestines, which are all surrounded
by connective tissue and are involved in various vital functions.
9. Cartilage: A flexible connective tissue found in various parts of the body, such as the ears, nose, joints,
and between the vertebrae in the spine.
10. Mucous Membranes: Linings of various body cavities, such as the mouth, nose, and digestive tract,
which secrete mucus to keep these areas moist and protected.
11. Synovial Membranes: These line the joints and produce synovial fluid to lubricate the joint spaces and
reduce friction.
• Basis: Principles of nuclear magnetic resonance (NMR), a spectroscopic technique for obtaining microscopic
chemical and physical information about molecules.
• Evolution: MRI has developed from a tomographic imaging technique to a volume imaging technique.

Synopsis of MRI Process


1. Place the subject in a strong magnetic field.
2. Transmit radio waves into the subject for a brief period (2-10 ms).
3. Turn off the radio wave transmitter.
4. Receive radio waves re-transmitted by the subject.
5. Convert the measured RF (radio frequency) data into an image.
Radio waves are used in Magnetic Resonance Imaging (MRI) for several reasons, which are integral to the
functioning of the MRI system and the quality of the images it produces. Here are the key reasons why radio
waves are utilized in MRI:
1. Nuclear Magnetic Resonance (NMR): The basis of MRI is the NMR phenomenon, where atomic nuclei
with an odd number of protons or neutrons (and thus a non-zero nuclear spin) can absorb and re-emit
radio frequency (RF) energy when placed in a strong magnetic field. This is because these nuclei precess
at a specific frequency (the Larmor frequency) that corresponds to the applied magnetic field.
2. Resonance Frequency: The Larmor frequency is within the radio wave frequency range, which allows
MRI machines to use RF pulses to interact with the hydrogen nuclei (protons) in the body, which are
abundant and have a strong NMR signal.
3. Energy Transfer: By applying an RF pulse at the Larmor frequency, MRI systems can selectively flip the magnetic vector of the hydrogen protons out of alignment with the main magnetic field. This process is essential for generating the signal that will be used to create the MRI image.
4. Non-Ionizing Radiation: Unlike X-rays, which use ionizing radiation that can be harmful to biological tissues, radio waves used in MRI are non-ionizing and do not damage the body's cells or DNA. This makes MRI a safer option for imaging compared to some other imaging modalities.
5. Spatial Encoding: After the RF pulse, the system uses gradient magnetic fields to spatially encode the re-emitted RF signals from the protons. This encoding allows the MRI scanner to determine the location of the signals within the body.
6. Signal Detection: The re-transmitted RF signals from the protons are detected by the MRI system. The time it takes for the protons to relax back to their original state (T1 and T2 relaxation times) varies between different types of tissues, providing the contrast needed to differentiate between them in the final image.
7. Image Contrast: The use of RF pulses allows for the manipulation of image contrast based on T1, T2, and proton density (PD) weighting, which is crucial for diagnosing various medical conditions.
8. Functional Imaging: By using RF pulses in specific sequences, functional MRI (fMRI) can be performed, which allows for the visualization of brain activity by detecting changes in blood flow related to neural activity.
9. Spectral Analysis: RF pulses can be used to perform Magnetic Resonance Spectroscopy (MRS), which provides information about the chemical composition of tissues and can be used to detect abnormalities such as tumors.

Factors Contributing to MR Imaging
• Quantum properties of nuclear spins
• Radio frequency (RF) excitation properties
• Tissue relaxation properties
• Magnetic field strength and gradients
• Timing of gradients, RF pulses, and signal detection

The discovery mentioned in the lecture notes refers to a pivotal moment in the development of functional MRI (fMRI) technology. Here's a more detailed explanation based on the provided information:
Discovery of Localized Signal Increase in fMRI:
• In 1991, Kwong et al. made a significant discovery that has become foundational to fMRI. They found that when there is an increase in neuronal activity in a specific area of the brain, there is a corresponding increase in the MRI-measurable signal in that local region.
• This increase in signal is only a few percent, but it is detectable and can be used to map brain function. The process is as follows:
1. Neuronal Activity: An area of the brain becomes active, such as when a person performs a task or perceives a stimulus.
2. Hemodynamic Response: The increased brain activity leads to a local increase in blood flow and oxygen consumption in that area.
3. BOLD Contrast: The blood flow change results in a relative increase in oxygenated blood (oxyhemoglobin) and a decrease in deoxygenated blood (deoxyhemoglobin). Since deoxygenated blood is paramagnetic and distorts the local magnetic field, a decrease in its concentration leads to a more uniform magnetic field and an increase in the MRI signal.
4. fMRI Signal Change: The MRI scanner detects this change in signal, which is then used to infer the location and intensity of brain activity.
This discovery was crucial because it provided a non-invasive method to visualize and study brain function in real time. It laid the groundwork for the use of fMRI in cognitive neuroscience, clinical diagnostics, and the understanding of brain-behavior relationships.
The timing of the BOLD response, as depicted in the lecture notes, is characterized by a delay (C: ~2 seconds), a rise (D: 4-5 seconds), a plateau (E: 5 seconds), a fall (F: 4-6 seconds), and a return to baseline or undershoot (G). This timeline is essential for understanding the temporal dynamics of the BOLD response and for designing fMRI experiments that can accurately capture changes in brain activity related to specific tasks or stimuli.

4. fMRI Setup:
• Consists of a magnet, radiofrequency (RF) coils, gradients, a computer system, and a patient handling system.
• The magnet aligns protons, RF coils transmit and receive energy, and gradients induce spatial encoding.
Nuclei Suitable for NMR
• Nuclei must have spin and charge.
• Nuclei composed of protons and neutrons, both with spin ½, but only protons have a charge.
• Good MR nuclei include 1H (hydrogen), 13C, 19F, 23Na, 31P.
• Hydrogen (1H) is the best for MRI due to its abundance in biological tissues and its MR sensitivity.
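The resonance frequencies these nuclei precess at follow the Larmor relation f = γ̄ · B0, where γ̄ is the gyromagnetic ratio over 2π. A quick numerical check for 1H, using the approximate value γ̄ ≈ 42.576 MHz/T:

```python
# Larmor frequency f = gamma_bar * B0.
# For 1H, gamma_bar is approximately 42.576 MHz per tesla.
GAMMA_BAR_1H = 42.576  # MHz/T (approximate)

def larmor_mhz(b0_tesla: float, gamma_bar: float = GAMMA_BAR_1H) -> float:
    """Precession frequency in MHz for a given field strength in tesla."""
    return gamma_bar * b0_tesla

# Common clinical and research field strengths.
for b0 in (1.5, 3.0, 7.0):
    print(f"{b0:.1f} T -> {larmor_mhz(b0):6.1f} MHz")
```

At 1.5 T this gives roughly 64 MHz and at 3 T roughly 128 MHz, squarely in the radio-frequency band, which is why RF pulses (rather than higher-energy, ionizing radiation) are what excite the spins.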

Timeline of MR Imaging Development


• Key milestones from 1924 to the creation of functional images using blood-oxygenation contrast in 1990.

Magnetic Resonance Techniques


• NMR
• MRI
• EPI (Echo-Planar Imaging)
• fMRI (Functional MRI)
• MRS (Magnetic Resonance Spectroscopy)
• MRSI (MR Spectroscopic Imaging)

Nobel Prizes in Magnetic Resonance


• Awards for significant contributions to the field, including those to MRI technology.
Spatial encoding in the context of Magnetic Resonance Imaging (MRI), including functional MRI (fMRI),
refers to the process by which the position of the signal within the body is determined. This is a critical step in
Background on Hydrogen Atom creating a detailed MRI image. Here's a breakdown of how spatial encoding works:
• The human body is primarily composed of fat and water, both rich in hydrogen atoms. 1. Magnetic Field Gradients:
• Hydrogen's single proton and its spin property make it ideal for MRI. • The main magnetic field in an MRI scanner is uniform, meaning it's the same throughout the imaging
volume. To create an image, additional magnetic field gradients are applied, which are linear changes
Magnetic Principles in the magnetic field's strength. These gradients are applied in three orthogonal directions: X (left-right),
Y (anterior-posterior), and Z (superior-inferior).
2. Radiofrequency (RF) Pulses:
• When a patient is placed in the MRI scanner, the static magnetic field aligns the hydrogen nuclei
(protons) in the body. An RF pulse is then applied, which tips these protons out of alignment with the
magnetic field.
3. Gradient Application and Signal Acquisition:
• After the RF pulse, as the protons relax and realign with the magnetic field, they emit a signal. The
spatial encoding gradients are applied in a specific sequence during this relaxation period.
○ A gradient in the X-direction, for example, will cause protons on one side of the gradient to
precess at a slightly different frequency than those on the other side. This frequency difference
corresponds to their position along the X-axis.
○ By switching the gradients on and off in a controlled manner and measuring the emitted signals at
different times, the scanner can determine where the signal is coming from within the body.
4. Fourier Transform:
• The signals collected during the MRI scan are a function of the spatial encoding gradients applied.
These signals are digitized and processed using a mathematical technique called Fourier transform.
The Fourier transform converts the time-domain signal into a spatial frequency distribution, which can
be interpreted as an image.
5. Image Reconstruction:
• The result of the Fourier transform is a set of data that represents the spatial distribution of the MRI
signal within the body. This data is then used to reconstruct a 2D image slice. By repeating this process
with different orientations of the gradients, multiple 2D slices can be acquired, which can be stacked to
create a 3D representation of the body part being imaged.
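Steps 1-5 above can be miniaturized to one dimension: with a linear gradient, precession frequency maps directly to position, so a Fourier transform of the acquired signal recovers the spin-density profile. A toy sketch (pure frequency encoding, no noise or relaxation; the positions and amplitudes are arbitrary illustration values):

```python
import numpy as np

# 1-D frequency encoding: a gradient makes precession frequency depend on
# position, so the FFT of the received signal recovers the spin density.
n = 256                    # number of samples = number of reconstructed pixels
density = np.zeros(n)
density[60] = 1.0          # two point-like "tissues" at known positions
density[200] = 0.5

# With frequency proportional to position, the received time-domain signal
# is (up to scaling conventions) the inverse DFT of the spatial profile.
signal = np.fft.ifft(density)

# Reconstruction: the forward FFT returns the spatial profile.
recon = np.abs(np.fft.fft(signal))
print(int(recon.argmax()))  # strongest pixel recovered at index 60
```

Real scanners repeat this with phase-encoding gradients along the other axes, which is what turns the 1-D idea into full 2-D slices and 3-D volumes.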
In summary, spatial encoding in MRI is the technique used to determine the location of signals within the body, which allows for the creation of detailed images of internal structures. It's a fundamental aspect of MRI technology and is essential for the diagnostic and research applications of MRI and fMRI.

• Hydrogen protons act like small magnets and align with an external magnetic field (B0).
• The precessional frequency is proportional to the magnetic field (described by the Larmor frequency).

5. BOLD in fMRI:
• Primary form of fMRI contrast.
• Maps neural activity by imaging changes in blood flow related to brain cell energy use.
• Deoxyhemoglobin is paramagnetic, affecting the magnetic susceptibility of blood.
• Sensitive to venous changes

6. Hemodynamic Response (HDR):


• The change in MR signal from neuronal activity.
• Lags neuronal events by 1-2 seconds, peaks about 5 seconds post-stimulus, and may show an undershoot
after activity stops.
• If the neurons keep firing (continuous stimulus), the peak spreads into a flat plateau while the neurons stay active.
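The delay-rise-peak-undershoot shape described above is often modeled with a double-gamma function. A sketch using common SPM-style parameters (peak near 5 s, undershoot near 15 s, undershoot ratio 1/6); the exact parameterization is a standard modeling convention, not something specified in these notes:

```python
import math

def hrf(t: float) -> float:
    """Canonical double-gamma haemodynamic response function.
    A textbook sketch (peak ~5 s, undershoot ~15 s), not the exact
    shape of any individual's response."""
    if t <= 0.0:
        return 0.0
    peak = t ** 5 * math.exp(-t) / math.gamma(6)          # gamma(shape=6)
    undershoot = t ** 15 * math.exp(-t) / math.gamma(16)  # gamma(shape=16)
    return peak - undershoot / 6.0

# The response lags the stimulus: small at 1 s, maximal near 5 s,
# and below baseline (the undershoot) around 12-15 s.
for t in (1.0, 5.0, 15.0):
    print(f"t = {t:4.1f} s  ->  {hrf(t):+.4f}")
```

Convolving a stimulus time course with this kernel is how the plateau for sustained stimulation arises: overlapping responses from continued firing sum into a flat top.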

7. Vascular Response to Activation:


• Oxygen metabolism leads to changes in the ratio of deoxyhemoglobin (dHb) and oxyhemoglobin (HbO2),
influencing the BOLD signal.

8. Sources of BOLD Signal:


• An indirect measure of neural activity via the hemodynamic response.
• BOLD response reflects local field potential activity.

Basic Physics of MRI


• RF pulse duration and strength determine the flip angle of the magnetic vector.
• The stages of magnetic resonance include RF energy retransmission, T1 recovery, and T2/T2* relaxation.

MRI Scanner Components


• Static Magnetic Field Coils
• Gradient Magnetic Field Coils
• Magnetic shim coils
• Radiofrequency Coil
• Subsystem control computer
• Data transfer and storage computers
• Physiological monitoring and recording hardware

9. Advantages of BOLD:
• Non-invasive, high spatial and temporal resolution, enables observation of entire brain networks.

10. Disadvantages of BOLD:


• Limited spatial and temporal resolution, not fully understood functional organization, difficulty in differentiating
between excitation/inhibition, and neuromodulation.
The term "surrogate signal of haemodynamic activity" in the context of functional MRI (fMRI) refers to the fact
that the BOLD (blood-oxygen-level dependent) signal used in fMRI serves as a proxy or substitute for directly
measuring the underlying neural activity. The BOLD signal does not measure neural firing directly but instead
reflects changes in blood flow and oxygenation levels in the brain that are associated with neural activity.
Here's a more detailed explanation:
Haemodynamic Activity:
• This refers to the various physiological changes that occur in the blood vessels of the brain in response
to neural activity. When a particular area of the brain is active, it requires more oxygen and nutrients,
leading to an increase in blood flow to that region.
Surrogate Signal:
• A surrogate signal is an indirect measure that stands in for a more direct but potentially more difficult or invasive measurement. In fMRI, the BOLD signal is a surrogate for the actual neural activity because it is easier to measure and non-invasive.
• The BOLD signal changes as a result of the presence of deoxyhemoglobin (dHb), which is paramagnetic and distorts the local magnetic field. When neural activity increases in a brain region, more oxygenated blood is delivered than is needed for immediate metabolic demands, leading to a decrease in the concentration of dHb and a corresponding change in the MRI signal.
Physical and Biological Constraints:
• The phrase "which has physical and biological constraints" refers to the limitations and factors that affect the reliability and interpretation of the BOLD signal as a measure of neural activity:
○ Physical Constraints: These include the spatial and temporal resolution limits of the MRI scanner, the sensitivity of the BOLD signal to magnetic field inhomogeneities, and the effects of physiological noise (e.g., from respiration and cardiac cycles) on the signal.
○ Biological Constraints: These involve the complex relationship between neural activity, blood flow, oxygen consumption, and the dynamics of the hemodynamic response. The BOLD signal is influenced by a variety of factors, including the balance between oxygen supply and demand, the reactivity of cerebral blood vessels, and the draining veins' efficiency.
In essence, the BOLD signal is a useful but imperfect proxy for neural activity. It is subject to constraints that can affect the accuracy and interpretation of the images produced by fMRI. Understanding these constraints is crucial for designing fMRI experiments and correctly interpreting the results.

11. BOLD and Magnetic Susceptibility:
• BOLD contrast changes due to differences in magnetic susceptibility between oxygenated and deoxygenated blood.
The relationship between hemoglobin, its oxygenation state, and its paramagnetic or diamagnetic properties is crucial for understanding the BOLD (blood-oxygen-level dependent) contrast used in functional MRI (fMRI). Here's how it works:
1. Diamagnetism:
○ Most substances, including biological tissues, are diamagnetic, which means they have little to no response to an external magnetic field. They are slightly repelled by the magnetic field and do not maintain any magnetism once the external field is removed.
2. Paramagnetism:
○ Paramagnetic substances, on the other hand, are attracted to a magnetic field and can induce a magnetic field of their own when placed within an external magnetic field. They retain their magnetism as long as the external field is present.
3. Oxyhemoglobin (HbO2):
○ Oxyhemoglobin, which carries oxygen, is diamagnetic. It does not significantly distort the magnetic field because of its lack of unpaired electrons. In the context of MRI, this means that the presence of oxyhemoglobin does not cause significant signal loss.
4. Deoxyhemoglobin (dHb):
○ Deoxyhemoglobin, which has released oxygen, is paramagnetic. It has unpaired electrons that make it susceptible to the magnetic field, causing it to distort the local magnetic field around blood vessels. This distortion affects the MRI signal in a way that can be detected and used to infer brain activity.
5. BOLD Contrast in fMRI:
○ When a region of the brain is active, it consumes more oxygen, leading to an increase in oxyhemoglobin and a decrease in deoxyhemoglobin. Since deoxyhemoglobin is paramagnetic and distorts the local magnetic field, its decrease results in a more uniform magnetic field and a relative signal increase in the MRI (a higher BOLD signal).
○ The BOLD signal change is not a direct measure of neural activity but an indirect measure based on the hemodynamic response (the change in blood flow and oxygenation in response to neural activity).

Common Uses of MRI
• Diagnosing and monitoring conditions like tumors, heart problems, blood vessel blockages, liver diseases, intestinal diseases, etc.

Disadvantages of MRI
• Precautions with metallic objects
• Incompatibility with certain medical devices like pacemakers
• Claustrophobia issues
• Noise
• Size limitations for patients
• Long scanning times
• High cost

MRI Safety Risks and Contraindications
• Risks associated with magnetic fields and RF energy
• Contraindications including implanted medical devices and electronic or magnetized foreign bodies
Metallic objects must be removed from the room when an MRI scanner is operating due to several safety and technical reasons:
1. Strong Magnetic Field: MRI machines use powerful static magnetic fields, which can be tens of thousands of times stronger than the Earth's magnetic field. This strong magnetic field can attract and move metallic objects, potentially causing injury to patients, staff, or damage to the equipment.
2. Projectile Effect: If ferromagnetic objects (those that are strongly attracted to magnets) are brought into the MRI room, they can become dangerous projectiles that are pulled toward the magnet with considerable force. This can lead to accidents and damage.
3. Image Artifacts: Metal objects can cause significant distortion and artifacts in the MRI images, reducing the diagnostic quality. These artifacts can make it difficult or impossible to accurately interpret the images.
4. Altered Magnetic Field: The presence of metal can alter the homogeneity of the magnetic field, which is critical for accurate imaging. An uneven magnetic field can lead to incorrect image intensities and spatial distortions.
5. Heat Generation: When radiofrequency (RF) pulses are applied during an MRI scan, metal objects can absorb energy and potentially heat up. This could cause burns or discomfort to the patient if the metal is in contact with their skin.
6. Interference with Devices: Some metallic objects, such as electronic devices or credit cards with magnetic strips, can be damaged by the strong magnetic field. Additionally, the MRI can interfere with the operation of these devices.
7. Pacemaker and Medical Implants: Patients with certain types of metallic implants, such as pacemakers or cochlear implants, are at risk of device malfunction or movement of the implant during an MRI scan. Special precautions or alternative imaging methods are required for these individuals.
8. Fire Hazard: In extremely rare cases, the attraction of metallic objects in the presence of the MRI's magnetic field can lead to a fire if the object being attracted is part of the equipment or if it damages electrical wiring or gas lines.
9. RF Coil Damage: The MRI's RF coils, which are used to transmit and receive signals, can be damaged or their performance can be degraded by the presence of metal.

Tissue Differentiation and Relaxation Types
• T1 and T2 relaxation times are crucial for tissue contrast in MRI.
• T1-weighted images are useful for differentiating fat from water, while T2-weighted images are good for imaging edema.
In the context of Magnetic Resonance Imaging (MRI), T1 and T2 refer to two different types of relaxation times.
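The exponential curves behind T1- and T2-weighting are Mz(t) = M0(1 − e^(−t/T1)) for longitudinal recovery and Mxy(t) = M0 · e^(−t/T2) for transverse decay. A sketch with illustrative (not exact) tissue time constants, showing why fat and CSF separate on a T1-weighted image:

```python
import math

def mz_recovery(t_ms: float, t1_ms: float) -> float:
    """Longitudinal (T1) recovery toward equilibrium, with M0 = 1."""
    return 1.0 - math.exp(-t_ms / t1_ms)

def mxy_decay(t_ms: float, t2_ms: float) -> float:
    """Transverse (T2) decay of the detectable signal, with M0 = 1."""
    return math.exp(-t_ms / t2_ms)

# Illustrative constants: fat has a short T1 (recovers quickly),
# water/CSF a long T1 (recovers slowly). This difference is the
# source of T1-weighted fat/water contrast.
t = 500.0  # ms after the RF pulse
print("fat :", round(mz_recovery(t, 260.0), 3))   # mostly recovered
print("CSF :", round(mz_recovery(t, 3000.0), 3))  # barely recovered
```

Sampling at a short repetition time catches fat near equilibrium but CSF far from it, so fat appears bright and fluid dark; T2-weighting exploits the analogous spread in T2 decay rates.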



secondary antibody will emit light when excited by the appropriate wavelength, allowing the visualization of the antigen.
11. Analysis:
○ The resulting fluorescence image is analyzed to determine the location and distribution of the antigen within the cells or tissues.
• Multi-color labeling to differentiate biomolecules using different fluorophores.

○ Brightfield and Phase Contrast Microscopy: These lamps are suitable for conventional microscopy techniques like brightfield and phase contrast, which do not require a specific wavelength.
○ Fluorescence Microscopy: Some fluorescence microscopy applications use broad-spectrum lamps to excite a range of fluorophores, although more specific light sources are often preferred for this purpose.
○ Illumination for General Observation: They provide a broad and even light source for general observation and documentation.
Summary
• He-Ne lasers are preferred for applications that require a specific, monochromatic light source
with high coherence and collimation, such as confocal microscopy and certain types of
spectroscopy.
• Tungsten/fluorescent lamps are more versatile and are used in a broader range of applications,
including general microscopy techniques that do not require a specific wavelength of light.

8. Geometrical Optics
Refraction: Law of refraction and the concept of the refractive index.
The law of refraction, also known as Snell's Law, and the concept of the refractive index are fundamental
to the understanding of how light behaves when it passes from one medium to another, such as from air
into water or glass. Here's a detailed explanation of both:
Law of Refraction (Snell's Law)
1. Description: The law of refraction states that when a light wave passes from one medium to
another, there is a change in its speed, resulting in a change in direction (refraction) unless the
light is incident perpendicularly (normal incidence) to the boundary between the two media.
2. Mathematical Expression: The law is mathematically expressed as n1·sin(θ1) = n2·sin(θ2), where:
○ n1 and n2 are the refractive indices of the first and second media, respectively.
○ θ1 is the angle of incidence (the angle between the incident ray and the normal to the surface).
○ θ2 is the angle of refraction (the angle between the refracted ray and the normal).
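In code, the refraction angle follows directly from the expression above (a small sketch; it also flags total internal reflection, which occurs when sin(θ2) would have to exceed 1):

```python
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float) -> float:
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Returns theta2 in degrees.
    Raises ValueError beyond the critical angle (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33): the ray bends towards the normal,
# so the refracted angle is smaller than the 30-degree incident angle.
print(round(refraction_angle(1.00, 1.33, 30.0), 1))  # ~22.1 degrees
```

Running the same call with the media swapped (water into air) bends the ray away from the normal, matching the consequences described below.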
3. Consequences: When light passes from a medium with a lower refractive index to one with a
higher refractive index, it slows down and bends towards the normal. Conversely, when it passes Objective Specifications
from a medium with a higher refractive index to one with a lower refractive index, it speeds up • Additional Specs: Tube length, working distance (WD), cover glass thickness,
and bends away from the normal. and immersion medium.
Refractive Index • Usage Caution: Never apply oil to a dry lens.
1. Definition: The refractive index (denoted as n) of a medium is a measure of how much the speed
of light is reduced inside the medium as compared to the speed of light in a vacuum (denoted as Numerical Aperture of Condenser
c). • Role: Controls the illumination NA, which affects image brightness and resolution.
2. Relationship with Speed of Light: The refractive index is defined as the ratio of the speed of light • Adjustment: Through the condenser aperture.
in a vacuum to the speed of light in the medium: n = c/v, where c is the speed of light in a vacuum and v is the speed of light in the medium.
3. Physical Meaning: A higher refractive index indicates that light travels more slowly in the medium Bright-Field Microscopy
and is more "refracted" or bent when entering it. • Process: Uses bright light to illuminate samples, primarily relying on light
4. Values: The refractive index is a dimensionless number. For example: absorption for visualization.
○ Air: approximately 1.00 • Applications: Common in histological sample staining.
○ Water: approximately 1.33 • Sample Preparation: Involves fixation, embedding and sectioning, and staining.
○ Glycerol: approximately 1.47 1. Fixation:
○ Immersion Oil: approximately 1.52 ○ Purpose: To immobilize, kill, and preserve the cells.
○ Glass: approximately 1.52 ○ Method: Fixatives such as formaldehyde are commonly used to
5. Dispersion: The refractive index often varies with the wavelength of light, leading to dispersion, prevent decay and maintain the structure of the cells.
where different colors (wavelengths) of light are refracted by different amounts, causing them to 2. Embedding and Sectioning:
spread out. ○ Dehydration: After fixation, the tissue is dehydrated to remove water,
which prepares it for embedding.
6. Applications: The concept of the refractive index is crucial in various fields, including optics, where ○ Embedding: The dehydrated tissue is embedded in a supporting
lenses are designed to focus light by bending the rays according to their refractive indices. medium, typically hot liquid wax or resin.
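Snell's law can be checked numerically against the refractive indices tabulated above. A minimal sketch (the 30° and 60° incidence angles are illustrative):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law, n1*sin(theta1) = n2*sin(theta2). Returns the refraction
    angle in degrees, or None under total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.00) into glass (n = 1.52): the ray bends towards the normal.
print(round(refraction_angle(1.00, 1.52, 30.0), 1))  # 19.2
# Glass into air at 60 degrees (beyond the critical angle): no refracted ray.
print(refraction_angle(1.52, 1.00, 60.0))  # None
```

Note that the two cases match the "Consequences" point above: entering the denser medium, the 30° ray bends towards the normal (19.2° < 30°); leaving it at a steep angle, refraction fails entirely.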
In summary, the law of refraction describes how light changes direction when passing from one medium ○ Solidification: The embedding medium solidifies, providing mechanical
to another, and the refractive index is a measure of how much the speed of light is reduced in a strength to the tissue.
medium, which directly affects the degree of refraction. Together, these concepts explain the behavior ○ Sectioning: The embedded tissue is cut into thin sections (5 - 15 µm 1. Signal:
of light in different optical environments and are essential for designing optical instruments and thick) using a microtome, a precision instrument designed for slicing ▪ In microscopy, the signal is the light emitted or scattered by the specimen that is being imaged,
biological specimens. particularly when it has been labeled with a fluorescent dye or is naturally fluorescent. For instance, in
understanding vision and various physical phenomena. fluorescence microscopy, the signal is the specific light emitted by the fluorophores that are bound to
3. Staining:
○ Purpose: To reveal cellular and subcellular structures by contrasting the target molecules within the sample.
different components within the cells. 2. Noise:
○ Rehydration: The sections are rehydrated to prepare them for staining. ▪ Noise is the random variation of the image intensity that is not related to the signal. It can arise from
○ Staining Techniques: Various staining methods are used, including: various sources, such as electronic noise from the detector, stray light within the microscope,
▪ Hematoxylin and eosin (H&E) staining, which is the most autofluorescence from the sample itself, or even thermal fluctuations.
common technique in histology. Hematoxylin tends to stain cell
nuclei blue-black, while eosin stains the cytoplasm and cell A high signal-to-noise ratio means that the signal is much stronger than the noise, making it easier to detect and
membranes pink. analyze the features of interest within the sample. This is particularly important in fluorescence microscopy, where
○ Mechanism: The molecular mechanism of staining is not entirely clear the signal (fluorescence) can be quite weak compared to the background noise.
but is thought to involve an affinity of certain dyes for specific cellular
components, such as negatively charged molecules for hematoxylin. 7. Wide-field Epi-Fluorescence Microscopy
4. Mounting: • Explains the difference between episcop and diascopic illumination.
○ The stained sections are mounted on a glass slide using an
appropriate mounting medium to secure them in place.
○ A cover slip is placed over the mounted section to protect it and
reduce the risk of damage or contamination.
5. Additional Considerations:
○ Cover Glass Thickness: The thickness of the cover glass (usually
0.17 mm) must be consistent to ensure even contact with the mounting
medium and to facilitate focusing.
○ Immersion Medium: If high magnification objectives are used, an
immersion medium such as oil may be required to improve image
quality by reducing spherical aberration. ○
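The signal-to-noise discussion above can be made concrete with a small sketch. It assumes shot-noise-limited detection, where photon noise scales as the square root of the photon count (a standard Poisson-statistics result, not stated in these notes):

```python
import math

def snr(signal_mean, noise_sd):
    """Signal-to-noise ratio: mean signal over the standard deviation
    of the background and detection noise."""
    return signal_mean / noise_sd

# Under shot-noise-limited detection, noise ~ sqrt(N), so SNR grows as
# sqrt(N): collecting 100x more photons improves SNR only 10x.
for n_photons in (100, 10_000):
    print(n_photons, snr(n_photons, math.sqrt(n_photons)))
```

This is why weak fluorescence signals need longer exposures or brighter labels: halving the noise requires quadrupling the collected light.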
Lens: Ideal lens, optical axis, focal points, and focal lengths. 6. Microscope Slide Preparation:
Real vs. Virtual Images: Differences, detection methods, and applications. ○ The slides are labeled with relevant information, such as the type of
Real and virtual images are two types of images formed by optical systems, such as lenses and mirrors. tissue, the date of preparation, and any special staining techniques
They have distinct properties and are used in various applications due to their unique advantages. used.
Real Images 7. Storage:
○ Prepared slides are stored in a controlled environment to prevent
Definition:
damage and maintain the quality of the specimen.
• A real image is formed when light rays actually converge to form an image. This happens when 8. Examination:
light rays emitted from an object are refracted by a lens or reflected by a mirror in such a way that ○ Once prepared, the slides are ready to be examined under the
they meet at a point. microscope, where the bright-field technique relies on the absorption
Characteristics: of certain wavelengths of light to visualize the structure of the stained Episcopic Illumination:
• Real images are inverted relative to the object. sample. ○ In episcopic illumination, also known as "epi-illumination," the light source is placed above the
• They can be projected onto a screen because the light rays physically intersect. objective lens, and the light is directed onto the specimen through the objective itself. This means that
Advantages: Phase Contrast Microscopy (A variation of bright field
• Can be displayed on a screen for viewing by multiple people. ○ Episcopic illumination is particularly used in fluorescence microscopy, where the same objective that
microscopy) is used to focus the light for excitation of the fluorophores is also used to collect the emitted
• Can be captured permanently on photographic film or a digital sensor.
• Advantages: No need for cell staining, allows observation of live cells. fluorescence.
• Can be manipulated using additional optical components to alter size, orientation, or quality.
• Requirements: Special objectives and phase rings. The objective phase ring ○ The advantage of this method is that it allows for high-intensity illumination of the specimen, which is
Applications: must match the condenser phase ring. necessary for the detection of weakly fluorescent signals.
• Projectors use real images to display content on a screen. • Principle: Utilizes differences in refractive indices of cellular organelles to create ○ Episcopic illumination also helps to minimize the background light, as the objective can be designed to
• Cameras produce real images on film or a sensor to capture photographs. phase shifts, translating into intensity differences. transmit the fluorescence emission while blocking the excitation light.
• Telescopes and microscopes use real images for detailed observation and analysis. Diascopic Illumination:
○ Diascopic illumination, also known as "transmitted light" or "normal" illumination, involves a light
source that is placed below the specimen. The light passes through the specimen and is then
collected by the objective lens.
○ This is the traditional method of illumination used in bright-field microscopy, where the specimen is
visualized based on the absorption or scattering of light as it passes through the sample.
○ The advantage of diascopic illumination is that it provides a general view of the entire specimen and is
useful for observing the overall morphology and structure.
○ However, diascopic illumination is not ideal for fluorescence microscopy because the emitted
fluorescence signal is weaker and can be overwhelmed by the strong excitation light if not properly
separated.
• Discusses the role of filters in reducing background noise.

8. How Filters Work in Epi-Fluorescence Microscopy


• Function of excitation, dichroic mirror, and emission filters.
• The critical wavelength and its impact on fluorescence detection.
1. Excitation Filter:
○ The excitation filter is designed to only allow light of a specific wavelength range to pass through,
which corresponds to the excitation wavelength of the fluorophore being used.
○ Its role is to ensure that only the light necessary to excite the fluorophores reaches the sample, while
Virtual Images filtering out other wavelengths that could cause unwanted activation or contribute to background
Definition: noise.
○ By focusing the excitation light, it increases the efficiency of the fluorescence process and reduces the
• A virtual image is formed when light rays diverge but appear to come from a common point when
potential for photobleaching or phototoxicity to the sample.
extended backward. This occurs, for example, with the image formed by a magnifying glass or in the 2. Dichroic Mirror (Beam Splitter):
eyepiece of a microscope. ○ The dichroic mirror is a specialized type of mirror that reflects light of one wavelength range while
Characteristics: transmitting light of a different wavelength range.
• Virtual images are upright and the same orientation as the object. ○ In fluorescence microscopy, it is positioned at a 45-degree angle to the optical path.
• They cannot be projected onto a screen since the light rays do not physically intersect. ○ The dichroic mirror reflects the excitation light towards the sample and then transmits the emitted
Advantages: fluorescence light towards the detector. This dual functionality allows the same objective lens to be
• Can be larger than the object, which is useful for magnification. used for both illumination and detection.
• Do not require a screen for viewing, making them suitable for direct observation. ○ It acts as a barrier between the excitation and emission light, ensuring that the excitation light does
not reach the detector and contaminate the fluorescence signal.
• Can be produced in a compact optical system, as no physical convergence is needed.
3. Emission Filter:
Applications: ○ The emission filter allows light of the specific emission wavelength range of the fluorophore to pass
• Magnifying glasses and reading glasses use virtual images to enlarge text for easier reading. through while blocking the remaining wavelengths, including the reflected excitation light (scattered
• Telescopes and microscopes can produce virtual images for direct viewing. excitation light).
• Endoscopes and other medical instruments use virtual images to view internal structures without
Microscopy Coordinate System (xyz) ○ Its primary role is to eliminate the background noise from the excitation light that may be scattered by
• Convention: xy plane is at the sample plane, z axis is along the optical axis. the sample and any other unwanted light that could interfere with the detection of the true
invasive procedures.
fluorescence signal.
Image Formation and Point Spread Function (PSF) ○ By filtering out this scattered light, the emission filter enhances the contrast and clarity of the
• Concept: Image formation is a point-by-point process involving the diffraction of fluorescence image, making it easier to visualize and analyze the specific labeling within the sample.
light.
• PSF: Represents the 3D image of a point light source, often approximated by the
Airy disk pattern.
Diffraction of light refers to the bending or spreading of light waves when they
encounter an obstacle or pass through an opening. This phenomenon is a consequence of the wave nature of light (light also behaves as a stream of particles called photons, but diffraction is a wave effect). Diffraction is a fundamental property of all types of waves, including
sound waves, water waves, and electromagnetic waves like light.
Here are some key points about the diffraction of light:
1. Wave Nature: Diffraction demonstrates the wave nature of light. When light
waves encounter an obstacle or slit that is comparable in size to their
wavelength, they spread out or diffract.
2. Apertures and Slits: Diffraction can occur when light passes through a
small aperture (hole) or by the edges of an obstacle. The effect is more
pronounced when the size of the aperture or obstacle is similar to or smaller
than the wavelength of the light.
3. Central Maximum: When light passes through a slit, it creates a pattern
with a central bright region known as the central maximum, which is brighter
and wider than the light source itself.
4. Fringes: On either side of the central maximum, there are alternating dark
and light fringes known as the first minimum and subsequent maxima and
minima. These fringes occur due to constructive and destructive interference
of the light waves.
5. Constructive and Destructive Interference: Constructive interference
occurs when light waves are in phase and reinforce each other, leading to
bright fringes. Destructive interference happens when waves are out of
phase and cancel each other out, resulting in dark fringes.
6. Resolution Limit: Diffraction also affects the resolution of optical
9. Components and Features of an Epi -Fluorescence Microscope
instruments like microscopes and telescopes. The ability to distinguish
Combined Advantages and Applications between two closely spaced points (resolution) is limited by the wavelength • Lamphouse, filter turret, filter cube, and their roles.
of the light used and the size of the aperture or objective lens. Here's a description of the lamphouse, filter turret, and filter cube, and their roles in epi-fluorescence
Both real and virtual images have their own set of advantages and are chosen based on the specific
microscopy:
requirements of the application: 7. Airy Disk: In microscopy, the point spread function (PSF), which is the
image of a point source of light, is often represented by the Airy pattern. The 1. Lamphouse (Light Source):
• Education and Presentations: Projectors create real images on screens for educational purposes ○ The lamphouse contains the light source used to illuminate the specimen. In modern
central bright area of this pattern is known as the Airy disk, and it is
and business presentations. microscopes, this can be a high-intensity lamp, such as a mercury or xenon arc lamp, or a light-
surrounded by a series of concentric rings, each dimmer than the last.
• Photography and Videography: Cameras capture real images to produce photographs and videos. emitting diode (LED) source.
8. Wavelength Dependence: The extent of diffraction is directly proportional
• Scientific Research: Microscopes and telescopes use real images for detailed observation and to the wavelength of the light. Shorter wavelengths (like blue and violet light) ○ The lamphouse is designed to provide stable and intense illumination for fluorescence
analysis in various scientific fields. diffract less than longer wavelengths (like red and yellow light). microscopy. It may include a mechanism to adjust the intensity of the light, which is important for
• Medical Diagnostics: Endoscopes and similar instruments use virtual images to allow doctors to controlling the exposure of the sample and the detector to the excitation light.
9. Applications: Diffraction is used in various applications, including the
Mercury lamps, which include mercury-vapor and mercury-arc lamps, have been
view internal body structures without surgery. development of optical devices like diffraction gratings, which spread light
traditionally used in various applications, including as a light source in fluorescence
• Reading Aids: Magnifying glasses and reading glasses create virtual images to help individuals with into its component colors, and in the study of the structure of materials using
microscopy. Despite their utility, there are several limitations associated with the use of
vision impairments. techniques like X-ray diffraction.
mercury lamps:
In summary, real images are useful when a permanent record or projection is needed, while virtual 1. Environmental Impact: Mercury is a toxic substance, and lamps containing
images are ideal for direct viewing and magnification without the need for projection. Each type of mercury pose environmental risks due to potential mercury contamination if not
image serves different purposes and is chosen based on the requirements of the specific application or disposed of properly.
task at hand. 2. Limited Lifespan: Mercury lamps have a finite lifespan and will degrade over time,
leading to a decrease in light output and the need for replacement.
3. Energy Inefficiency: Mercury lamps are relatively inefficient in converting electrical
energy into light, with much of the energy being emitted as heat, which requires
additional cooling mechanisms.
4. Spectral Line Emission: They emit light primarily at specific spectral lines rather
than a continuous spectrum, which can limit their use for certain applications
requiring specific excitation wavelengths.
5. UV Radiation Hazard: Mercury lamps emit significant amounts of ultraviolet (UV)
light, which can be harmful to living samples and require additional filtering to protect
both the sample and the observer.
6. Heat Generation: The lamps produce a lot of heat, which can affect the stability of
the microscope optics and the sample, potentially leading to increased
photobleaching and phototoxicity.
7. Size and Weight: Traditional mercury lamps are often larger and heavier than
alternative light sources, which can be a disadvantage for space-constrained
applications or portability.
8. Warm-Up Time: Mercury lamps require a warm-up period before they can be used
effectively, which can be inconvenient for time-sensitive experiments.
9. Instability: The light output of mercury lamps can be unstable, particularly when first
turned on or as they age, which can affect the consistency of experimental results.
10. Cost: The cost of ownership for mercury lamps can be higher due to the need for
regular replacement and the additional equipment required to manage heat and UV
emissions.
Due to these limitations, mercury lamps are being increasingly replaced by more modern
light sources such as light-emitting diodes (LEDs), which offer advantages like a
continuous spectrum, lower heat generation, longer lifespan, and environmental safety.

2. Filter Turret:
9. Compound Microscope Systems
○ The filter turret is a rotating wheel or slide mechanism that holds multiple filter sets. Each filter
Finite vs. Infinity Systems: Differences, advantages, and applications. set typically consists of an excitation filter, a dichroic mirror, and an emission filter.
In the context of optical microscopy, finite and infinite systems refer to two different types of optical ○ By rotating the turret, different filter sets can be brought into the optical path to select the
configurations used in microscopes, particularly concerning the way light is managed between the appropriate wavelengths for exciting and detecting different fluorophores.
objective lens and the eyepiece or tube lens. ○ The turret allows for quick and easy switching between different fluorescence channels without
the need for manual filter changes, which can be time -consuming and may risk contamination.
3. Filter Cube:
○ The filter cube is a complete assembly that includes the excitation filter, the dichroic mirror, and
the emission filter, all housed together in a single unit.
○ The filter cube is designed to match the specific excitation and emission characteristics of a
particular fluorophore. It ensures that only the light necessary to excite the fluorophore passes
through the excitation filter, the emitted light is transmitted by the dichroic mirror, and the
unwanted light is blocked by the emission filter.
○ Using a filter cube with the correct specifications is crucial for obtaining a high-quality
fluorescence signal and minimizing background noise.
• High numerical aperture objective and alignment ease.
• Considerations for sample mounting and illumination control.

Finite System
1. Description:
○ A finite optical system is one in which the objective lens forms a real image at a specific
distance (finite distance) from the lens itself.
2. Components:



Here's how a PMT works and its key features: 4. Mitigation Strategies:
○ Minimize Light Exposure: Use the lowest light intensity necessary to obtain
Working Principle: a good image.
1. Photoelectric Effect: When light (photons) strikes the photocathode, which is a light-sensitive material, it can eject electrons through the photoelectric effect.
○ Optimize Exposure Time: Shorter exposure times reduce the chance of photobleaching.
2. Amplification: The emitted electrons are accelerated towards a series of electrodes ○ Use Anti-Fade Reagents: These can help to protect the fluorescent
Mitotic ER is continuous *It discusses the default exit of secretory cargos before reaching t
called dynodes by an electric field. Each dynode is at a higher voltage than the previous molecules from degradation.
one. • FLIP (Fluorescence Loss In Photobleaching): Compared to FRAP, used to demonstrate the continuous connection of to the TGN being signal-dependent, which adds a layer of regulat
cellular structures. ○ Cooling the Specimen: Lower temperatures can slow down the rate of
3. Secondary Electron Emission : As the electrons strike each dynode, they cause the photobleaching. *The use of molecular markers (like VSVG, E -cadherin, TNFα, C
emission of multiple secondary electrons. This process is repeated at each dynode, ○ Use of Photo-Stable Dyes: Some fluorescent dyes are more resistant to protein trafficking within the Golgi could be a focus of the lecture
leading to a cascade effect that significantly amplifies the original signal. Dynodes become photobleaching than others. different cargo molecules are sorted and transported.
more and more positively charged. 5. Non-Fluorescent Samples: If the time-lapse imaging does not involve
4. Anode Collection: The final stage collects the amplified electrons at the anode, which is Epilogue
fluorescence, photobleaching is not a concern.
connected to an electronic circuit that measures the current. • Optical Microscopy: Capable of high-resolution and quantitat
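The dynode cascade described above implies a simple gain model: if each dynode emits δ secondary electrons per incident electron, then n dynodes give an overall gain of δⁿ. A sketch (the values δ = 4 and n = 10 are illustrative, not from these notes):

```python
def pmt_gain(delta, n_dynodes):
    """Overall PMT gain: each dynode multiplies the electron count by its
    secondary-emission ratio delta, so n dynodes give delta ** n."""
    return delta ** n_dynodes

# 10 dynodes emitting ~4 secondary electrons per incident electron yield
# a gain of about 10^6, within the "few thousand to over 10^7" range
# quoted for PMTs.
print(pmt_gain(4, 10))  # 1048576
```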
6. Adaptive Techniques: Some modern microscopes and imaging systems are
• Field Dynamics: Continuous advancements in instrument des
Key Features: equipped with adaptive features that can adjust the light intensity dynamically to
methods.
• High Gain: PMTs can amplify weak light signals to a level that is measurable by minimize photobleaching.
standard electronic equipment. Gains can range from a few thousand to over 10^7, In summary, time-lapse imaging is a powerful tool for visualizing changes over time, but
depending on the design and application. it can be affected by photobleaching, especially in fluorescence microscopy. Careful
planning and use of appropriate techniques can help to minimize the impact of
• Fast Response: PMTs have a very fast response time, which makes them suitable for
photobleaching and ensure high-quality results.
detecting short light pulses or rapidly changing light signals.
• Low Light Detection: They are extremely sensitive and can detect single photons, 11. Image Storage Formats
making them ideal for applications where light levels are very low.
Mitotic ER is continuous • TIFF: No compression, preserves intensity values, stores metadata.
• Linearity: PMTs typically have a linear response to light intensity, which means that the
output signal is directly proportional to the input light over a wide range. • FRET (Förster Resonance Energy Transfer): A method for real-time monitoring of distances or interactions between • JPEG: Lossy compression, smaller file size, not recommended for scientific images due
two molecules that are in close proximity (typically 1–10 nm); the transfer is a nonradiative process. to information loss. ARTIFACTS!
• Wide Spectral Range: PMTs can be made sensitive to a wide range of wavelengths by
using different photocathode materials or windows. 12. Digital Imaging as a Sampling Process
• Noise: Like all electronic devices, PMTs also generate noise. However, their design aims • Sampling involves measuring intensity at discrete positions.
to minimize this noise, particularly the dark current and thermal noise.
• Magnification controls the sampling interval (pixel size).
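The link between magnification and sampling interval can be sketched as follows. The camera pixel size (6.5 µm), magnification (60×), NA (1.4), and wavelength (520 nm) are illustrative values, and the 0.61λ/NA Rayleigh formula is the commonly used resolution criterion, not derived in these notes:

```python
def sample_pixel_nm(camera_pixel_um, magnification):
    """Camera pixel size projected back to the sample plane, in nm."""
    return camera_pixel_um * 1000 / magnification

def rayleigh_resolution_nm(wavelength_nm, na):
    """Rayleigh resolution limit, d = 0.61 * wavelength / NA."""
    return 0.61 * wavelength_nm / na

# 6.5 um camera pixels behind a 60x objective give ~108 nm per pixel at
# the sample plane, which satisfies the Nyquist condition (pixel <= d/2)
# for a 1.4 NA objective imaging at 520 nm (d/2 ~ 113 nm).
px = sample_pixel_nm(6.5, 60)
d = rayleigh_resolution_nm(520, 1.4)
assert px <= d / 2
print(round(px, 1), round(d / 2, 1))
```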
Applications:
• Fluorescence and Luminescence Detection: PMTs are used in fluorescence
microscopy to detect the light emitted by fluorophores.
• Analytical Instruments: They are used in devices like mass spectrometers and particle
detectors for detecting ionizing radiation.
• Medical Imaging: PMTs are used in imaging technologies like positron emission
tomography (PET) to detect gamma rays. 13. Nyquist Sampling Theorem
• Astrophysics and Astronomy: They are used in telescopes to detect light from distant • To utilize full microscope resolution, images must be sampled at the Nyquist rate, which
celestial objects. is less than half the resolution distance.
• Bioluminescence Detection: PMTs are used to measure the light emitted by The Nyquist sampling theorem, also known as the Nyquist–Shannon sampling theorem, is a
bioluminescent organisms or reactions. fundamental principle in the field of signal processing and information theory. It states the
minimum rate at which a signal should be sampled to avoid aliasing—when a rapidly changing
signal is sampled too slowly, resulting in a lower frequency version of the signal that does not
accurately represent the original.
Here are the key points of the Nyquist sampling theorem:
1. Sampling Rate: The theorem specifies that the sampling rate, or the number of
1. Requirements for FRET: samples per second, must be at least twice the maximum frequency component of
▪ The donor molecule must be able to fluoresce when excited. the signal being sampled. This minimum rate is known as the Nyquist rate.
▪ The acceptor molecule must have an absorption spectrum that overlaps with the donor's emission spectrum. 2. Avoiding Aliasing: If the signal is sampled at a rate higher than the Nyquist rate,
▪ The relative orientation of the donor and acceptor transition dipoles should be favorable (typically parallel) for the original signal can be perfectly reconstructed from the samples without any loss
efficient energy transfer. of information. Sampling at a rate lower than the Nyquist rate can lead to aliasing.
2. Energy Transfer Mechanism: The energy transfer occurs through a dipole–dipole coupling mechanism, which is a 3. Frequency Components: The theorem applies to band-limited signals, which are
through-space interaction that does not require physical contact between the two molecules. signals that contain no frequency components above a certain maximum
frequency.
• FLIM (Fluorescence-Lifetime Imaging Microscopy): Measures the fluorescence lifetime, which is sensitive to FRET 4. Reconstruction: According to the theorem, any continuous-time signal can be
and can be used to determine interactions between molecules. reconstructed from its samples if the samples are taken at the Nyquist rate or
Comparison: Confocal vs. Wide-Field Microscopy faster, and if the signal is band-limited.
FLIM, which stands for Fluorescence Lifetime Imaging Microscopy, is a specialized microscopy technique that measures 5. Practical Implications: In practice, to ensure that a signal can be perfectly
Confocal Pros: Removes out-of-focus light, optical sectioning, and precise manipulation.
the average time a fluorophore spends in the excited state before returning to the ground state by emitting a photon. This Nyquist rate. This is known as oversampling.
Confocal Cons: Slower for large samples, risk of photobleaching, and lower light efficiency. time is known as the fluorescence lifetime (τ, or tau), and it is typically on the order of nanoseconds. Nyquist rate. This is known as oversampling.
Here's how FLIM works and why it's useful: 6. Applications: The Nyquist theorem is used in various domains, including
Objective-Based TIRFM (Total Internal Reflection Fluorescence Microscopy) 1. Fluorescence Lifetime: The fluorescence lifetime is an intrinsic property of a fluorophore that is independent of its telecommunications, seismology, audio and video technology, and medical
concentration and the intensity of the excitation light. It is affected by the molecular environment of the fluorophore. imaging.
2. Lifetime Measurement: In FLIM, the fluorescence lifetime is measured for each pixel in an image. This is typically 7. Temporal and Spatial Sampling: The concept applies not only to time-varying
done by exciting the fluorophores with a short pulse of light and then measuring the time it takes for the signals (temporal sampling) but also to spatial sampling, such as in imaging where
fluorescence to decay. the resolution is determined by the sampling rate (e.g., pixels per unit length).
3. Imaging: FLIM generates an image where each pixel's color or intensity represents the average fluorescence 8. Nyquist Frequency: The actual frequency that corresponds to the Nyquist rate is
lifetime of the fluorophores at that location. called the Nyquist frequency, which is half of the sampling frequency.
4. Sensitivity to Environment: Changes in the fluorophore's environment, such as pH, ion concentration, or the 9. Digital Audio: A common example is digital audio, where the sampling rate for
presence of a quencher, can alter the fluorescence lifetime. This makes FLIM a sensitive tool for studying these CDs is 44.1 kHz, which is more than twice the highest frequency (20 kHz) that the
parameters. average human can hear, thus adhering to the Nyquist theorem.
5. FRET and FLIM: FLIM is particularly useful for detecting Förster Resonance Energy Transfer (FRET). When FRET 10. Limitations: The theorem assumes an idealized scenario. In real-world
occurs between a donor and an acceptor fluorophore, the fluorescence lifetime of the donor is shortened. By applications, signals are often not perfectly band-limited, and anti-aliasing filters are
measuring changes in the donor's lifetime, researchers can infer the presence and proximity of the acceptor. used to approximate band-limited behavior before sampling.
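The aliasing described by the Nyquist theorem is easy to demonstrate numerically. In this sketch, a 9 Hz sine sampled at only 10 Hz (below its 18 Hz Nyquist rate) produces samples identical, up to a sign flip, to those of a 1 Hz sine sampled at the same rate:

```python
import math

def sample(freq_hz, rate_hz, n):
    """Sample sin(2*pi*f*t) at n evenly spaced times t = k / rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz) for k in range(n)]

# A 9 Hz sine needs a sampling rate of at least 18 Hz. Sampled at 10 Hz,
# its samples equal those of a 1 Hz sine with inverted sign: the 9 Hz
# signal aliases to 1 Hz and cannot be told apart from it.
fast = sample(9.0, 10.0, 20)
alias = sample(1.0, 10.0, 20)
assert all(abs(f + a) < 1e-9 for f, a in zip(fast, alias))
```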
6. Applications:
○ FLIM is used to study protein-protein interactions and molecular complexes in cells.
○ It can be used to investigate the dynamics of cellular processes, such as signaling pathways.
○ FLIM can also be used to differentiate between different fluorophores with overlapping emission spectra but
different lifetimes.
7. Advantages:
○ Unlike traditional fluorescence microscopy, which is affected by concentration and photobleaching, FLIM
provides information that is independent of these factors.
○ FLIM can provide more detailed information about molecular interactions and the local environment of
fluorophores.
8. Technical Requirements: FLIM requires specialized equipment capable of time-resolved detection, such as single-
photon avalanche diode (SPAD) cameras or time-correlated single-photon counting (TCSPC) systems.
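The per-pixel lifetime estimate in FLIM can be sketched as a fit to a mono-exponential decay. This is a simplified illustration; real FLIM analysis must also handle the instrument response function and photon-counting noise:

```python
import math

def estimate_lifetime_ns(times_ns, counts):
    """Estimate the lifetime tau of a mono-exponential decay
    I(t) = I0 * exp(-t / tau) by least-squares fitting ln(I) against t;
    the fitted slope equals -1 / tau."""
    ys = [math.log(c) for c in counts]
    n = len(times_ns)
    mx = sum(times_ns) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times_ns, ys)) \
        / sum((x - mx) ** 2 for x in times_ns)
    return -1 / slope

# A noise-free decay with tau = 2.5 ns is recovered exactly:
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
cs = [1000 * math.exp(-t / 2.5) for t in ts]
print(round(estimate_lifetime_ns(ts, cs), 3))  # 2.5
```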
Total Internal Reflection: Occurs when light travels from a medium with a higher refractive index to
one with a lower refractive index at an angle greater than the critical angle.
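The critical angle follows directly from Snell's law by setting the refraction angle to 90°. A sketch, using the glass and water indices from the refractive-index values given earlier:

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle from Snell's law with the refraction angle at
    90 degrees: sin(theta_c) = n2 / n1, which requires n1 > n2."""
    return math.degrees(math.asin(n2 / n1))

# Glass (n = 1.52) to water (n = 1.33), as at the coverslip/medium
# interface in objective-based TIRFM:
print(round(critical_angle_deg(1.52, 1.33), 1))  # 61.0
```

Light striking the interface from the glass side at more than about 61° is totally internally reflected, which is the condition TIRFM exploits.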
The concepts of oversampling, undersampling, and temporal sampling are all related to the Nyquist rate and the Nyquist theorem in signal processing. Here's what each term means:
1. Oversampling:
○ Oversampling occurs when a signal is sampled at a rate higher than
the minimum Nyquist rate, which is twice the maximum frequency of
the signal.
○ Advantages: It can provide several benefits, such as:
▪ Improved signal-to-noise ratio (SNR) because noise is averaged
out over more samples.
▪ Easier filtering and processing since the sampled signal has a
lower bandwidth relative to the sampling rate.
▪ Reduced aliasing effects, as the higher sampling rate provides a better representation of the original signal.
▪ More flexibility in anti-aliasing filter design, which can be less critical since the filter requirements are relaxed.

Evanescent Field: A light intensity field that rapidly decreases with distance from the interface, used for illuminating and observing only the plasma membrane of adherent cells.
2. Undersampling:
○ Undersampling is when a signal is sampled at a rate below the Nyquist
rate.
○ Consequences: This can lead to aliasing, where high-frequency
components of the signal are incorrectly represented as lower
frequencies in the sampled data. This misrepresentation can cause
significant distortion and make it impossible to accurately reconstruct
the original signal from the samples.
3. Temporal Sampling and the Nyquist Theorem:
○ Temporal sampling refers to the process of taking samples of a signal over time.
○ According to the Nyquist theorem, to avoid aliasing in temporal sampling, the sampling rate must be at least twice the maximum frequency component of the signal (the Nyquist rate).
○ Applications: This concept is crucial in various domains, such as:
▪ Digital audio recording, where the standard sampling rate for CDs is 44.1 kHz, which is more than twice the highest frequency the human ear can typically hear (20 kHz).
▪ Medical imaging, where the sampling rate determines the resolution and quality of the images.
▪ Seismology, where accurate sampling rates are needed to capture and analyze seismic signals without loss of information.

FRET Applications in Cell Biology (FRET operates over distances of up to ~10 nm)
• FRET Pairs: GFP and mCherry can be used as a FRET pair to study protein interactions and chromatin compaction in live cells.
• FRET efficiency: EFRET = 1 − (τDA/τD), where τDA is the fluorescence lifetime in the presence of both donor and acceptor, and τD is the fluorescence lifetime in the presence of the donor only.

Basic Instrumental Approaches to TIRFM
Objective-based TIRFM is the most common configuration, requiring a high numerical aperture (NA) objective and capable of both TIRF and wide-field observation.

Objective-based Total Internal Reflection Fluorescence Microscopy (TIRFM) is a specialized form of microscopy that combines the principles of total internal reflection with high-quality optical microscopy. Here's a breakdown of the key components and capabilities:
1. Objective-Based TIRFM: This refers to a TIRFM setup where the total internal reflection is achieved through the objective lens itself, rather than through some other means such as prisms or external mirrors.
2. High Numerical Aperture (NA) Objective:
○ The numerical aperture of an objective lens is a measure of its light-gathering ability and its resolving power. A higher NA allows for better resolution and the capture of more light, which is crucial for TIRFM.
○ In TIRFM, a high NA objective (typically >1.4) is necessary to achieve the critical angle for total internal reflection. This is because the critical angle depends on the refractive indices of the media involved: θc = arcsin(n2/n1), where n1 is the refractive index of the medium in which the light is traveling (usually the glass of the objective lens) and n2 is the refractive index of the second medium (often water or air), with n1 > n2.
3. Total Internal Reflection: When light is incident on the interface between two media at an angle greater than the critical angle, it is completely reflected back into the medium with the higher refractive index. This is the principle behind TIRFM.
4. Evanescent Field: At the point of total internal reflection, an evanescent wave is created that penetrates a short distance into the medium with the lower refractive index. This wave decays rapidly with distance from the interface and is the key to TIRFM's ability to illuminate a very thin section of the sample.
5. Capable of Both TIRF and Wide-Field Observation: Objective-based TIRFM setups are versatile and can be used for both TIRF microscopy, where the evanescent wave is used to selectively illuminate a thin section of the sample, and wide-field microscopy, where the entire field of view is illuminated. This dual capability allows researchers to switch between different imaging modes depending on the requirements of their experiment.

In summary, oversampling is a technique that intentionally samples a signal at a higher rate than the minimum required by the Nyquist theorem, providing benefits in signal processing and quality. Undersampling, on the other hand, is the practice of sampling below the Nyquist rate, which can result in aliasing and a loss of information. Temporal sampling is the general term for taking samples over time and must be done at or above the Nyquist rate to ensure accurate signal representation.

*Folds = d/x.xxd
*sampling rate = microscope resolution (nm) / nm per pixel
These two are the same equation.

Relationship Between Nyquist, Scanning, and Microscope Resolution:
○ To avoid undersampling and ensure that the digital image accurately represents the detail resolved by the microscope, the scanning resolution (pixel size) should be no larger than half the size of the smallest resolvable detail (i.e., the microscope's resolution).
○ For example, if the microscope has a resolution of 200 nm, then the pixel size (or sampling interval) should be 100 nm or smaller to satisfy the Nyquist criterion.

14. Image Arithmetic and Signal-to-Noise Ratio (SNR)
• Image arithmetic involves operations like addition and subtraction between images of the same dimension.
• Averaging acquisitions boosts the SNR by suppressing random noise.

15. Image Filtering
• Filtering computes new pixel values using neighboring pixels, useful for noise removal.
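The pixel-size rule from the Nyquist discussion above (sample at least twice per smallest resolvable detail) can be written as a tiny helper; the function names are illustrative:

```python
def max_pixel_size_nm(resolution_nm):
    """Largest pixel size that still satisfies Nyquist sampling:
    at most half the smallest detail the optics can resolve."""
    return resolution_nm / 2.0

def sampling_folds(resolution_nm, pixel_size_nm):
    """Sampling rate expressed as 'folds' of the optical resolution
    (resolution / pixel size); must be >= 2 to avoid undersampling."""
    return resolution_nm / pixel_size_nm

# 200 nm optical resolution: pixels must be 100 nm or smaller
largest_allowed = max_pixel_size_nm(200)
folds = sampling_folds(200, 100)
```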
16. Ethical Guidelines for Image Manipulation
• Acceptable manipulations: Brightness, contrast, background subtraction, filtering, cropping.
• Misleading manipulations: Adding/subtracting features, partial enhancement, changing gamma (requires disclosure).
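Relatedly, the claim in section 14 that averaging acquisitions boosts SNR can be demonstrated with simulated noisy frames (a pure-Python sketch using one-pixel frames for brevity; the helper name is illustrative):

```python
import random

def average_frames(frames):
    """Pixel-wise mean across repeated acquisitions of the same scene."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

random.seed(1)
true_signal, noise_sd = 100.0, 10.0
# 100 one-pixel frames, each corrupted by independent Gaussian noise
frames = [[random.gauss(true_signal, noise_sd)] for _ in range(100)]
mean_pixel = average_frames(frames)[0]
# Averaging N frames shrinks random-noise error roughly by sqrt(N)
```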
Gamma, also known as the contrast or tone curve, affects the display of an image by
defining how the intensity values in the image are mapped to the display output. It is a
measure of the overall "contrastiness" or the way in which the mid-tones in an image are
rendered. Here's how gamma impacts image display:
1. Linear vs. Non-Linear Response:
○ A gamma value of 1.0 represents a linear response, meaning that there is a
one-to-one correspondence between the input values and the output. This
results in a display that has the same contrast as the original image data.
○ Values greater than 1.0 produce a more contrasted or "crisp" image, with darker darks and lighter lights; this kind of adjustment is often loosely referred to as gamma correction.
○ Values less than 1.0 result in a less contrasted or "flat" image, where the
differences between light and dark areas are less pronounced.
2. Human Perception of Brightness:
○ The human eye perceives light in a non-linear fashion, which is why many
display devices are calibrated with a gamma value of around 2.2. This
compensates for the eye's response and makes the image appear more
natural.
3. Display Devices:
○ Different display devices may have different native gamma values. For
example, older CRT monitors typically had a gamma of around 2.5, while
modern LCD monitors are often calibrated to a gamma of 2.2.
4. Image Processing:
○ Adjusting gamma can be used as a simple form of image processing to
improve the appearance of an image on a particular display device without
altering the actual pixel values in the image.
5. Digital Image Representation:
○ In digital image files, gamma correction is often applied during the encoding
process, and this information is stored in the file's metadata. When the image
is displayed or processed, the gamma value is taken into account to ensure
the image appears as intended.
6. Professional Settings:
○ In professional imaging and printing, accurate gamma correction is crucial for
ensuring that the image on the display matches the final printed result.
7. Software Tools:
○ Image editing software often includes gamma adjustment tools, allowing
users to modify the gamma curve for creative or technical purposes.
8. Night Vision and Low Light Conditions:
○ In some cases, such as night vision or low-light photography, a gamma
adjustment can help to bring out details in the darker parts of the image.
9. Compatibility Issues:
○ Incorrect gamma settings can lead to compatibility issues where an image
looks correct on one device but appears too dark or too light on another.
In summary, gamma is a critical factor in how an image is displayed. It influences the
perceived brightness and contrast of the image and must be properly managed to ensure
that images are displayed accurately across different devices and viewing conditions.
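As a concrete illustration of the mapping described above, here is a minimal sketch of power-law gamma adjustment on 8-bit pixel values (assuming the convention output = input^gamma on normalized intensities; the function name is illustrative):

```python
def apply_gamma(pixels, gamma):
    """Map 8-bit intensities through a power-law tone curve.

    With output = input ** gamma (values normalized to [0, 1]),
    gamma == 1.0 is the linear response, gamma > 1 darkens mid-tones,
    and gamma < 1 brightens them."""
    return [int(round(((p / 255.0) ** gamma) * 255)) for p in pixels]

unchanged = apply_gamma([0, 128, 255], 1.0)    # linear: same values back
darker_mids = apply_gamma([0, 128, 255], 2.2)  # mid-gray 128 drops to ~56
```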
17. Basic Image Manipulation Techniques
• Adjusting brightness, contrast, and gamma can change the display but not the underlying image values.

18. Conclusion
• Emphasizes the importance of proper image acquisition, manipulation, and storage to maintain scientific integrity and data quality.

Quick Notes Page 6
patient to create a tomographic image.
○ PET: Requires the simultaneous detection of two photons (coincidence detection), which helps to precisely locate the source of emission within the body.
4. Image Resolution and Sensitivity:
○ SPECT: Generally has lower resolution compared to PET and may be more prone to artifacts due to the collimator's limiting effect on spatial resolution.
○ PET: Offers higher resolution and sensitivity due to the direct detection of two annihilation photons and advanced image reconstruction techniques.
5. Quantification:
○ SPECT: Quantification of tracer concentration can be challenging due to the effects of collimator penetration and scatter.
○ PET: Allows for more accurate quantification of tracer concentration and provides standardized uptake values (SUVs), which are important in oncology for assessing tumor metabolism.
6. Clinical Applications:
○ SPECT: Widely used for cardiac perfusion imaging, bone scans, and certain types of neurological and oncological studies.
○ PET: Particularly valuable in oncology for detecting and staging cancer, assessing brain function, and evaluating the effectiveness of treatments.
7. Cost and Availability:
○ SPECT: Generally less expensive and more widely available than PET, as it does not require an on-site cyclotron to produce short-lived isotopes.
○ PET: More costly due to the need for a cyclotron to produce the short-lived isotopes used in PET scans, and specialized equipment for coincidence detection.
8. Hybrid Imaging:
○ SPECT/CT: Combines functional SPECT imaging with the detailed anatomical information from CT scans.
○ PET/CT: Offers the functional information from PET along with the detailed anatomy from CT, often providing more accurate diagnoses and treatment planning.
9. Radiation Exposure:
○ Both SPECT and PET involve exposure to ionizing radiation, but the amount can vary depending on the specific radiopharmaceutical used and the protocol followed.
In summary, while both SPECT and PET are powerful tools in nuclear medicine, they have distinct characteristics that make them more suitable for different types of diagnostic and therapeutic applications. The choice between SPECT and PET often depends on the clinical question, the available resources, and the specific needs of the patient.

10. SPECT Applications
• Uses: Tumor imaging, infection imaging, thyroid imaging, bone scintigraphy, cardiac, and brain imaging.

11. SPECT/CT Hybrid Scan
• Advantage: Combines functional SPECT with anatomic CT for precise localization and identification of issues like tumors or Alzheimer's disease.

12. PET (Positron Emission Tomography)
• Functional Imaging: Observes metabolic processes using positron-emitting radionuclides.
• 3D Image Construction: Computer analysis of gamma ray pairs emitted from positron annihilation.

Positron annihilation is a physical process that occurs when a positron (β+), which is the antiparticle of the electron with the same mass but with a positive charge, encounters an electron (β−) in matter. Here's a detailed explanation of the process:
1. Positron-Electron Collision: A positron, emitted from a radioactive substance, travels a short distance in the body until it comes into close proximity with an electron.
2. Annihilation Event: The positron and electron meet and interact in a way that results in their mutual destruction. This is because of the principle of charge conservation; a particle and its antiparticle cannot coexist in the same space.
3. Mass-Energy Conversion: The mass of the positron and electron is converted into energy according to Einstein's equation, E=mc², where 'E' is energy, 'm' is mass, and 'c' is the speed of light in a vacuum.
4. Production of Gamma Photons: The energy released during annihilation typically takes the form of two gamma photons (γ), which are emitted back-to-back, or 180 degrees apart, with each photon having an energy of 511 keV (kilo-electron volts). This is because the rest mass of an electron (and positron) is equivalent to 511 keV/c².
5. Detection: In PET imaging, these two gamma photons are detected by a ring of detectors surrounding the patient. The simultaneous detection of these two photons, known as a coincidence event, allows the system to pinpoint the location of the annihilation event within the body.
6. Image Formation: By detecting many such coincidence events from different positron-electron annihilations throughout the body, a computer can reconstruct a three-dimensional image that reflects the distribution of the positron-emitting radiotracer.

4. Principles of Imaging
• Resolution: ability to discern two close points.
• Contrast: ability to discern an object from noise or other tissues.

5. Imaging Modalities

• Dehydration: Replacing water with organic solvents (e.g., ethanol series).
• Embedding: Infusing and polymerizing plastic resins into cells.
• TEM is limited by sample thickness (no more than 80 nm for 2D and 300 nm for 3D).
It's important to note that while many of these steps are performed at room temperature, some variations such as cryo-fixation and cryo-sectioning are used to preserve the sample in a more native state, especially for samples that are sensitive to chemical fixation and room temperature processing. Cryo-techniques involve rapid freezing (vitrification) to quench the sample's structure and are used to minimize artifacts and maintain the sample's near-native state.

Cryo-electron microscopy (cryo-EM), including cryo-TEM and cryo-ET, is a powerful approach that allows the examination of samples in a near-native, hydrated state by vitrifying the sample at very low temperatures (usually liquid nitrogen or liquid helium temperatures). This method is particularly useful for studying biological macromolecules, such as proteins and viruses, and cellular structures in their native conformations.
Cryo-electron tomography (cryo-ET) and cryo-electron microscopy (cryo-EM) are advanced techniques used to visualize and analyze the three-dimensional (3D) structure of biological specimens at near-atomic resolution. Both methods involve rapidly freezing samples to preserve their native state, but they differ in their application and the type of information they provide:
Cryo-Electron Tomography (cryo-ET)
1. Purpose: Cryo-ET is used to study the 3D structure of larger and more heterogeneous objects, such as cells, organelles, or macromolecular complexes in their native state.
2. Sample Preparation: The sample is plunge-frozen or high-pressure frozen to create a vitreous (glass-like) ice layer, which preserves the native structure without the use of chemical fixatives or stains.
3. Imaging Process: The frozen-hydrated sample is mounted on a specialized holder and tilted through a range of angles (usually up to ±60 to 70 degrees) in the electron microscope. A series of 2D images are collected at each angle.
4. Data Analysis: The collected images are computationally processed to reconstruct a 3D model of the sample. This involves aligning the images, back-projecting them, and using iterative algorithms to refine the 3D reconstruction.
5. Advantages: Cryo-ET provides a detailed view of cellular structures and their spatial relationships, allowing researchers to study the organization of macromolecular complexes within cells.
6. Limitations: The technique is time-consuming, has a low throughput, and is limited by the thickness of the sample (typically up to ~300 nm). Additionally, the "missing wedge" effect due to the limited tilt range can affect the resolution along different axes.
Cryo-Electron Microscopy (cryo-EM)
1. Purpose: Cryo-EM is primarily used for determining the high-resolution 3D structures of isolated, homogeneous macromolecules, such as proteins, nucleic acids, and large protein complexes.
2. Sample Preparation: The macromolecules are vitrified in a thin layer of ice, preserving their native conformation.
3. Imaging Process: The vitrified sample is imaged in the electron microscope, typically at low temperatures to minimize radiation damage.
4. Data Analysis: For single-particle cryo-EM, thousands of particle images are collected, and individual particles are automatically picked, classified, and averaged to determine their 3D structure. For 2D crystals, the diffraction pattern is analyzed to obtain the structure.
5. Advantages: Cryo-EM allows for the determination of structures at near-atomic resolution (as low as 2-3 Å) for well-ordered samples. It can also accommodate heterogeneous samples, providing insights into different conformational states.
6. Limitations: Cryo-EM typically requires a homogeneous sample, and the resolution may be lower for more complex or heterogeneous samples. Additionally, radiation damage limits the total electron dose that can be used, which can affect the signal-to-noise ratio.
In summary, cryo-ET is particularly useful for studying cellular structures and their spatial organization within cells, while cryo-EM is the method of choice for determining the high-resolution structures of isolated macromolecules. Both techniques take advantage of the preservation of native structures achieved through rapid freezing, but they differ in their application, sample requirements, and the type of 3D information they provide.

The statement you've provided refers to the limitations imposed by sample thickness in the context of 3D electron tomography and the relationship between the accelerating voltage of the electron microscope and the thickness of the sample that can be effectively imaged.
1. Sample Thickness Limitation:
○ In 3D electron tomography, the ability to obtain a clear and informative 3D reconstruction is limited by the thickness of the sample. Thicker samples scatter electrons more, leading to a decrease in image resolution and an increase in the risk of damaging the sample through radiation.
2. 300 nm Limit:
○ The general rule is that the sample thickness should not exceed approximately 300 nanometers (nm) for optimal results in 3D tomography. Beyond this thickness, the image quality deteriorates due to increased electron scattering and radiation damage.
3. Rule of Thumb for Acceleration Voltage and Sample Thickness:
○ The statement provides a guideline that 1 kilovolt (kV) of acceleration voltage is required to effectively image through 1 nanometer of sample material. This means that if you have a thicker sample, you would need a higher accelerating voltage to penetrate and image through it.
4. Commonly Used TEMs:
○ Most transmission electron microscopes (TEMs) in use today have accelerating voltages in the range of 200-300 kV. This range provides a good balance between the ability to image relatively thick samples and the resolution achievable.
5. Implications for Tomography:
○ Given the guideline, a 200-300 kV TEM would be suitable for samples up to approximately 200 to 300 nanometers in thickness. However, as mentioned earlier, for 3D tomography, the recommended maximum thickness is much less (around 300 nm) to maintain high resolution and minimize radiation damage.
6. Radiation Damage:
○ It's important to note that even within the acceptable thickness range, there is still a risk of radiation damage during the acquisition of multiple images required for tomography. This is especially true for sensitive biological samples.
In summary, the statement highlights the trade-off between the accelerating voltage of the TEM, the thickness of the sample, and the quality of the 3D tomographic data that can be obtained. Thicker samples require higher accelerating voltages to be imaged effectively, but even within the capabilities of modern TEMs, there are practical limits to the thickness that can be used for high-quality 3D tomography to avoid issues like radiation damage and reduced resolution.

Protein labelling in EM
Immuno-gold labelling
In the context of immuno-labeling for electron microscopy (EM), the terms "pre-embedding" and "post-embedding" refer to the timing of the labeling process relative to the embedding of the biological sample into a resin or other support medium. Here are the key differences:
Pre-Embedding Labeling
1. Timing: The labeling occurs before the sample is embedded in the resin.
2. Process: Antibodies are applied to the sample while it is still in a semi-intact state, allowing for the penetration of antibodies into cells or tissues.
3. Penetration: May require permeabilization with detergents to allow antibody access to intracellular targets, which can potentially disrupt the cellular structure.
4. Detection: Often requires the use of secondary antibodies to bind to the primary antibodies attached to the antigen of interest, which can then be visualized with an electron-dense label like gold particles.
5. Resolution: The resolution might be limited due to the larger size of the antibody-gold complexes and potential structural disruption.
6. Advantages: Allows for the detection of a wider range of antigens that may be masked or inaccessible after embedding.
7. Disadvantages: More susceptible to structural artifacts due to the need for permeabilization, and the larger antibody size can limit the resolution of the labeling.
Post-Embedding Labeling
1. Timing: The labeling occurs after the sample has been embedded and sectioned.
2. Process: Antibodies are applied to the thin sections of the embedded sample that have been mounted on EM grids.
3. Penetration: The embedding process itself can act as a fixative, reducing the need for detergents and preserving the ultrastructure more faithfully.
4. Detection: Direct labeling is possible where the gold particles are attached to the primary antibody, or indirect labeling can be used with secondary antibodies.
5. Resolution: Offers higher resolution due to the ability to use smaller gold particles (e.g., nanogold) and the better preservation of the sample's ultrastructure.
6. Advantages: Better preservation of the sample's ultrastructure, higher resolution, and the ability to test multiple antibody combinations on a single grid.
7. Disadvantages: May not be as effective for antigens that are not accessible on the surface of the sectioned material.
In summary, the choice between pre-embedding and post-embedding labeling depends on the specific requirements of the experiment, including the antigen's location, the desired resolution, and the need to preserve the sample's ultrastructure. Post-embedding labeling is often preferred for its higher resolution and better structural preservation, while pre-embedding labeling can be more effective for detecting a broader range of antigens.

The Receiver Operating Characteristic (ROC) curve, sensitivity, and specificity are concepts used in the field of diagnostic testing and statistical analysis to evaluate the performance of a binary classifier system, such as those used in medical diagnostics.
Sensitivity
Sensitivity, also known as the true positive rate, is the proportion of actual positives which are correctly identified by the test. It's a measure of the test's ability to detect the condition when it is present. The formula for sensitivity is:
Sensitivity = TP / (TP + FN)
High sensitivity means that the test is good at identifying the condition when it exists (few false negatives).
• A test with 100% sensitivity would correctly identify all individuals with the condition.
Specificity
Specificity, also known as the true negative rate, is the proportion of actual negatives which are correctly identified by the test. It measures the test's ability to exclude the condition when it is not present. The formula for specificity is:
Specificity = TN / (TN + FP)
High specificity means that the test is good at identifying those without the condition (few false positives).
• A test with 100% specificity would correctly identify all individuals without the condition.
*False positive rate = 1 − specificity
ROC Curve
The ROC curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. It is created by plotting the true positive rate (sensitivity) against the false positive rate (1 − specificity) at various threshold settings. The curve is a useful tool for evaluating and comparing the performance of different diagnostic tests or models.
• The x-axis represents the false positive rate (0 to 1).
• The y-axis represents the true positive rate (sensitivity), also from 0 to 1.
Key points about the ROC curve:
• An ideal test would have a point in the upper left corner of the plot (100% sensitivity, 0% false positive rate).
• The more the curve approaches the upper left corner, the better the test's performance.
• The area under the ROC curve (AUC) is a measure of the overall ability of the test to discriminate between the two conditions. An AUC of 0.5 suggests the test is no better than random chance, while an AUC of 1.0 indicates a perfect test.
Clinical Use
In clinical practice, the balance between sensitivity and specificity can be critical. For instance, a test for a serious condition might be designed to be more sensitive to ensure that as many people with the condition as possible are identified (few false negatives), even if it means more false positives. Conversely, a test for a less severe condition might prioritize specificity to avoid unnecessary worry or treatment for those without the condition.
Understanding these concepts is crucial for interpreting the results of diagnostic tests and for designing and evaluating new diagnostic tools.
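The sensitivity, specificity, and false-positive-rate formulas discussed above translate directly into code (function names are illustrative):

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def false_positive_rate(tn, fp):
    """x-axis of the ROC curve: 1 - specificity."""
    return 1.0 - specificity(tn, fp)

# 90 of 100 diseased patients flagged, 80 of 100 healthy patients cleared:
sens = sensitivity(90, 10)          # 0.9
fpr = false_positive_rate(80, 20)   # ~0.2
```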
Horseradish Peroxidase (HRP)
Horseradish Peroxidase (HRP) is an enzyme that is widely used in molecular biology and histology for a variety of purposes due to its enzymatic properties. Here are some of the main uses and purposes of HRP:
1. Detection and Assays: HRP is commonly used as a label in enzyme-linked immunosorbent assays (ELISAs) and Western blotting. It is attached to secondary antibodies, and upon binding to the primary antibody, it can catalyze a colorimetric or chemiluminescent reaction to produce a detectable signal.
2. Histochemistry and Cytochemistry: HRP is used for the detection of specific antigens in tissues and cells. When HRP is linked to an antibody, it can be used to localize the antibody's target within a cell or tissue.
3. Electron Microscopy (EM): HRP can be used as an electron-dense tracer in transmission electron microscopy (TEM). After being linked to an antibody, HRP can be visualized at high resolution due to the formation of an electron-dense reaction product with substrates like diaminobenzidine (DAB).
4. Localization of Proteins: In immuno-electron microscopy, HRP is used to localize proteins within cells or tissues. The enzyme can be reacted with DAB and hydrogen peroxide (H2O2) to produce an insoluble precipitate that is visible in EM.
5. Signal Amplification: HRP can be used to amplify signals in various assays due to its catalytic activity, which allows for a single enzyme molecule to convert multiple substrate molecules.
6. In Situ Hybridization (ISH): HRP can be used as a label for the detection of specific nucleic acid sequences in tissue sections or cells, providing a way to visualize gene expression patterns.
7. Affinity Labeling: HRP can be used in affinity labeling techniques where it is covalently attached to a molecule of interest, allowing for the specific labeling and detection of target molecules.
8. Neuroanatomy: HRP has been used as a tract-tracing tool in neuroanatomy to map neural pathways by being taken up by neurons and transported along their axons.
9. Reporter Enzyme: In molecular biology, HRP can be used as a reporter enzyme in various assays to indicate the presence or activity of other molecules of interest.
10. Bioremediation: HRP is also being studied for its potential use in bioremediation, where it can catalyze the breakdown of environmental pollutants.
The versatility of HRP stems from its stability, ease of conjugation to other molecules, and its ability to catalyze reactions that produce easily detectable products.

• Projection X-ray (Radiography)
• X-ray Computed Tomography (CT)
• Medical Ultrasonography (Ultrasound)
• Nuclear Imaging (SPECT, PET)
• Magnetic Resonance Imaging (MRI)
• Functional Imaging (fMRI, MEG, EEG, PET)

6. X-Rays
• Detected as particles of energy (photons).
• Discovered by Wilhelm Conrad Roentgen in 1895.

7. X-ray Generator System
• Cathode and anode within an evacuated glass tube.
• High voltage generates X-rays.

13. History of PET
• Inception: Emission and transmission tomography by Kuhl and Edwards in the 1950s.
• Development: Ido's synthesis of 18-F FDG; Robertson and Cho's ring system prototype.

14. PET Principles
• Positron Emission: Proton to neutron transformation, emission of a positron.
• Annihilation: Positrons annihilate with electrons, producing two 511 keV gamma photons.
• Positive beta decay emits a positron, the antiparticle of the electron with opposite charge.
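The 511 keV photon energy quoted in the PET principles above follows directly from the electron rest mass via E = mc² (constants below are CODATA values):

```python
# Electron rest energy E = m_e * c^2 expressed in keV; this is the
# energy carried by each of the two back-to-back annihilation photons.
m_e = 9.1093837015e-31       # electron mass, kg
c = 2.99792458e8             # speed of light, m/s
joules_per_ev = 1.602176634e-19

rest_energy_kev = m_e * c**2 / joules_per_ev / 1e3
# rest_energy_kev evaluates to about 511
```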
MiniSOG
MiniSOG is a genetically encoded tag derived from the phototropin protein found in plants, which has been engineered for use in fluorescence microscopy and correlated light and electron microscopy (CLEM). Here are the key characteristics and uses of MiniSOG:
1. Size and Origin: MiniSOG is a smaller version of the Singlet Oxygen Generator (SOG) protein, hence the name "mini." It is derived from Arabidopsis thaliana phototropin, a plant blue-light receptor.
2. Fluorescence: MiniSOG is fluorescent, allowing it to be used for live cell imaging and fluorescence microscopy before being used for EM.
3. Singlet Oxygen Generation: When illuminated with blue light, MiniSOG catalyzes the conversion of molecular oxygen to singlet oxygen, a highly reactive form of oxygen.
4. Photooxidation: The singlet oxygen generated by MiniSOG can catalyze the polymerization of electron-dense precursors like diaminobenzidine (DAB) into a product that is osmiophilic (attracts osmium), which enhances contrast in electron microscopy.
5. Correlated Light and Electron Microscopy (CLEM): MiniSOG is particularly useful for CLEM, where it allows for the correlation of light microscopic images with high-resolution electron microscopic images. It bridges the resolution gap between light and electron microscopy.
6. Protein Tagging: MiniSOG can be genetically fused to proteins of interest, enabling the localization of these proteins within cells or tissues at the electron microscopic level.
7. Ultrastructural Preservation: The photooxidation reaction catalyzed by MiniSOG helps to preserve cellular ultrastructure, which is crucial for detailed cellular and molecular studies.
8. 3D Localization: When used in conjunction with electron tomography, MiniSOG can provide three-dimensional information about the localization of the tagged protein within the cell.
9. Multiplexing: MiniSOG can be used in combination with other tags or fluorophores to visualize multiple proteins or structures simultaneously.
10. Research Applications: MiniSOG has been used to study various cellular processes, including protein trafficking, organelle structure, and cellular signaling pathways.
The development of MiniSOG and its application in CLEM has significantly advanced the field of cellular microscopy by providing a tool to localize proteins with high precision in both light and electron microscopy.

APEX/APEX2
APEX (Ascorbate Peroxidase) and its improved version APEX2 are engineered enzymes derived from the plant enzyme ascorbate peroxidase. These enzymes have been adapted for use as tags in biological research, particularly for protein localization and imaging in electron microscopy (EM). Here are the key features and applications of APEX and APEX2:
1. Enzymatic Activity: Both APEX and APEX2 are peroxidases, which means they can catalyze the oxidation of various substrates in the presence of hydrogen peroxide (H2O2).
2. Genetic Fusion: APEX and APEX2 can be genetically fused to proteins of interest, allowing for the localization of these proteins within cells or tissues.
3. Electron Microscopy: The primary use of APEX and APEX2 is in EM, where they are used to generate electron-dense labels that enhance contrast and allow for high-resolution imaging of the tagged proteins.
4. Activity in Cytosol: Unlike some other peroxidases, APEX2 is active in the cytosol, which makes it a versatile tool for labeling a wide range of cellular structures.
5. Labeling Method: APEX and APEX2 catalyze the conversion of electron-dense precursors, such as diaminobenzidine (DAB), into an insoluble, brownish precipitate that is visible in EM. This reaction produces a label that marks the location of the APEX/APEX2 fusion protein.
6. Correlated Light and Electron Microscopy (CLEM): APEX2 can be used in CLEM experiments, where it provides a bridge between light microscopy and EM, allowing for the correlation of protein locations observed in light microscopy with ultrastructural details resolved by EM.
7. 3D Localization: When used in conjunction with electron tomography, APEX2 can provide three-dimensional information about the localization of the tagged protein within the cell.
8. High Specificity and Sensitivity: APEX2 offers high specificity and sensitivity for protein localization, allowing for the detection of even low-abundance proteins.
9. Research Applications: APEX and APEX2 have been used to study various cellular processes, including protein trafficking, organelle structure, and cellular signaling pathways.
10. APEX2 Improvements: APEX2 was engineered to have improved properties over the original APEX, including higher stability, increased activity, and better solubility, making it a more reliable and user-friendly tool for cellular imaging.
The development of APEX and APEX2 has significantly advanced the field of cellular microscopy by providing a tool to localize proteins with high precision in both light and electron microscopy, thereby facilitating a deeper understanding of cellular structures and functions.

Cryo Fixation and Cryo-Electron Tomography (cryoET):

An X-ray generator is a critical component of an X-ray imaging system. Its primary function is to produce X-rays, which are a form of high-energy electromagnetic radiation that can penetrate various materials, including the human body. Here's a detailed explanation of the function and components of an X-ray generator:
Components of an X-ray Generator
1. X-ray Tube: The tube is where the actual production of X-rays takes place. It typically consists of a vacuum-sealed glass or metal envelope containing a cathode (electron source) and an anode (target for the electrons).
2. Cathode: The cathode, often in the form of a heated filament, emits electrons when heated. These electrons are then accelerated towards the anode.
3. Anode: The anode is usually made of a high atomic number material, such as tungsten, which can withstand the high energy impact of the electrons. The anode is designed to rotate or oscillate to dissipate heat, as the process of X-ray generation generates a significant amount of it.
4. High Voltage Supply: This component provides the necessary high voltage (tens of thousands to millions of volts) to accelerate the electrons from the cathode towards the anode.
5. Control Unit: The control unit manages the operation of the X-ray generator, regulating the voltage, current, and exposure time to produce the desired X-ray output.
6. Cooling System: Since the generation of X-rays is a heat-intensive process, a cooling system is often integrated into the design to prevent overheating and maintain the stability and longevity of the X-ray tube.
Functioning of an X-ray Generator
1. Electron Emission: When the X-ray generator is activated, the cathode heats up and emits a stream of electrons.
2. Acceleration: The high voltage supply accelerates these electrons to a high speed towards the anode.
3. Impact and X-ray Production: As the high-speed electrons hit the anode, the sudden deceleration causes the conversion of some of the kinetic energy of the electrons into X-ray photons, a process known as bremsstrahlung (braking radiation). Additionally, some X-rays can be produced through characteristic X-ray emission when an electron fills a vacancy in an inner shell of an atom in the anode material.
4. X-ray Beam: The produced X-ray beam exits the X-ray tube and is directed towards the patient. The intensity and energy of the X-ray beam can be adjusted by varying the voltage and current supplied to the X-ray tube.
5. Imaging: The X-ray beam passes through the patient's body. Dense materials, such as bones, absorb more X-rays and allow fewer to pass through, while less dense materials, such as soft tissues and air, absorb less and allow more X-rays to pass through. This differential absorption creates an image that can be captured on a detector or film, providing a visual representation of the internal structures of the body.
6. Safety: The control unit ensures that the X-ray exposure is limited to the necessary diagnostic range, minimizing the patient's exposure to ionizing radiation.
In summary, the X-ray generator plays a crucial role in medical imaging by producing X-rays that can penetrate the body and create images of internal structures. The generator's components and control systems allow for precise control over the X-ray output to achieve the best possible diagnostic images while minimizing radiation exposure.

15. Radionuclides and Radiotracers in PET
• Common Isotopes: Carbon-11, Nitrogen-13, Oxygen-15, Fluorine-18.
• FDG (Fluorodeoxyglucose): Most used in clinical PET for oncology and neurology.
• Metabolic trapping of FDG
Fluorodeoxyglucose (FDG) is a radiotracer that is widely used in positron emission tomography (PET) imaging, particularly for oncology. It is an analogue of glucose, the primary source of energy for most cells in the body. The metabolic trapping of FDG is a process that allows PET scans to visualize areas of increased glucose metabolism, which is a common characteristic of many cancer cells.
Here's how metabolic trapping of FDG works:
1. Administration: FDG, labeled with the positron-emitting isotope fluorine-18, is injected into the patient's bloodstream.
2. Transport: Like glucose, FDG is transported into cells via glucose transport proteins, which are often overexpressed in rapidly growing cancer cells.
3. Phosphorylation: Once inside the cell, FDG is phosphorylated by the enzyme hexokinase to form FDG-6-phosphate. This is a key step in the metabolic pathway of glucose and occurs rapidly in cells with high metabolic activity.
4. Trapping: Unlike glucose-6-phosphate, FDG-6-phosphate is not a substrate for further metabolism. It cannot be converted into glucose-1-phosphate or other metabolites that can enter the glycolysis pathway or the glycogen synthesis pathway. As a result, FDG-6-phosphate becomes trapped within the cell.
5. Accumulation: Because FDG-6-phosphate is not metabolized further, it accumulates in cells that have taken up FDG, with higher accumulation in cells with higher metabolic rates.
6. Detection: The radioactive decay of fluorine-18 in FDG-6-phosphate results in the emission of positrons, which, upon annihilation with electrons, produce gamma photons that are detected by the PET scanner.
7. Imaging: The PET scanner uses the detected gamma photons to create an image that reflects the distribution of FDG in the body. Areas with high FDG uptake, indicating high metabolic activity, appear as "hot spots" on the PET image.
8. Application: This process is particularly useful for detecting tumors, as cancer cells often have increased glucose metabolism compared to surrounding normal tissues. FDG-PET is also used to stage cancers, evaluate treatment response, and monitor for recurrence.
The metabolic trapping of FDG is a critical aspect of its utility in PET imaging. It allows for the visualization of glucose metabolism, which can highlight areas of abnormal activity that may be indicative of disease, especially cancer. However, it's important to note that not all increased FDG uptake is due to cancer; other conditions with increased metabolic activity, such as inflammation, can also result in increased FDG uptake and must be considered in the interpretation of PET scans.

16. PET Probes for Biological Imaging
• Applications: Hemodynamic parameters, substrate metabolism, protein synthesis, enzyme activity, drug action, receptor affinity, neurotransmitter biochemistry, gene expression.
• Chelating agent: An organic compound that binds to charged ions to increase absorption.
○ EDTA, ethylenediamine, phosphite, DOTA, DTPA

17. Benefits and Limitations of PET Scan
• Benefits: Unique information, cost-effective over surgery, early disease detection, low radiation exposure.
• Limitations: Time-consuming, lower resolution, high equipment costs, false results with chemical imbalances, timing sensitivity.

18. PET Applications
• Cancer: FDG-PET for diagnosis, staging, and treatment monitoring.
• Neuroimaging: Brain activity in dementia, cognitive neuroscience, psychiatry.
• Cardiology: Detection of "hibernating myocardium" and atherosclerosis.
• Infectious Diseases: Imaging bacterial infections using FDG.

19. PET/CT and PET/MRI Scans
• PET/CT: Combines PET's functional data with CT's anatomic detail.
• PET/MRI: Hybrid technology merging MRI's soft tissue imaging with PET's metabolic imaging.

20. PET/CT and PET/MRI Advantages and Challenges
• Advantages: Better localization, increased diagnostic accuracy, decreased scan time.
• Challenges: High costs, difficulty in producing and transporting short-lived radiopharmaceuticals, longer acquisition times for PET/MRI.

8. Radiography Process
• Conventional vs. Digital Radiography.
Conventional radiography and digital radiography are two methods used to create images of the internal structures of the body, primarily using X-rays. They differ in the way the images are captured, processed, and viewed. Here's a comparison of the two:
Conventional Radiography
1. Imaging Process: Conventional radiography uses X-ray film that is coated with a light-sensitive silver bromide emulsion. When the film is exposed to X-rays, the silver bromide grains are reduced to metallic silver, creating a latent image.
AgBr (latent image) + developer → Ag (metallic silver) + Br− + developer oxidation products
2. Development: After exposure, the film is developed in a darkroom using chemicals that react with the exposed silver bromide to produce a visible image. The unexposed silver bromide is removed during the development process.
3. Viewing: The developed film is then viewed as a physical print, often on a lightbox for better visualization of the image.
4. Storage: The film images must be stored physically, which requires space and can be subject to degradation over time.
5. Quality: The quality of the image can be affected by the development process and the quality of the film itself.
6. Dynamic Range: Conventional radiography has a limited dynamic range, which can make it challenging to visualize structures with varying densities within the same image.
7. Archiving and Retrieval: Retrieval of images can be time-consuming, as physical files must be located and retrieved.
8. Cost: Initially, the cost of conventional radiography may be lower, but the ongoing costs of film, chemicals, and storage can add up over time.
Digital Radiography (DR)
1. Imaging Process: Digital radiography uses a detector, such as a photostimulable phosphor plate (PSP) or a solid-state detector, to capture the X-ray image. The detector converts the X-ray energy into digital data.
2. Data Capture: The detector is exposed to the X-ray beam, and the resulting latent image is stored as digital data.
3. Processing: The digital data is then processed by a computer, which can manipulate the image to enhance contrast and clarity.
4. Viewing: The digital image can be viewed on a computer monitor or other digital display devices immediately after processing.
5. Storage: Digital images can be stored electronically, saving space and allowing for easier retrieval and transfer.
6. Quality: Digital radiography offers high-quality images with the ability to adjust the contrast and brightness after the image has been captured. It also has a wider dynamic range, allowing for better visualization of different tissue densities.
7. Post-Processing: Digital images can be post-processed to enhance certain features, such as zooming in on specific areas or adjusting the contrast to better visualize certain structures.
8. Archiving and Retrieval: Digital images can be easily archived and retrieved electronically, making them more accessible and reducing the risk of loss or damage.
9. Cost: While the initial investment in digital radiography equipment may be higher, the long-term costs can be lower due to reduced consumables and the ability to manipulate images without the need for retakes.
10. Teleradiology: Digital radiography facilitates teleradiology, where images can be sent electronically to radiologists for interpretation, regardless of geographic location.
In summary, digital radiography offers several advantages over conventional radiography, including immediate image availability, better image quality and manipulation capabilities, space savings from electronic storage, and the potential for lower long-term costs. However, it requires a greater initial investment in equipment and relies on having the necessary digital infrastructure in place.
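The differential absorption that creates radiographic contrast follows the Beer–Lambert law, I = I0·exp(−μx). A minimal sketch; the attenuation coefficients below are rough illustrative magnitudes, not measured values:

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative linear attenuation coefficients at a diagnostic energy
# (order-of-magnitude values for demonstration only).
bone = transmitted_fraction(mu_per_cm=0.5, thickness_cm=2.0)  # dense: absorbs more
soft = transmitted_fraction(mu_per_cm=0.2, thickness_cm=2.0)  # less dense: absorbs less

print(f"through bone: {bone:.2f} of the beam remains")
print(f"through soft tissue: {soft:.2f} of the beam remains")
```

This exponential falloff is why bone casts a much stronger shadow than an equal thickness of soft tissue.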

• X-Ray Film: characteristic curve, optical density, silver-bromide grains.


X-ray film, also known as radiographic film, was a crucial component of traditional X-ray imaging systems. It works on the principle of photographic film, capturing and storing a latent image that is subsequently developed into a visible image.
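The optical density plotted on a film's characteristic curve is defined as OD = log10(I0/I); a small sketch of the definition:

```python
import math

def optical_density(incident: float, transmitted: float) -> float:
    """Optical density of developed film: OD = log10(I0 / I)."""
    return math.log10(incident / transmitted)

# A film area transmitting 1% of the viewing light has OD 2 (quite dark);
# 10% transmission gives OD 1.
print(optical_density(100.0, 1.0))
print(optical_density(100.0, 10.0))
```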



Special precautions or alternative imaging methods are required for these individuals.
8. Fire Hazard: In extremely rare cases, the attraction of metallic objects in the presence of the MRI's magnetic field can lead to a fire if the object being attracted is part of the equipment or if it damages electrical wiring or gas lines.
9. RF Coil Damage: The MRI's RF coils, which are used to transmit and receive signals, can be damaged or their performance can be degraded by the presence of metal.

Tissue Differentiation and Relaxation Types
• T1 and T2 relaxation times are crucial for tissue contrast in MRI.
• T1-weighted images are useful for differentiating fat from water, while T2-weighted images are good for imaging edema.
In the context of Magnetic Resonance Imaging (MRI), T1 and T2 refer to two different types of relaxation times that describe how the magnetic properties of atomic nuclei, specifically hydrogen nuclei in water molecules, return to their equilibrium state after being disturbed by a radiofrequency (RF) pulse. These relaxation times are critical for generating the contrast in MRI images and allow for differentiation between various tissues in the body.
T1 Relaxation (Longitudinal Relaxation)
1. Definition: T1, or longitudinal relaxation time, is the time constant for the recovery of the longitudinal component of the nuclear magnetic vector (magnetization) back to its equilibrium state along the direction of the static magnetic field (B0).
2. Process: After the RF pulse, the protons (hydrogen nuclei) that were flipped out of alignment with the B0 field begin to realign or "relax" back to their original state. T1 describes how quickly this realignment occurs.
3. Tissue Contrast: T1-weighted images (T1WI) capitalize on the differences in T1 relaxation times between tissues. Tissues with shorter T1 times (they relax more quickly) appear brighter in T1-weighted images.
4. Example: Fat has a short T1 relaxation time and thus appears bright on T1-weighted images.
T2 Relaxation (Transverse Relaxation)
1. Definition: T2, or transverse relaxation time, is the time constant for the decay of the transverse component of the nuclear magnetic vector, which is the component perpendicular to the static magnetic field.
2. Process: T2 relaxation describes how quickly the coherence of the transverse magnetization (the magnetization in the X-Y plane) is lost due to inhomogeneities in the magnetic field or other local environmental factors.
3. Tissue Contrast: T2-weighted images (T2WI) highlight differences in T2 relaxation times. Tissues with longer T2 times (they hold transverse magnetization longer) appear brighter in T2-weighted images.
4. Example: Fluids, such as water in edematous or inflamed tissues, have long T2 relaxation times and therefore appear bright on T2-weighted images.
Differences Between T1 and T2
• Direction of Recovery: T1 is associated with the recovery of magnetization along the B0 field, while T2 is related to the decay of magnetization in the plane perpendicular to B0.
• Speed: T2 relaxation typically occurs more quickly than T1 relaxation, meaning that transverse magnetization is lost more rapidly than longitudinal magnetization is regained.
• Image Weighting: T1-weighted images are useful for distinguishing between tissues with different T1 properties, like fat and water. T2-weighted images are more sensitive to differences in T2 properties, making them ideal for detecting pathology where there is an increase in water content, such as in edema or inflammation.
• Clinical Utility: T1-weighted images often provide better anatomical detail and are good for visualizing the brain's gray and white matter. T2-weighted images are more sensitive to pathology and are used to highlight areas of disease where there is an increase in free water, like in tumors or areas of inflammation.
By manipulating the timing parameters of the MRI pulse sequence (TR for T1 weighting and TE for T2 weighting), radiologists can generate images that emphasize either T1 or T2 contrast, allowing for a more detailed examination of different tissues and potential abnormalities.

T1 and T2 Relaxation
• T1 relaxation involves the recovery of longitudinal orientation, while T2 relaxation involves the loss of transverse magnetization.

…presence of oxyhemoglobin does not cause significant signal loss.
4. Deoxyhemoglobin (dHb):
○ Deoxyhemoglobin, which has released oxygen, is paramagnetic. It has unpaired electrons that make it susceptible to the magnetic field, causing it to distort the local magnetic field around blood vessels. This distortion affects the MRI signal in a way that can be detected and used to infer brain activity.
5. BOLD Contrast in fMRI:
○ When a region of the brain is active, it consumes more oxygen, leading to an increase in oxyhemoglobin and a decrease in deoxyhemoglobin. Since deoxyhemoglobin is paramagnetic and distorts the local magnetic field, its decrease results in a more uniform magnetic field and a relative signal increase in the MRI (a higher BOLD signal).
○ The BOLD signal change is not a direct measure of neural activity but an indirect measure based on the hemodynamic response (the change in blood flow and oxygenation in response to neural activity).
6. MRI Signal and Relaxation Times:
○ The MRI signal depends on the relaxation times T1, T2, and T2*, which are affected by the local magnetic field homogeneity. Paramagnetic substances like deoxyhemoglobin can cause local field inhomogeneities, leading to faster dephasing of the transverse magnetization (T2* decay) and a loss of signal.
In summary, the key to the BOLD contrast in fMRI is the difference in magnetic properties between oxyhemoglobin (diamagnetic) and deoxyhemoglobin (paramagnetic). The change in their concentrations due to neural activity leads to detectable changes in the MRI signal, allowing for the mapping of brain function.
* Hb0 is deoxyhemoglobin

12. Spatial and Temporal Resolution:
• Spatial resolution is determined by voxel size; temporal resolution is the smallest time period reliably separated by fMRI.

13. T2* and fMRI:
• T2* decay refers to the exponential decrease in signal strength following the initial excitation pulse.
In the context of functional Magnetic Resonance Imaging (fMRI), T2* (often pronounced "T two star") refers to the transverse relaxation time, which is a measure of how quickly the transverse magnetization (the component of the magnetic field induced by the radiofrequency pulse and perpendicular to the static magnetic field) decays. This decay is due to inhomogeneities in the magnetic field and other local magnetic field disturbances.
Here's a more detailed explanation of T2* and its relevance to fMRI:
1. Transverse Relaxation (T2):
• T2 is the time constant for the decay of transverse magnetization in the absence of any external magnetic field inhomogeneities. It is related to the natural processes within the tissue that cause the loss of phase coherence between the protons.
2. T2* (Transverse Relaxation Time):
• T2* is the "effective" transverse relaxation time that is observed during an MRI experiment. It is typically shorter than T2 due to additional dephasing effects caused by magnetic field inhomogeneities. These inhomogeneities can be from the magnetic field itself, the sample (e.g., differences in tissue susceptibility), or external factors.
• In fMRI, T2* is particularly important because the BOLD (blood-oxygen-level dependent) contrast relies on the detection of these magnetic field inhomogeneities caused by changes in the concentration of deoxyhemoglobin in the blood vessels.
3. BOLD Contrast and T2*:
• The BOLD contrast in fMRI is sensitive to changes in the concentration of deoxyhemoglobin, which is a paramagnetic substance. When there is more deoxyhemoglobin present, it causes local magnetic field inhomogeneities, leading to faster dephasing of the transverse magnetization and a decrease in the T2* relaxation time.
• As a result, areas of the brain with increased oxygen consumption (and thus increased neuronal activity) will have a lower T2* signal intensity because the paramagnetic effect of deoxyhemoglobin speeds up the loss of phase coherence between the protons.
4. Importance of T2* in fMRI:
• The BOLD fMRI technique relies on the T2*-weighted imaging sequences to detect these changes in signal intensity. By acquiring images quickly and comparing them during different task conditions (such as rest versus activity), researchers can identify regions of the brain that are more active.
• The choice of echo time (TE) in fMRI sequences is critical because it determines the weighting of the images with respect to T2*. A longer TE will increase the T2* weighting and the sensitivity to changes in deoxyhemoglobin concentration.
5. Limitations and Artifacts:
• T2* is also associated with certain limitations and artifacts in MRI, such as signal dropout near air-tissue boundaries (like sinuses) due to the rapid dephasing caused by the large magnetic susceptibility differences. This can lead to regions of the image appearing "black" or devoid of signal, commonly referred to as "susceptibility artifacts."
In summary, T2* is a fundamental concept in fMRI that relates to the detection of brain activity through the BOLD contrast mechanism. It is influenced by the presence of paramagnetic substances like deoxyhemoglobin, which affect the local magnetic field and, consequently, the MRI signal.
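The T2* mechanism can be made concrete with the gradient-echo decay S = S0·exp(−TE/T2*); the T2* values below are invented purely for illustration of the direction of the effect:

```python
import math

def t2star_signal(s0: float, te_ms: float, t2star_ms: float) -> float:
    """Gradient-echo signal magnitude: S = S0 * exp(-TE / T2*)."""
    return s0 * math.exp(-te_ms / t2star_ms)

# Illustrative values: a higher deoxyhemoglobin level shortens T2*,
# lowering the signal measured at a given echo time (TE).
rest   = t2star_signal(1.0, te_ms=30.0, t2star_ms=40.0)  # more dHb, shorter T2*
active = t2star_signal(1.0, te_ms=30.0, t2star_ms=45.0)  # less dHb, longer T2*
bold_change = (active - rest) / rest

print(f"relative BOLD signal change: {bold_change:.1%}")
```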

14. fMRI Image Acquisition:


• Subjects undergo sensory stimulation or tasks, and BOLD-sensitive images are acquired.
• Time series analysis determines signal changes in response to stimulation.

15. fMRI Image Acquisition Techniques:


• SPIRAL fMRI as an alternative to EPI, with different artifacts and reconstruction requirements.
16. fMRI Image Acquisition Parameters:
• TE (echo time), TR (repetition time), matrix size, resolution, and flip angle are all critical parameters.
17. fMRI Image Acquisition Stages:
• Anatomical images (high-resolution T1) followed by functional images (low-resolution T2*).
18. fMRI Statistical Analysis:
• Involves motion correction, smoothing, spatial normalization, and the application of the General Linear
Model.
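As a toy illustration of the General Linear Model step, the sketch below fits one synthetic voxel time series against a boxcar task regressor plus a baseline term (all numbers are invented for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment: 100 time points, alternating 10-point off/on blocks.
n = 100
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 5)  # task regressor
X = np.column_stack([boxcar, np.ones(n)])              # design matrix: [task, baseline]
y = 2.0 * boxcar + 5.0 + rng.normal(0, 0.5, n)         # voxel time series with noise

# GLM fit: y = X @ beta + error, solved by least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task effect ~ {beta[0]:.2f}, baseline ~ {beta[1]:.2f}")
```

In practice each voxel gets such a fit, and the task-effect estimates are tested statistically across the brain.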

19. Limitations in fMRI Image Acquisition:


• BOLD effect signal dephasing, signal dropout near air-tissue boundaries, image warping, and physiological
noise.
The BOLD (blood-oxygen-level dependent) effect is central to functional MRI (fMRI), allowing for the indirect
measurement of brain activity. However, several factors can affect the quality and interpretation of BOLD
fMRI signals. Here's an explanation of the issues mentioned:
1. BOLD Effect Signal Dephasing:
○ Dephasing refers to the loss of phase coherence between different spins (protons) in the MRI signal. In the context of BOLD contrast, dephasing occurs because of magnetic field inhomogeneities caused by the presence of paramagnetic deoxyhemoglobin in active brain regions. This dephasing leads to a signal loss that can be detected and used to infer brain activity.
2. Signal Dropout Near Air-Tissue Boundaries:
○ At the boundaries between tissue and air (such as near the sinuses or the lungs), there are abrupt changes in magnetic susceptibility. This can cause rapid dephasing of the MRI signal, leading to signal dropout or "black holes" in the image. These regions appear dark because the MRI signal is lost due to the severe distortion of the magnetic field.
3. Image Warping:
○ Warping artifacts in fMRI can occur due to imperfections in the magnetic field or the way the data is acquired and reconstructed. For instance, echo-planar imaging (EPI), a common rapid imaging technique used in fMRI, is susceptible to distortions because it uses a single shot to acquire an entire image. Field inhomogeneities can cause signal from one part of the image to be misplaced, leading to geometric distortions.
4. Physiological Noise:
○ Physiological noise in fMRI refers to fluctuations in the MRI signal that are caused by normal bodily functions, such as the heartbeat, respiration, and blood flow pulsations. These physiological processes can introduce variations in the signal that can be mistaken for neural activity if not properly accounted for. For example, respiratory and cardiac cycles can cause changes in blood volume and flow, which in turn affect the BOLD signal.

Differing MR Images
• T1-weighted, T2-weighted, and PD (Proton Density) weighted images provide different tissue contrasts.

20. fMRI Experimental Design:


• Controlling cognitive operations, stimulus properties, timing, and subject instructions to test or generate hypotheses.
The purpose of fMRI (functional Magnetic Resonance Imaging) experiment design is to create a structured approach to investigate the neural correlates of various cognitive, sensory, or motor processes in the human brain. The design of an fMRI experiment is critical for ensuring that the data collected is reliable, valid, and can effectively address the research questions. Here are the main purposes of fMRI experiment design:
1. Hypothesis Testing: To test specific predictions about brain function related to a particular cognitive, sensory, or motor task.
2. Brain Activation Localization: To identify the specific brain regions that are active during the performance of a task or in response to a stimulus.
3. Temporal Dynamics Analysis: To understand the time course of brain activity and how different brain regions interact over time.
4. Cognitive Process Isolation: To isolate the neural activity related to a particular cognitive process by controlling for other confounding variables.
5. Statistical Rigor: To ensure that the results are statistically robust and not due to random chance.
6. Replicability: To design experiments that can be replicated to confirm the findings.

21. fMRI Experiment Designs:
• Blocked designs for state changes, event-related designs for time course estimation, and mixed designs for a combination of both.

Brownian Motion and T1/T2 Values
• The behavior of fat and water molecules affects their T1 and T2 values, which are critical for disease identification.
Here are some key points about Brownian motion:
1. Randomness: The movement of the particles is random, with no discernible pattern or directionality.
2. Cause: It is caused by the thermal energy of the fluid's molecules, which are in constant motion. The collisions between these molecules and the suspended particles result in the erratic movement.
3. Observability: Brownian motion is observable under a microscope, particularly with larger particles or at lower temperatures when the motion is less frantic.
4. Temperature Dependence: The intensity of the Brownian motion increases with the temperature of the fluid. Higher temperatures mean more energetic molecular collisions, leading to more vigorous Brownian motion.
5. Particle Size: Smaller particles exhibit more noticeable Brownian motion because they are more easily influenced by the random impacts of the fluid's molecules.
6. Evidence for Molecular Theory: Brownian motion provided some of the earliest and most convincing evidence for the molecular theory of matter, supporting the idea that matter is composed of atoms and molecules in constant motion.
7. Applications: Understanding Brownian motion is crucial in various fields, including physics, chemistry, and biology. It is particularly important in the study of diffusion processes, the behavior of particles in colloids, and the development of stochastic models in physics and finance.
8. Statistical Nature: Brownian motion is a stochastic or random process and is often modeled using statistical methods. It is a type of random walk, where the particle's path is made up of a series of random steps.
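The random-walk character of Brownian motion can be simulated directly; the sketch below checks the standard result that mean squared displacement grows roughly linearly with the number of steps:

```python
import numpy as np

rng = np.random.default_rng(42)

# Brownian motion as a random walk: each step is an independent Gaussian
# kick, and positions are the cumulative sum of the steps.
n_particles, n_steps = 1000, 500
steps = rng.normal(0.0, 1.0, size=(n_particles, n_steps))
paths = np.cumsum(steps, axis=1)

msd_mid = np.mean(paths[:, 249] ** 2)  # mean squared displacement after 250 steps
msd_end = np.mean(paths[:, 499] ** 2)  # and after 500 steps

print(f"MSD grows ~linearly with time: {msd_mid:.0f} -> {msd_end:.0f}")
```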

Main Tissue Contrast Controls


• Echo Time (TE) and Repetition Time (TR) are key parameters for controlling T1 and T2 weightings.
• T1 weighting: short TR and short TE; T2 weighting: long TR and long TE.
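How TR and TE select the weighting can be sketched with the simplified spin-echo signal equation S = PD·(1 − exp(−TR/T1))·exp(−TE/T2); the tissue T1/T2 values below are rough textbook magnitudes used only for illustration:

```python
import math

def se_signal(pd: float, t1: float, t2: float, tr: float, te: float) -> float:
    """Simplified spin-echo signal: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative relaxation times in ms (roughly fat vs CSF at 1.5 T).
fat = dict(pd=1.0, t1=250.0, t2=70.0)
csf = dict(pd=1.0, t1=3000.0, t2=2000.0)

t1w_fat = se_signal(**fat, tr=500.0, te=15.0)    # short TR, short TE -> T1 weighting
t1w_csf = se_signal(**csf, tr=500.0, te=15.0)
t2w_fat = se_signal(**fat, tr=4000.0, te=100.0)  # long TR, long TE -> T2 weighting
t2w_csf = se_signal(**csf, tr=4000.0, te=100.0)

print(f"T1-weighted: fat {t1w_fat:.2f} vs CSF {t1w_csf:.2f}")  # fat brighter
print(f"T2-weighted: fat {t2w_fat:.2f} vs CSF {t2w_csf:.2f}")  # CSF brighter
```

This reproduces the rule of thumb above: short-TR/short-TE makes short-T1 tissue (fat) bright, while long-TR/long-TE makes long-T2 fluid (CSF) bright.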

1. Blocked Designs:
○ Involve performing a task for a block of time followed by a rest period or a block of a different task.
○ Useful for examining state changes and detecting brain activations associated with a particular state.
2. Event-Related Designs:
○ Tasks or stimuli are presented as discrete events, and the BOLD response to each event is analyzed.
○ Powerful for estimating the time course of brain activity and determining baseline activity.
○ Allows for post hoc trial sorting and is well-suited for tasks with variable timing.
3. Mixed (Block-Event) Designs:
○ Combine elements of both blocked and event-related designs.
○ Can be used to analyze both state-dependent and item-related effects.
○ More complex to analyze but provide a comprehensive picture of brain activity.
4. Randomized Designs:
○ Events or trials are presented in a random order to reduce the predictability of the task and minimize confounds related to temporal autocorrelation.
5. Cognitive Paradigm Designs:
○ Tailored to specific cognitive processes, such as memory, attention, or language, and may incorporate a variety of tasks or stimuli to investigate these processes.
6. Resting-State Designs:
○ Participants are instructed to rest while their brain activity is monitored; used to study the functional connectivity between brain regions in the absence of an explicit task.

MRI Image Formation
• Involves converting frequency-domain data (k-space) to an image using an Inverse Fourier Transform (IFT).
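The k-space-to-image step can be sketched with NumPy's FFT routines: simulate k-space by forward-transforming a toy image, then reconstruct it with the inverse 2-D Fourier transform:

```python
import numpy as np

# Toy 2-D "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# MRI acquires frequency-domain data (k-space); simulate it with a forward FFT.
kspace = np.fft.fft2(image)

# Image reconstruction: inverse 2-D Fourier transform of the k-space data.
reconstructed = np.fft.ifft2(kspace).real
```

In a real scanner the k-space samples come from gradient-encoded signal readouts rather than an FFT of a known image, but the reconstruction step is the same inverse transform.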

22. Applications of fMRI:


• Clinical neuroscience for pre-surgical mapping, diagnosis, and functional recovery.
• Cognitive neuroscience for localizing functions and brain-behavior correlations.
• Animal research to validate fMRI techniques and predict human results.
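In many of these applications the analysis compares the measured signal against a predicted BOLD time course, built by convolving the stimulus timing with a hemodynamic response function (HRF). A minimal sketch; the double-gamma shape (positive peak near 5 s, undershoot near 15 s) follows common software defaults and is an assumption here:

```python
import numpy as np
from math import factorial

def hrf(t):
    """Double-gamma hemodynamic response function (canonical-style shape:
    positive peak near t = 5 s, small undershoot near t = 15 s)."""
    peak = t ** 5 * np.exp(-t) / factorial(5)
    undershoot = t ** 15 * np.exp(-t) / factorial(15)
    return peak - undershoot / 6.0

dt = 0.5                                   # sampling interval in seconds
t = np.arange(0, 30, dt)                   # HRF support
stimulus = np.zeros(120)                   # one 60 s run
stimulus[20:40] = 1.0                      # a 10 s task block starting at 10 s
# Predicted BOLD response: stimulus convolved with the HRF (delayed, smoothed).
bold = np.convolve(stimulus, hrf(t))[: len(stimulus)]
```

The convolution delays and smears the boxcar, which is why the BOLD peak lags the stimulus by several seconds.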

23. EEG-fMRI:
• A multimodal technique recording EEG and fMRI data synchronously to study electrical brain activity and its
correlation with haemodynamic changes.
Hemodynamic activity refers to the various physiological processes related to the circulation of blood and
the delivery of oxygen and nutrients to tissues, as well as the removal of waste products like carbon dioxide.
In the context of the brain and functional MRI (fMRI), hemodynamic activity is particularly important because
it underlies the BOLD (blood-oxygen-level dependent) signal that fMRI measures. Here's a breakdown of the
key components of hemodynamic activity in the brain:
1. Blood Flow: The delivery of blood to the brain is regulated by the cardiovascular system. Blood flow to
a particular brain region can increase or decrease based on the metabolic demands of that region.
2. Oxygenation: Oxygen is carried to the brain by hemoglobin in red blood cells. The oxygenated blood
delivers oxygen for the metabolic needs of the brain tissue.
3. Neurovascular Coupling: This is the relationship between neural activity (electrical activity of neurons) and the hemodynamic response (changes in blood flow and oxygenation). When neurons in a brain region are active, they require more oxygen and nutrients, leading to an increase in blood flow to that area.
4. Hemoglobin States: Hemoglobin can exist in two main states:
○ Oxyhemoglobin (HbO2): When bound to oxygen, it is diamagnetic, meaning it doesn't significantly distort magnetic fields.
○ Deoxyhemoglobin (dHb): Without oxygen, it becomes paramagnetic, which means it can distort local magnetic fields and is the primary source of the BOLD contrast in fMRI.
5. Cerebral Metabolism: The brain's metabolic rate of oxygen (CMRO2) refers to how much oxygen is consumed per unit of time for metabolic processes in the brain.
6. Venous Drainage: After oxygen is delivered and utilized, deoxygenated blood is collected in veins and eventually drained from the brain.
7. Autoregulation: The brain has mechanisms to maintain a relatively constant blood flow, despite changes in blood pressure, through autoregulation.
In fMRI, the BOLD signal is sensitive to changes in the concentration of deoxyhemoglobin. An increase in neural activity in a brain region leads to a local increase in blood flow and oxygen delivery, which exceeds the immediate metabolic demand. This results in a relative increase in oxyhemoglobin and a decrease in deoxyhemoglobin, causing a more uniform magnetic field and an increase in the MRI signal intensity. This change in the MRI signal is used to infer brain activity.
Understanding hemodynamic activity is crucial for interpreting fMRI data because the BOLD signal is an indirect measure of neural activity, primarily reflecting the vascular response to changes in brain metabolism.

MRI Contrast Agents
• Used for more specific imaging, with gadolinium-based chelates being common intravenous agents.
MRI contrast agents are substances that are introduced into the body to enhance the visibility of internal structures in MRI scans. They work by altering the local magnetic environment, which changes the relaxation times (T1, T2, and T2*) of nearby water molecules, resulting in differences in signal intensity on the MRI images. Here are the main functions and benefits of using MRI contrast agents:
1. Improved Visualization: Contrast agents increase the contrast between different types of tissues, making it easier to distinguish between normal and abnormal tissues.
2. Detection of Pathology: They can highlight areas of the body that may not be visible or clearly defined without enhancement, such as tumors, areas of inflammation, or blood vessels with blockages.
3. Characterization of Lesions: By changing the appearance of lesions on MRI scans, contrast agents can help determine if a lesion is benign or malignant, or differentiate between recurrent tumor and scar tissue post-surgery.
4. Vascular Imaging: Contrast agents are used in MR angiography (MRA) to provide detailed images of blood vessels, which can be used to diagnose aneurysms, atherosclerosis, or vascular malformations.
5. Functional MRI (fMRI): Although not typically referred to as "contrast agents," changes in blood oxygenation levels are used to highlight areas of the brain that are active during specific tasks or stimuli in functional MRI.
6. Perfusion Imaging: MRI contrast agents can be used to measure blood flow and blood volume in tissues, which is useful in the evaluation of brain disorders, tumor grading, and assessment of tissue viability.
7. Molecular Imaging: Targeted MRI contrast agents are being developed to bind to specific biological markers, potentially allowing for the imaging of specific cellular or molecular processes.
8. Safety and Relative Ease of Use: Most MRI contrast agents are based on chelates of gadolinium, which are generally considered safe. They are administered intravenously and are usually well-tolerated by patients.
9. Complementary to Other Imaging Modalities: Contrast agents can provide additional information that may not be available through other imaging techniques like X-ray, CT, or non-contrast MRI.
10. Guidance for Biopsy and Interventions: Enhanced MRI images can be used to guide minimally invasive procedures, such as biopsies or the placement of catheters.

24. Types of Imaging:
• CT, US, MRI, Nuclear, and Optical imaging each focus on different aspects of body function and structure, such as anatomy, physiology, metabolism, and molecular processes.

The statement provided highlights various imaging modalities, each with unique capabilities for examining
different aspects of body function and structure. Here are examples of the processes that each imaging type
can visualize:
1. CT (Computed Tomography):
○ Focus: Primarily used for detailed images of the body's anatomy, especially bones, hard tissues,
and some soft tissues.
○ Example Process: CT can create cross-sectional images of the head to visualize the skull, brain



Finite System
1. Description:
○ A finite optical system is one in which the objective lens forms a real image at a specific
distance (finite distance) from the lens itself.
2. Components:
○ The real image formed by the objective lens is then magnified by the eyepiece, similar to a
simple magnifying glass.
3. Light Path:
○ The light rays converge to form a real image in the space between the objective and the
eyepiece.
4. Advantages:
○ Relatively simple design.
○ Allows for a more straightforward optical path.
5. Applications:
○ Older microscope designs.
○ Some modern educational microscopes.

Infinite System (Infinity-Corrected System)
1. Description:
○ An infinite optical system, also known as an infinity-corrected system, is designed so that the objective lens does not form a real image on its own: light from each point of the specimen leaves the objective as a parallel (collimated) bundle, i.e., the image is projected to infinity. A tube lens then focuses this collimated light to form a real intermediate image at its focal plane.
2. Components:
○ The tube lens (or an internal lens within the microscope) forms the real intermediate image at a specific location where the eyepiece or a camera can be placed.
○ The eyepiece then serves to magnify this image for the observer or projects it onto a sensor for imaging.
3. Light Path:
○ The light rays are made to appear as parallel (collimated) after refraction by the objective lens and before they enter the tube lens.
4. Advantages:
○ Allows for more flexibility in the placement of imaging detectors (like cameras) along the optical path.
○ Reduces the impact of manufacturing imperfections on image quality.
○ Accommodates various optical devices (like filters, beam splitters) without significant realignment.
5. Applications:
○ Modern research and professional microscopes.
○ Microscopes where high image quality and compatibility with different imaging modalities are required.
Summary
• Finite Systems are simpler and more traditional, with a real image formed at a specific, finite distance from the objective lens.
• Infinite Systems offer more flexibility and precision, with an optical design in which the objective projects the image to "infinity" as collimated light, facilitating the use of additional optical components and modern imaging devices in the space between the objective and the tube lens.

Anatomy of a Bright-Field Upright Microscope: Components and their functions.

Rayleigh Criterion and Microscope Resolution
• Definition: The smallest distance between two resolvable point light sources.
• Limit: The resolution limit for a light microscope is about 200 nm.
• Calculation: Resolution (d) can be calculated using the formula d = 1.22λ/(NA_objective + NA_condenser), or d = 0.61λ/NA when the condenser NA matches the objective NA.

The Connection
The Rayleigh criterion and the concept of useful total magnification are closely linked:
• The Rayleigh criterion defines the smallest distance between two points that can be resolved, which is a function of the wavelength of light and the numerical aperture of the objective lens.
• The useful total magnification is a guideline to ensure that the microscope is used within its resolving capabilities. It is based on the idea that beyond a certain magnification (around 500× the NA), the microscope cannot resolve any finer details, and further increasing the magnification only results in a larger, but not clearer, image.

10. Wide-field Fluorescence Microscopy
• Resolution and image brightness considerations.
• Utilization of a CCD chip for imaging.

11. Multi-Color Imaging
• Technique for obtaining monochromatic images and merging them in software.
• Limitations on the maximum number of distinguishable fluorophores.
Multi-color imaging, while a powerful tool in microscopy and other imaging techniques, does have several limitations:
1. Spectral Overlap:
○ Different fluorophores used for multi-color imaging may have emission spectra that overlap, making it challenging to distinguish between the colors. This can lead to bleed-through or cross-talk between channels.
2. Limited Number of Distinct Colors:
○ There is a practical limit to the number of different fluorophores that can be used simultaneously. This is due to the limited number of distinct excitation and emission wavelengths available, as well as the availability of suitable filter sets.
3. Photobleaching:
○ Some fluorophores are sensitive to photobleaching, where repeated exposure to light can cause them to lose their fluorescence. This can be a particular issue in multi-color imaging, where multiple rounds of excitation may be required.
4. Phototoxicity:
○ Prolonged exposure to light can also be harmful to live cells or tissues, leading to phototoxicity. This is a concern when acquiring multiple images or using multiple excitation wavelengths.
5. Complex Image Analysis:
○ Analyzing multi-color images can be complex, particularly when there is spectral overlap between channels. Advanced software and algorithms may be required to accurately separate and quantify the different colors.
6. Fluorophore Selection:
○ Finding suitable fluorophores with distinct excitation and emission spectra that do not interfere with each other can be challenging. Additionally, the choice of fluorophores may be limited by factors such as brightness, photostability, and compatibility with the sample.
7. Instrumentation Complexity:
○ Multi-color imaging often requires more complex and expensive instrumentation, such as multi-band emission filters, multiple laser lines for excitation, or specialized detectors.
8. Sample Preparation:
○ Preparing samples for multi-color imaging can be more challenging and time-consuming, particularly when using multiple antibodies or other labeling agents.
9. Quantitative Analysis:
○ Quantitative analysis of multi-color images can be complicated by factors such as variations in fluorophore brightness, photostability, and detection efficiency.
10. Biological Variability:
○ In biological samples, variability in the expression levels or localization of different targets can make it challenging to compare the signals from different channels.
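The Rayleigh resolution formulas quoted in this lecture can be evaluated directly; a quick sketch (the two forms coincide when the condenser NA matches the objective NA):

```python
def rayleigh_resolution(wavelength_nm, na_objective, na_condenser=None):
    """Rayleigh resolution limit: d = 1.22 * wavelength / (NA_obj + NA_cond).
    With a matched condenser (NA_cond = NA_obj) this reduces to the familiar
    d = 0.61 * wavelength / NA."""
    if na_condenser is None:
        na_condenser = na_objective
    return 1.22 * wavelength_nm / (na_objective + na_condenser)

# Green light (550 nm) through a high-NA oil-immersion objective (NA = 1.4):
d = rayleigh_resolution(550, 1.4)   # roughly 240 nm, near the ~200 nm limit
```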

Diffraction-Limited Objects
• Examples: Nuclear pores, transport vesicles, ribosomes, and single molecules.

Nuclear pores: 120 nm
Transport vesicles: 50–100 nm
Ribosomes: 30 nm
Single molecules: 1–2 nm

• By adjusting the size of the aperture, one can control the amount of light entering the optical system, which is essential for achieving a properly exposed image.

10. Optical Train and Conjugate Planes
Optical Train: The path light takes through the microscope.
Conjugate Planes: Illuminating and image-forming planes, and their importance in microscopy.

12. Advantages of Wide-Field Microscopy
• Simplicity and robustness of the design.
○ Simplicity: Wide-field microscopes typically have a straightforward design that is easy to operate and maintain. The basic components, such as the light source, filters, and camera, are often user-friendly and do not require complex alignment or calibration procedures.
○ Robustness: These microscopes are built to be durable and reliable, with components that can withstand regular use. The solid construction and fewer moving parts contribute to their robustness, making them suitable for long-term and repeated use in various environments, from research labs to clinical settings.
• High photon collection efficiency.
○ Objective Lens: The objective lens of a wide-field microscope is designed to collect as much light as possible from the specimen. This is particularly important in fluorescence microscopy, where the emitted light can be quite weak.
○ Numerical Aperture (NA): High-quality objective lenses with a high numerical aperture are used to maximize the light-gathering capacity, which directly translates to better image brightness and contrast.
○ Light Path: The design of the microscope ensures that the light path is optimized for photon collection, with minimal loss of light due to scattering or absorption within the optical components.
• Simultaneous illumination and imaging capabilities.
○ Epi-Fluorescence: In wide-field epi-fluorescence microscopy, the same objective lens is used for both illuminating the sample and collecting the emitted fluorescence. This eliminates the need for additional components or complex setups.
○ Efficiency: The ability to illuminate and image the sample simultaneously makes the process more efficient, reducing the time required for data acquisition and minimizing the potential for sample movement or changes during the imaging process.
○ Real-Time Imaging: This feature also facilitates real-time imaging, where dynamic processes can be observed as they occur, without the need to switch between illumination and detection modes.

13. Practical Tips
• Preventing overheating and reducing infrared light (heat).
• Attenuating all lights at the same rate for uniformity.
• Shutting down lights when the sample is not in use to conserve energy and prolong bulb life.

Convolution and Deconvolution
• Process: Convolution is the process of image formation, while deconvolution can enhance resolution, albeit to a limited extent.
In the context of image processing and microscopy, convolution and deconvolution are techniques used to analyze and improve images. Here's a detailed explanation of each:

Convolution
1. Definition: Convolution is a mathematical operation that combines two functions to produce a third function. In image processing, it is widely used for various purposes, including blurring, sharpening, and edge detection.
2. Process: The convolution of an image with a kernel (a small matrix of numbers) results in a new image. The kernel is slid over the image, and at each position, the kernel's values are multiplied by the corresponding pixel values of the image. The results are summed and placed in the output image at the current position of the kernel.
3. Effects:
○ Blurring: A common kernel used for blurring is the Gaussian kernel,
which smooths the image and reduces detail.
○ Sharpening: A sharpening kernel can enhance edges and increase
the contrast of an image.
○ Filtering: Convolution can be used to apply various filters to an image,
such as low-pass filters that remove high-frequency details (noise) or
high-pass filters that emphasize edges.
4. Point Spread Function (PSF): In microscopy, convolution is used to model
how the microscope's PSF affects the image of a specimen. The image
observed is the convolution of the actual specimen with the PSF.
Deconvolution
1. Definition: Deconvolution is the process of reversing the convolution
operation to recover the original image from its blurred version. It aims to
remove the blur caused by the PSF and improve the image's resolution.
2. Process: Deconvolution involves applying an algorithm that estimates the
original image by reversing the effects of the convolution. This is done by
using the known or estimated PSF and the blurred image.
3. Challenges:
○ Illumination and Noise: Deconvolution can be sensitive to noise and
requires a good estimate of the PSF. Inaccuracies in the PSF or the
presence of noise can lead to artifacts in the deconvolved image.
○ Computationally Intensive: Deconvolution algorithms can be
computationally intensive, especially for high-resolution images.
4. Applications: Deconvolution is used in various fields, including microscopy,
astronomy, and medical imaging, to improve image quality and resolution.
5. Advantages:
○ Resolution Enhancement: Good deconvolution can increase the
spatial resolution of an image, although it is limited by the inherent
resolution of the imaging system.
○ Restoration: Deconvolution can help restore fine details that were lost
due to the blurring effects of the PSF.
Convolution and Deconvolution in Microscopy
In microscopy, convolution models how the image of a specimen is formed by the
microscope, taking into account the blurring effects of the PSF. Deconvolution, on
the other hand, aims to reverse these effects and recover a sharper, higher -
resolution image of the specimen.
By using deconvolution algorithms with an accurate estimate of the microscope's
PSF, it is possible to improve the image quality and reveal finer details that may
not be visible in the original, blurred image. This can be particularly useful in
fluorescence microscopy, where the PSF can limit the resolution of the image.
However, it's important to note that deconvolution is not a magic solution and has
its limitations. It can enhance the image and make it appear sharper, but it cannot
provide information that was not present in the original image due to the limitations
of the imaging system.
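The kernel-sliding operation and the PSF idea can be demonstrated with a naive NumPy implementation (a sliding-window correlation, which equals convolution for the symmetric kernels used here):

```python
import numpy as np

def correlate2d(image, kernel):
    """Naive sliding-window filtering: at every pixel, multiply the kernel by
    the underlying neighborhood and sum (equivalent to convolution when the
    kernel is symmetric, as a Gaussian PSF is)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A point-like specimen (single bright pixel) and a small Gaussian PSF.
specimen = np.zeros((15, 15))
specimen[7, 7] = 1.0
y, x = np.mgrid[-2:3, -2:3]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()

# Image formation: the observed image is the specimen blurred by the PSF,
# so the point source takes on the PSF's shape ("point spread function").
observed = correlate2d(specimen, psf)
```

Deconvolution would attempt the inverse: given `observed` and an estimate of `psf`, recover `specimen`; in practice noise makes this only approximately possible.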

There are typically two sets of conjugate planes of interest:


1. Illuminating Conjugate Planes:
○ The first set consists of planes that are involved in the illumination of the specimen. This
includes:
▪ The plane of the light source (or lamp filament in the case of a microscope).
▪ The plane of the condenser aperture (focus light).
▪ Back focal plane / objective rear focal plane
▪ Iris of the eye
○ The illuminating conjugate planes are aligned to ensure that light is efficiently and evenly
distributed across the specimen.
2. Image-Forming Conjugate Planes:
○ The second set includes planes that are part of the image-forming process. This set typically consists of:
▪ The field diaphragm
▪ The sample plane/specimen plane
▪ The intermediate image plane (if present, this is where an intermediate image is formed before being further magnified by the eyepiece or a camera lens).
▪ The plane of the eyepiece or camera sensor (where the final image is viewed or captured), or the retina.
○ The image-forming conjugate planes are aligned to ensure that the light emanating from the specimen is correctly focused to form a clear image.

Spatial Resolution
The spatial resolution of an image refers to the degree of detail that can be discerned in the image, specifically the ability to distinguish between two closely spaced points or objects as separate entities. It is a critical measure of the quality of an image, particularly in fields that require precise visual analysis, such as microscopy, photography, and satellite imaging.
Here are some key points about spatial resolution:
1. Defining Resolution: Spatial resolution is often defined as the minimum distance between two points that can still be identified as distinct from one another in the image.
2. Related to Pixel Size: In digital imaging, spatial resolution is related to the number of pixels (image elements) and their size. A higher pixel count with smaller
pixels can capture more details and thus provide higher spatial resolution.
3. Influenced by Optics: In optical imaging systems like microscopes or cameras,
the spatial resolution is also influenced by the quality of the optics and the
numerical aperture (NA) of the lens system.
4. Rayleigh Criterion: In microscopy, the Rayleigh criterion is a standard for
determining the resolving power of an optical instrument. It states that two points
are just resolvable when the central maximum of the diffraction pattern of one
point coincides with the first minimum of the diffraction pattern of the other.
5. Wavelength and Numerical Aperture: The resolution limit in an optical system is given by the Rayleigh criterion formula d = 0.61λ/NA, where d is the resolution, λ is the wavelength of light used, and NA is the numerical aperture of the objective lens.
6. Limitations: The spatial resolution is ultimately limited by the physics of light and
the optical system's ability to focus light. For visible light, this limit is on the order
of hundreds of nanometers.
7. Applications: High spatial resolution is crucial in applications where fine details
are important, such as in medical imaging for diagnostics, satellite imagery for
geographic information systems, and in microscopy for cellular and molecular
biology research.
8. Digital Resolution vs. Optical Resolution: It's important to differentiate between
digital resolution (the number of pixels in an image) and optical resolution (the
ability of an optical system to separate fine details). A high digital resolution does
not necessarily imply high optical resolution, and vice versa.
9. Enhancement Techniques: Techniques such as deconvolution can be used to improve the effective spatial resolution of an image by reversing some of the blurring effects introduced by the optical system.

11. Kohler Illumination
Purpose: To provide even illumination of the sample.
Procedure: Steps to set up and achieve Kohler illumination.
1. Focus the specimen.
2. Close the field diaphragm to small.
3. Focus (and center) the condenser to obtain the sharp edge of the field diaphragm.
4. Open the field diaphragm.

12. Inverted Microscopes


Design: Essentially an upright microscope turned upside down: the light source and condenser sit above the stage, while the objectives are below it.
Advantages: Compatibility with live-cell imaging and other advanced techniques.



9. Cost: While the initial investment in digital radiography equipment may be higher, the long-
term costs can be lower due to reduced consumables and the ability to manipulate images
without the need for retakes.
10. Teleradiology: Digital radiography facilitates teleradiology, where images can be sent
electronically to radiologists for interpretation, regardless of geographic location.
In summary, digital radiography offers several advantages over conventional radiography, including
immediate image availability, better image quality and manipulation capabilities, space savings from
electronic storage, and the potential for lower long-term costs. However, it requires a greater initial
investment in equipment and relies on having the necessary digital infrastructure in place.

• X-Ray Film: characteristic curve, optical density, silver-bromide grains.
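Optical density, one of the film properties listed above, is simply the base-10 logarithm of the attenuation; a one-line sketch:

```python
from math import log10

def optical_density(incident, transmitted):
    """Optical density of developed film: OD = log10(I0 / I)."""
    return log10(incident / transmitted)

od = optical_density(100, 1)   # film passing 1% of the light has OD = 2.0
```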


X-ray film, also known as radiographic film, was a crucial component of traditional X-ray imaging
systems. It works on the principle of photographic film, capturing and storing a latent image that is
created by the exposure to X-ray radiation. Here's a detailed explanation of X-ray film:
Composition
1. Base Material: The film typically has a polyester or cellulose acetate base, which provides a
flexible and durable support for the emulsion layer.
2. Emulsion Layer: This is a thin layer of a photosensitive silver compound, usually silver
bromide (AgBr), suspended in a gelatin matrix. The emulsion is applied to one or both sides of
the base material.
3. Protective Layers: The emulsion is often covered with a protective layer to prevent physical
damage and to facilitate handling.
Imaging Process
1. Exposure to X-rays: When the X-ray film is exposed to X-rays, the silver bromide crystals
absorb the radiation energy. The amount of energy absorbed varies depending on the density
of the tissue through which the X-rays pass. Dense tissues like bones absorb more X-rays,
resulting in fewer silver bromide crystals being exposed.
2. Formation of Latent Image: The absorbed energy creates a latent image within the emulsion
layer, which is not visible to the naked eye.
3. Development: The exposed film is processed using a developer solution, which reduces the
exposed silver bromide crystals to metallic silver, forming a visible image. The developer
works by converting the silver ions (Ag⁺) to metallic silver (Ag), which appears as dark areas on the film.
4. Fixing: After development, the film is treated with a fixing agent, usually a thiosulfate or similar compound, which removes the unexposed silver bromide crystals. This step stabilizes the image and prevents further development.
5. Washing and Drying: The film is then washed to remove the fixing agent and any residual chemicals, followed by drying to complete the process.
Characteristics
• Resolution: X-ray film can provide high-resolution images, capturing fine details of the internal structures of the body.
• Dynamic Range: It has a limited dynamic range compared to digital systems, which can make it challenging to visualize different tissue densities in a single image.
• Processing: The film must be physically processed, which requires a darkroom and involves the use of chemicals.
• Storage: The developed film must be stored in a controlled environment to prevent degradation over time.
Digital Radiography and X-ray Film
The advent of digital radiography has largely supplanted the use of X-ray film due to several advantages, including immediate image availability, the ability to manipulate and enhance images after capture, and the elimination of the need for chemical processing. Digital systems also offer better scalability, easier storage and retrieval, and the potential for lower long-term costs.
However, X-ray film still has a place in certain applications, particularly where digital technology is not available or in situations where the high resolution and detail of film are required. It also serves as an important historical reference for the development of medical imaging technology.

9. X-Ray Interaction
• Interaction with the body in a specific energy band.
• Depth information limitation.
• Variance in absorption by different tissues.

• Vitrification: Rapid freezing to preserve cellular structures in a near-native state.
• Cryo-ET: 3D imaging of vitreous (glass-like) frozen samples.
• Advantages: Minimizes artifacts from chemical fixation and dehydration.

Types of cryo-fixation
1. Plunge Freezing or Snap Freezing:
○ The most common method.
○ Involves rapidly immersing the sample into a cryogen, typically liquid ethane or propane, which is cooled by liquid nitrogen or liquid helium.
○ This method is suitable for small samples and can preserve near-native structures with minimal artifacts.
○ Suitable for samples 2–5 µm thick.
○ Cryogen temperature around −189 °C.

2. High-Pressure Freezing (HPF):


○ Uses a high-pressure device to quench the sample.
○ The pressure helps to prevent the formation of large ice crystals, which can damage the sample structure.
○ Ideal for thicker samples (up to 200 µm), such as tissue slices or larger cellular structures.
○ Provides better preservation of cellular ultrastructure compared to chemical fixation and room temperature
methods.
○ The sample is cooled to around −180 °C.

10. Radiographic Densities
• Recognized densities: air, fat, soft tissue, bone, metal.
• Image creation: dark where X-rays pass through, light where X-rays are blocked.

Dehydration and embedding using freeze-substitution

Dehydration and embedding using freeze-substitution is a process used in the preparation of biological samples for transmission electron microscopy (TEM). It is an alternative to conventional chemical fixation and dehydration methods and is particularly useful when trying to preserve the native structure of the sample with minimal artifacts. Here's a breakdown of the process and its steps:
1. Cryo-Fixation (Plunge Freezing or High-Pressure Freezing): The first step in freeze-substitution is the rapid
cryo-fixation of the sample, which involves plunge freezing (dipping the sample into a cryogen like liquid ethane
or propane) or high-pressure freezing (HPF). This step is critical to preserve the sample's structure in a near-
native state by vitrifying the water content.
2. Freeze-Substitution: After cryo-fixation, the sample is moved to a freeze-substitution device where it is kept at a
low temperature (often around -90°C to -120°C) and gradually dehydrated by a solvent, typically an organic
solvent like acetone, ethanol, or methanol. This process replaces the water in the sample with the organic solvent
while minimizing the formation of ice crystals that can distort the cellular structure.
3. Dehydration: The purpose of dehydration is to remove water from the sample, which is essential before
embedding because water interferes with the polymerization of the resin and can cause electron scattering in the
electron microscope.
4. Embedding: Once the sample is dehydrated, it is infiltrated with a resin. The resin is a viscous polymer that
hardens (polymerizes) to form a solid block. The sample is gradually immersed in increasing concentrations of
the resin and solvent mixture, followed by complete immersion in the resin.
5. Polymerization: The resin is then polymerized to harden it, which can be done using UV light or by heating the
sample to a specific temperature (e.g., 50-60°C for several hours).
6. Sectioning: After polymerization, the embedded sample is sectioned into thin slices (typically 70-100 nm for
TEM) using an ultramicrotome. These thin sections are then placed on TEM grids for imaging.
The temperature during freeze-substitution is lower than room temperature but higher than the temperature of the
cryogen used in plunge freezing. The reason for this temperature difference is to facilitate the substitution of water with 11. Image Noise in X-Ray
the solvent while still preventing the formation of large ice crystals that could damage the sample's ultrastructure. The • Sources: fluctuations in absorbed photons, energy, and silver halide grains.
solvent used in freeze-substitution does not vitrify but rather slowly dehydrates the sample, which is a controlled Quantum mottle and random darkening are two types of noise that can affect the quality of images
process that helps maintain the cellular architecture. produced by X-ray radiography and other imaging techniques that rely on the detection of photons
In summary, freeze-substitution is a method that comes after cryo-fixation (plunge freezing or HPF) and is used to or other quanta.
dehydrate and embed the sample for TEM analysis. The temperature used in freeze-substitution is carefully chosen to
ensure the preservation of the sample's native structure while transitioning from the vitreous state to a resin-embedded
Quantum Mottle
state suitable for sectioning and imaging. 1. Definition: Quantum mottle, also known as quantum noise or photon noise, is a fundamental
type of noise that arises from the statistical variation in the number of photons or other quanta
(such as electrons) that are detected or absorbed during the imaging process.
2. Cause: This variation is inherent to the quantum nature of light and other forms of radiation,
where the number of quanta arriving at a detector in a given time period follows a Poisson
distribution. This means that even if the source of radiation is constant, the number of quanta
detected will still vary due to the probabilistic nature of quantum events.
3. Effect on Image: Quantum mottle manifests as a grainy or speckled appearance in the
image, which can reduce the contrast and visibility of small objects or fine details. The level of
quantum mottle is directly related to the intensity of the radiation; higher intensity radiation will
generally produce less mottle due to a larger number of quanta being detected.
4. Reduction: Quantum mottle can be reduced by increasing the exposure time or the intensity
of the radiation, which increases the number of quanta detected and thus averages out the
statistical fluctuations. However, this must be balanced against the potential risks of increased
radiation exposure to the patient.
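The 1/√N behaviour of quantum mottle described above can be checked with a small simulation (a sketch using NumPy's Poisson generator; the photon counts of 100 and 400 per pixel are arbitrary illustrative values, not real dose figures):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(mean_photons, n_pixels=100_000):
    """Simulate per-pixel photon counts and return sigma/mean (relative mottle)."""
    counts = rng.poisson(mean_photons, n_pixels)
    return counts.std() / counts.mean()

# For a Poisson process sigma = sqrt(N), so relative noise falls as 1/sqrt(N):
# quadrupling the detected photons roughly halves the mottle.
low_dose = relative_noise(100)    # about 0.10
high_dose = relative_noise(400)   # about 0.05
assert 1.8 < low_dose / high_dose < 2.2
```

Because the standard deviation grows only as the square root of the count, mottle reduction always trades against the extra radiation dose needed to collect more quanta.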

Chemical fixation vs cryo-fixation


• Membrane shrinkage in chemical fixation

Negative stain EM vs cryo EM

Random Darkening
1. Definition: Random darkening, also known as film fogging or developer fog, refers to the
appearance of a uniform, undesired darkening of the film that is not related to the exposure to
the X-ray radiation.
2. Cause: This type of noise can be caused by various factors, including:
○ Chemical reactions within the film emulsion that occur even in the absence of radiation,
Negative stain electron microscopy (negative stain EM) is a technique used to visualize biological samples such as such as those caused by heat or humidity.
macromolecules, macromolecular complexes, and viruses by enhancing their contrast in the electron microscope. The ○ Mishandling of the film, which can lead to unwanted reactions with the emulsion.
term "negative stain" refers to the method of applying a heavy metal salt, which preferentially stains the background ○ Defects or contamination in the film or the processing chemicals.
rather than the biological sample itself, creating a dark background with lighter regions where the sample is located. 3. Effect on Image: Random darkening can reduce the overall contrast of the image and make it
This results in the sample appearing as a "negative" or inverse image against the darkly stained background. more difficult to distinguish between different structures. It can also introduce a bias towards
Here are the key aspects of negative stain EM: darker tones, which can affect the interpretation of the image.
1. Sample Preparation: The biological sample is applied to a supportive surface, typically a carbon-coated copper 4. Reduction: To reduce random darkening, it is important to:
grid. ○ Store the film under proper conditions (e.g., cool, dry, and away from light).
2. Staining Process: After the sample is adhered to the grid, the excess liquid is wicked away, and a drop of a ○ Handle the film carefully to avoid physical damage or contamination.
heavy metal salt solution (the negative stain) is applied. Commonly used stains include uranyl acetate, uranyl ○ Use fresh, properly mixed processing chemicals and follow the recommended
formate, or ammonium molybdate. processing times and temperatures.
3. Drying: The grid is then tilted or blotted to remove the excess stain, leaving the sample surrounded by a layer of In summary, quantum mottle and random darkening are two distinct sources of noise that can affect
the heavy metal salt. the quality of X-ray images. Quantum mottle is a fundamental limitation of photon -based imaging
4. Drying and Vitrification: The grid is allowed to dry, and the sample is vitrified if it is in a thin layer of water. This due to the quantum nature of light, while random darkening is an undesired effect that can be
can help to maintain the native conformation of the sample. minimized through careful handling and processing of the film. Both types of noise can reduce the
5. Imaging: The stained, dried sample is then examined in a transmission electron microscope. The heavy metal contrast and visibility of the image, making it more challenging to diagnose conditions based on the
salt scatters electrons more strongly than the biological material, creating a dark background that makes the radiographic image.
biological sample visible as a lighter region.
6. Advantages: Negative staining is a relatively quick and straightforward technique that does not require cryogenic 12. Image Quality Factors
temperatures or high-vacuum conditions. It is also useful for studying samples that are difficult to crystallize, such • Source: energy of photons, collimation.
as large macromolecular complexes or viruses. • Object: attenuation coefficient, source-object geometry.
7. Limitations: The resolution of negative stain EM is typically limited to about 15-20 Å (1.5-2 nm) due to the • Detector: object-detector geometry, efficiency.
staining process and the possibility of the sample shrinking away from the stain. Additionally, the technique can
sometimes introduce artifacts, such as distortion or flattening of the sample, and it does not provide 3D structural 13. X-Ray Radiography Clinical Applications
information. • Chest, skeletal, abdomen, dental assessments.
8. Applications: Negative stain EM is widely used for the initial characterization of macromolecular structures, for • Advantages: inexpensive, low radiation, good contrast.
studying the morphology of viruses, and for assessing the homogeneity and stability of biological samples. • Disadvantages: poor soft tissue differentiation, 2D only.
Negative stain EM is a valuable tool in structural biology, providing insights into the shape, size, and surface features
of macromolecules and complexes that are not readily obtained by other methods. It can also be a complementary 14. X-ray Computed Tomography (CT)
technique to cryo-electron microscopy, especially when studying samples that may not be suitable for crystallization or • Cross-sectional images from multiple X-ray images taken at different angles.
when high-resolution structural information is not required.
• History: Radon transform, Allan Cormack, Godfrey Hounsfield.
Differences between negative stain and cryo EM • From 128x128 matrix to 512x512matrix
Negative stain electron microscopy (negative stain EM) and cryo-electron microscopy (cryo-EM) are both techniques CT Scanning and Electromagnetic Energy
used to visualize biological samples at high resolution, but they differ in several key aspects, including sample 1. X-ray Source: In a CT scan, an X-ray source emits a beam of electromagnetic energy, which
preparation, the state of the sample, and the resulting image quality. Here's a comparison of the two methods: consists of X-ray photons.
Negative Stain EM 2. Rotation: The X-ray source and an array of detectors rotate around the patient, capturing
1. Sample State: The sample is dried onto a support grid, often after adsorption to a carbon film. images from multiple angles. This rotation ensures that the X-rays pass through the patient
2. Staining: Uses heavy metal salts (like uranyl acetate or ammonium molybdate) that stain the background, from various directions.
leaving the biological sample appearing lighter by contrast. 3. Attenuation: As the X-rays pass through the patient's body, they are attenuated (lessened in
3. Sample Preparation: Relatively straightforward; the sample is applied to a grid, excess liquid is wicked away, intensity) by different amounts depending on the density of the tissues they encounter. Bones,
and the negative stain is added. for example, absorb more X-rays than soft tissues or air.
4. Hydration: The sample is dehydrated, which can cause conformational changes or collapse of the structure. 4. Detection: The detectors on the opposite side of the patient from the X-ray source measure
the attenuated X-ray energy. Because the X-rays are coming from different angles, the
5. Resolution: Generally limited to about 15-20 Å (1.5-2 nm) due to the staining process and potential sample
detectors can gather a range of data about the internal structures.
distortion.
5. Data Collection: The intensity of the X-ray beam is measured at multiple points around the
6. Artifacts: Can introduce artifacts such as flattening or shrinkage of the sample.
body, creating a set of data for each angular position.
7. Imaging: The sample is imaged in a non-cryogenic, or room temperature, state. 6. Reconstruction: The collected data is then used by a computer to reconstruct a 2D cross-
8. Applications: Suited for initial structural characterization, morphology studies, and assessing sample sectional image (slice) of the patient's anatomy. This process involves complex mathematical
homogeneity. algorithms, such as the filtered back projection or more advanced iterative reconstruction
Cryo-EM techniques.
1. Sample State: The sample is maintained in a near-native, hydrated state by rapid freezing (vitrification) to form a 7. Series of Slices: By repeating the process while incrementally moving the patient through the
glass-like ice layer. scanner, a series of 2D images can be obtained. These can be combined to create a 3D
2. Staining: No staining is used; the sample is imaged as-is within the ice. representation of the area of interest.
3. Sample Preparation: More complex, involving plunge freezing or high-pressure freezing to prevent ice crystal 8. Contrast Agents: In some cases, a contrast agent (such as iodine for vascular studies or
formation, which could damage the sample. barium for gastrointestinal studies) may be introduced into the patient's body to enhance the
differences between tissues and improve the visibility of specific structures.
4. Hydration: The sample remains hydrated, which helps to preserve its native conformation.
5. Resolution: Can achieve near-atomic resolution (better than 2 Å) for well-prepared, homogeneous samples. Benefits of Using Electromagnetic Energy from All Angles
6. Artifacts: Fewer artifacts related to staining; however, radiation damage and the "missing wedge" problem can • Detailed Imaging: The use of X-rays from multiple angles allows for the creation of detailed,
be challenges. high-resolution images that can reveal subtle differences in tissue density.
7. Imaging: The sample is imaged under cryogenic conditions to maintain the vitreous state. • Cross-Sectional Views: CT scans provide cross-sectional images, which are particularly
8. Applications: Ideal for determining high-resolution 3D structures of macromolecules, complexes, and even large useful for visualizing internal structures that may be obscured in traditional planar X-ray
cellular structures. images.
• 3D Reconstruction: The series of 2D images can be used to generate 3D models of the
body's internal structures, which can be valuable for surgical planning and other applications.
Negative stain Vitrified sample
• Diagnostic Accuracy: The comprehensive data obtained from multiple angles can lead to
High contrast image Low contrast image more accurate diagnoses and better patient outcomes.
No special temperature control 85 K (−188 °C) storage
15. CT Scan Operation
No radiation damage High radiation damage • 2D cross-sectional images.
Particle distorted Particle undistorted • Different body parts absorb beams differently.
Stain shell Image is of the actual particle • Contrast material enhances images.
Low RES High RES 16. Scanner Components
Good for initial sample screening Best choice for reconstruction • Gantry, detectors, array processor, computer, storage, console, scan controller.
Dried sample Frozen hydrated sample 1. Gantry: The gantry is the large, donut-shaped structure that houses the X-ray tube and the
detectors. It rotates around the patient to capture images from different angles.
High signal to noise Low signal to noise
2. Detectors: These are the devices within the gantry that detect the X-ray energy that has
passed through the patient. They convert the X-ray energy into electrical signals, which are
Single Particle Analysis (SPA) and Electron Tomography (ET): then processed into images.
• SPA: Used to determine 3D structures of non-crystalline specimens like proteins and viruses. 3. Array Processor: The array processor is a specialized hardware component that performs
• ET: Generates 3D structures of thicker cellular sections or isolated macromolecular complexes. the complex mathematical operations required to reconstruct the 2D image slices from the
data collected by the detectors.
4. Computer: The computer system manages the overall operation of the CT scanner, including
controlling the X-ray source, managing the data acquisition, and interfacing with the user.
5. Storage: This refers to the storage devices where the raw data and reconstructed images are
temporarily or permanently stored. This can include both local storage within the scanner and
networked storage systems.
6. Console: The console is the operator's interface, typically a separate workstation where the
radiographer or technician controls the scanner settings, initiates scans, and reviews the
acquired images.
7. Scan Controller: The scan controller is the component that manages the timing and
coordination of the scan, including the rotation of the gantry, the emission of the X-ray beam,
and the synchronization with the detector array.

Quick Notes Page 11


The statement provided highlights various imaging modalities, each with unique capabilities for examining
different aspects of body function and structure. Here are examples of the processes that each imaging type
can visualize:
1. CT (Computed Tomography):
○ Focus: Primarily used for detailed images of the body's anatomy, especially bones, hard tissues,
and some soft tissues.
○ Example Process: CT can create cross-sectional images of the head to visualize the skull, brain
structures, and detect fractures or tumors.
2. US (Ultrasound):
○ Focus: Utilizes sound waves to capture images of soft tissues, blood flow, and organs in real time.
○ Example Process: Obstetric ultrasound to monitor fetal development and assess the placenta
and amniotic fluid.
3. MRI (Magnetic Resonance Imaging):
○ Focus: Provides detailed images of soft tissues, including the brain, muscles, heart, and
cancerous tumors.
○ Example Process: MRI can be used to detect and characterize lesions in the brain, assess joint
injuries, and evaluate the extent of a tumor before surgery.
4. Nuclear (Positron Emission Tomography - PET, and Single-Photon Emission Computed
Tomography - SPECT):
○ Focus: Involves the use of small amounts of radioactive material to evaluate organ function and
metabolism.
○ Example Process: PET scans can detect areas of the brain with increased metabolic activity,
which can be useful in diagnosing and tracking conditions like Alzheimer's disease or epilepsy.
5. Optical Imaging:
○ Focus: Uses light to image biological tissues, often at or below the surface, and can be used to
study molecular and cellular processes.
○ Example Process: Fluorescence imaging can be used to track the movement of fluorescently
labeled cells or molecules in real-time during biological experiments.

17. Tomographic Image

• 2D CT image corresponds to a 3D section.


• Pixel-voxel correspondence.
• Average x-ray attenuation properties.
• Acquisition > alignment > reconstruction

18. Image Display - Pixels and voxels


• Rays, projections, parallel vs. fan beam geometry.
In the context of CT (Computed Tomography) imaging, the terms "ray," "projection," and the two
geometries mentioned—parallel beam and fan beam—are fundamental concepts that describe how data is
acquired to create a 3D image of the patient's internal structures. Here's a breakdown of each term:
Ray
A "ray" refers to a single transmission measurement through the patient that is made by a single
detector at a given moment in time. In CT imaging, an X-ray beam is emitted by the X-ray tube and
passes through the patient. The detector on the opposite side of the patient measures the
attenuated (lessened) X-ray beam, which is now a single "ray" of data. This ray represents the
cumulative attenuation of the X-ray beam as it passed through the patient.
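The value carried by a single ray can be made concrete with the Beer-Lambert law, I = I₀·exp(−Σ μᵢdᵢ): taking −ln(I/I₀) recovers the line integral of attenuation coefficients that reconstruction algorithms work with. A minimal sketch (the μ values are illustrative, not tabulated tissue coefficients):

```python
import math

def transmitted(i0, segments):
    """Beer-Lambert law. segments is a list of (mu_per_cm, thickness_cm) pairs."""
    return i0 * math.exp(-sum(mu * d for mu, d in segments))

# A ray crossing 10 cm of soft tissue (mu ~ 0.2/cm) and 2 cm of bone (mu ~ 0.5/cm):
I = transmitted(1000.0, [(0.2, 10.0), (0.5, 2.0)])
ray_value = -math.log(I / 1000.0)   # the projection value recorded for this ray
assert abs(ray_value - 3.0) < 1e-9  # equals 0.2*10 + 0.5*2, the line integral of mu
```

Note that the detector measures intensity, but it is this logarithm, the summed attenuation along the path, that back projection spreads back across the image.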
Projection
A "projection" (also known as a "view") is a collection of rays that pass through the patient at the
same orientation. By acquiring multiple rays or projections around a single point (or a series of
• A heterogeneous sample in the context of scientific research, particularly in fields like structural biology, points along the patient's body), the CT scanner can gather a comprehensive set of data that
biochemistry, and materials science, refers to a sample that consists of multiple different components or phases that represents a "slice" through the patient's body.
are not uniformly distributed. These components can vary in their chemical composition, physical properties, or
structural characteristics. Projection Geometries
1. Parallel Beam Geometry:
○ In parallel beam geometry, all the rays in a projection are parallel to one another. This is
the simplest form of beam geometry and is often used in theoretical models because it
allows for direct inversion of the Radon transform to reconstruct the image.
○ The X-ray source emits a narrow, fan-shaped beam that is collimated (narrowed) to
produce a thin slice of X-rays that are parallel.
○ The detectors are aligned in a row opposite the X-ray source, and each detector
element corresponds to a single ray.
2. Fan Beam Geometry:
○ In fan beam geometry, the rays at a given projection angle diverge, creating a fan-
shaped beam of X-rays. This is the most common geometry used in clinical CT
scanners.
○ The X-ray source and the detector array are positioned on opposite sides of the patient,
and both rotate around the patient. As they rotate, the X-ray source emits a broad beam
of X-rays that diverge to cover the entire detector array.
○ The detectors are arranged in an arc or a line, and each detector element measures a
ray that corresponds to a different angle of the fan.
Image Reconstruction
Regardless of the geometry used, the data collected from the rays or projections is used to
reconstruct a 2D image slice of the patient's body. This is typically done using a mathematical
process called back projection, where the data from each ray is "back projected" onto a 2D image
matrix to build up the final image. Modern CT scanners use more advanced algorithms, such as
filtered back projection or iterative reconstruction techniques, to improve image quality and reduce
artifacts.

*A single CT image may involve approximately 800 rays in each of 1,000 different projection angles.

• Back projection reconstruction.


Back projection is a fundamental technique used in computed tomography (CT) to reconstruct a
two-dimensional (2D) image from a series of projections or views obtained at different angles
around the patient. It's a process that essentially reverses the data acquisition steps to build up an
The missing wedge problem image from the projection data. Here's a step-by-step explanation of how back projection works:
1. Data Acquisition: The CT scanner rotates around the patient, and at each angular position,
an X-ray beam passes through the patient, and the transmitted radiation is measured by
detectors. This process is repeated at multiple angles to create a set of projections.
2. Projection Data: Each projection represents the sum of the X-ray attenuation coefficients (μ)
along a path through the patient. The attenuation coefficient is a measure of how much the X-
rays are absorbed or scattered by the tissues.
3. Back Projection: The projection data is then "back projected" onto a 2D image matrix, which
represents a cross-sectional slice of the patient. This means that each value from the
projection data is spread evenly back along the path the X-rays took.
4. Building the Image: As the back projection process is repeated for each angular position, the
image matrix accumulates data. Areas where the X-ray beam was attenuated more (e.g., by
dense bones) will have higher values because more data is back projected onto those areas.
Conversely, areas where the beam was attenuated less (e.g., by soft tissues) will have lower
values.
5. Resulting Image: After all the projections have been back projected, the image matrix
represents a 2D image of the patient's slice. The image shows the average X-ray attenuation
properties of the tissues in the corresponding voxels (volume elements).
Limitations of Back Projection
• Blur: One of the main limitations of simple back projection is that it tends to produce images
that are blurred, especially near the edges of structures. This is because the back projection
process spreads the data evenly along the path, which can cause neighboring structures to
appear less distinct.
• Streak Artifacts: Another issue is that back projection can create streak artifacts, particularly
from areas of high attenuation, such as bones or metal implants. These streaks extend from
the high-attenuation areas through the rest of the image.
Filtered Back Projection (FBP)
In 3D electron tomography, tilting around a single axis can introduce certain problems, one of which is the "missing wedge" To overcome these limitations, a more advanced technique called filtered back projection is
issue. This occurs because the sample cannot be tilted a full 180 degrees due to physical constraints, resulting in a lack commonly used in clinical CT scanners. FBP applies a filter to the projection data before back
of data from certain angles. This missing information leads to anisotropic resolution, where the resolution is not projection to eliminate or reduce the blurring and streak artifacts. The filter modifies the high -
uniform in all directions. Specifically, the resolution along the tilt axis is better than the resolution perpendicular to it, frequency components of the data, which are responsible for the streaking.
In summary, back projection is a basic method for reconstructing images from projection data in CT
and the areas close to the edges of the structure that are perpendicular to the electron beam direction at 0° specimen tilt
scanning. While it has limitations that can affect image quality, it forms the conceptual foundation for
can appear substantially blurred.
more sophisticated reconstruction techniques like filtered back projection.
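The back-projection procedure described above can be sketched end to end with a toy phantom (a simplified demo using nearest-neighbour rotation and unfiltered smearing; real scanners use interpolation and a reconstruction filter):

```python
import numpy as np

def project(img, theta):
    """Parallel-beam projection at angle theta: rotate the image, sum down the columns."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices(img.shape)
    x = np.cos(theta) * (xs - c) - np.sin(theta) * (ys - c) + c
    y = np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.rint(x).astype(int)
    yi = np.rint(y).astype(int)
    inside = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    rot = np.where(inside, img[np.clip(yi, 0, n - 1), np.clip(xi, 0, n - 1)], 0.0)
    return rot.sum(axis=0)          # one value per detector bin

def backproject(sinogram, thetas, n):
    """Smear each projection evenly back along its rays and accumulate."""
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    for view, theta in zip(sinogram, thetas):
        # detector bin each pixel projects into at this view angle
        t = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
        ti = np.clip(np.rint(t).astype(int), 0, n - 1)
        recon += view[ti]
    return recon / len(thetas)

n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0         # a dense square "organ"

thetas = np.linspace(0, np.pi, 90, endpoint=False)
sinogram = [project(phantom, t) for t in thetas]
recon = backproject(sinogram, thetas, n)

# The reconstruction is brightest where the phantom sits, but blurred at the
# edges, which is exactly the limitation that the FBP filter addresses.
assert recon[32, 32] > recon[5, 5]
```

Running this shows the characteristic halo of unfiltered back projection: every view smears some signal outside the object, so edges wash out until a high-pass filter is applied to the projections first.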
Moreover, in single-axis tomography, where the specimen is tilted only around one axis (usually the y-axis), the
resolution along the y-axis is better than that along the x-axis, and the resolution along the z-axis (the axis of electron
beam propagation) is the worst. This can result in elongation of structures along the beam direction and a decrease in
resolution for features that are not aligned with the tilt axis.

Recent advancements in reconstruction algorithms have been developed to address these issues, aiming to reduce the
number of projection images needed to attain a certain resolution without violating the Nyquist criterion, which is a
fundamental principle in sampling theory that dictates the minimum rate at which a signal can be sampled without
introducing errors.
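For tomography this sampling requirement is often quoted as the Crowther criterion, m = πD/d: the minimum number of evenly spaced projections m needed to reconstruct an object of diameter D at target resolution d. A quick sketch (the 100 nm / 4 nm figures are illustrative):

```python
import math

def crowther_projections(diameter, resolution):
    """Crowther criterion m = pi * D / d (both arguments in the same length units)."""
    return math.ceil(math.pi * diameter / resolution)

# e.g. a 100 nm thick specimen targeted at 4 nm resolution:
assert crowther_projections(100, 4) == 79   # pi * 100 / 4 = 78.5, so 79 views
```

Algorithms that exploit prior knowledge (sparsity, sub-tomogram averaging) aim to get acceptable reconstructions from fewer views than this bound suggests.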

The "missing wedge" problem is a limitation encountered in 3D electron tomography, a technique used to obtain
three-dimensional (3D) structural information from biological samples. This issue arises due to the physical 19. CT Generations
constraints of the sample holder and the electron microscope itself, which prevent the sample from being tilted • 1st to 7th generation advancements: detector numbers, scan times, ring artifacts, helical scanning
through a full 180-degree range.
Here are the key points about the missing wedge problem: 1st Generation: Rotate/Translate (Pencil Beam)
1. Tilt Limitation: In electron tomography, a series of 2D images are acquired at different tilt angles to • Features: These early scanners had a single X-ray detector and used a pencil beam (a
reconstruct a 3D image. However, the sample can only be tilted to a certain maximum angle, typically up to narrow, focused beam) that was moved linearly to collect data across a 24 cm field of view
±60 to 70 degrees, due to the physical limitations of the microscope and the holder. (FOV). The scanner would rotate and then translate (move) the beam to cover a full 360
2. Data Inaccessibility: Because the sample cannot be tilted a full 180 degrees, there is a range of angles degrees, obtaining 180 projections at 1-degree intervals.
from which the electron microscope cannot collect data. This results in a "missing wedge" of data in the 3D • Time: It took about 4.5 minutes per scan, with an additional 1.5 minutes required for image
data set. reconstruction.
3. Anisotropic Resolution: The missing wedge leads to anisotropic resolution, where the resolution is not 2nd Generation: Rotate/Translate (Narrow Fan Beam)
uniform in all directions. The resolution is typically better along the tilt axis and worsens as it moves • Improvements: Introduced a linear array of detectors (around 30), which allowed for more
perpendicular to the tilt axis. data to be collected per rotation, improving image quality.
4. Impact on 3D Reconstruction: The missing wedge can cause structures to appear elongated or smeared • Data Collection: The system collected data for 600 rays times 540 views, significantly more
along the beam direction (z-axis), and it can also lead to a loss of information, particularly for structures than the first generation.
near the edges of the sample.
• Speed: The scan time was reduced to about 18 seconds per slice.
5. Sub-Tomogram Averaging (STA): One approach to mitigate the missing wedge problem is through STA,
where similar structures from multiple tomograms are averaged to fill in the missing information. 3rd Generation: Rotate/Rotate (Wide Fan Beam)
• Design: Featured a substantial increase in the number of detectors (to more than 800) and a
wider fan beam that covered the entire patient.
• Mechanics: The X-ray tube and detector array were mechanically linked and rotated together
around the patient, eliminating the need for the scanner to translate the X-ray source.
• Speed: Newer systems could achieve scan times of half a second or less.
4th Generation: Rotate/Stationary
• Innovation: Designed to overcome the issue of ring artifacts that were common in 3rd
generation scanners.
• Detectors: Used a stationary ring of detectors (around 4,800) that did not move with the
gantry.
5th Generation: Stationary/Stationary (Electron Beam CT)
• Technology: No conventional rotating X-ray tube; instead, it used an electron beam that was
steered around a large arc of tungsten encircling the patient, with the detectors on the
opposite side.
• Speed: Capable of extremely fast scan times, around 50 milliseconds, which allowed for
dynamic imaging, such as the beating heart.
6th Generation: Helical (Spiral)
6. Dual-Axis Tomography: Another strategy to overcome the missing wedge is to perform dual -axis or multi- • Innovation: Scanners acquired data while the patient table was continuously moving,
tilt tomography, where the sample is imaged on two orthogonal axes, providing more complete data allowing for more continuous and faster scanning.
coverage. • Advantages: Reduced the need for patient breath-holding, decreased the amount of contrast
agent required, and increased patient throughput.
• Scan Time: In some cases, the entire scan could be completed within a single breath-hold.
7th Generation: Multiple Detector Array (Wide-Detector or
Volumetric CT)
• Detectors: Utilized an increased number of detector rows, allowing for wider coverage of the
patient's anatomy with each rotation and the ability to capture a volume of data instead of just
a single slice.
• Applications: Enabled the use of adaptive statistical iterative reconstruction (ASIR)
techniques, which can reduce radiation dose while maintaining image quality.

20. CT Windowing and Contrast Agents


• Use of iodine and barium-based agents to enhance images like small vessels.
• Risks: radiation damage, allergic reactions, nephropathy (kidney-related).
CT windowing, also known as windowing or window level adjustment, is a technique used in
computed tomography (CT) to optimize the visualization of different tissues within the body. It
involves adjusting the range of Hounsfield units (HU) that are displayed as visible on the image.
Hounsfield units are a measure of the attenuation of X-rays by tissues, with each type of tissue
having a characteristic HU value. For example, air is typically around -1000 HU, water is around 0
HU, and bone can be +1000 HU or more.
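Hounsfield units are defined relative to water, HU = 1000 × (μ − μ_water)/μ_water. A minimal sketch (the μ_water value of 0.19 cm⁻¹ is an illustrative linear attenuation coefficient, not a calibrated constant):

```python
def hounsfield(mu, mu_water=0.19):
    """HU = 1000 * (mu - mu_water) / mu_water, with mu in the same units as mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

assert hounsfield(0.19) == 0.0             # water -> 0 HU by definition
assert round(hounsfield(0.0)) == -1000     # air (mu ~ 0) -> about -1000 HU
assert round(hounsfield(0.38)) == 1000     # mu twice that of water -> about +1000 HU
```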

7. Resolution Improvement: Advances in reconstruction algorithms aim to reduce the impact of the missing
wedge by improving the estimation of missing data and enhancing the overall resolution of the 3D
reconstruction.
8. Sample Thickness: The missing wedge problem is more pronounced in thicker samples, where the
increased scattering and absorption of electrons exacerbate the effects of the missing data.
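The size of the missing wedge follows directly from the maximum tilt angle: a single-axis series tilted to ±α samples 2α out of the required 180°, leaving a fraction (90° − α)/90° of the Fourier plane unmeasured. A quick estimate:

```python
def missing_wedge_fraction(max_tilt_deg):
    """Fraction of the Fourier plane left unsampled by a +/- max_tilt single-axis series."""
    return (90.0 - max_tilt_deg) / 90.0

# A typical +/-60 degree series leaves two 30-degree wedges: one third of the data missing.
assert abs(missing_wedge_fraction(60) - 1/3) < 1e-9
assert missing_wedge_fraction(90) == 0.0   # a full +/-90 tilt would have no missing wedge
```

This is why pushing the holder from ±60° to ±70° matters: the unsampled fraction drops from 1/3 to 2/9, directly reducing the elongation artefacts along the beam direction.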
Here's how CT windowing works:
Cryo FIB-milling (if the target is deep inside the cell) 1. Window Width: This is the range of Hounsfield units that the display is set to show. For
instance, a window width of 2000 HU would display values from -1000 HU to +1000 HU.
2. Window Level: This is the center point of the window width. Adjusting the window level shifts
the range of HU values that are displayed as a mid-gray. For example, if the window level is
set to 40 HU with a window width of 2000 HU, the display will show values from -960 HU to +
1040 HU.
3. Visualizing Different Tissues: By adjusting the window width and level, radiologists can
emphasize the contrast between different types of tissues. For example:
○ A wide window width and a low window level (around -600 HU) is used for lung imaging
(where the contrast between air and lung tissue is important).
○ A narrower window width and a level set to around 40 HU is used for mediastinum or brain
imaging (to show more detail in soft tissues).
4. Applications: CT windowing is crucial for interpreting CT scans, as it allows radiologists to
focus on specific tissues or structures within the body. It helps in diagnosing various
conditions, assessing the extent of injuries, and guiding treatment decisions.
5. Interactive Windowing: Modern CT viewing software allows radiologists to interactively
adjust the window width and level to get the best view of the anatomy or pathology in
question.
6. Dual or Multi-Windowing: Sometimes, two or more windows are displayed simultaneously,
each with different settings, to compare different tissues or to show both bones and soft
tissues effectively.
In summary, CT windowing is a vital tool in radiology that enhances the interpretability of CT images
by allowing radiologists to adjust the contrast and brightness to better visualize different tissues
within the body.

21. Dose and Adverse Effects


• Radiation dose comparison: a CT scan delivers a substantially higher dose than a single plain X-ray.
• Mild to severe reactions from contrast agents.

22. X-Ray vs. CT


• Disadvantages of CT: lower spatial resolution than plain-film X-ray and higher radiation doses.
• Advantages of CT: no film quality limit, more sensitive to soft tissue, 3D model reconstruction, faster
scanning.

Key Techniques and Considerations:


• Sample Preparation: Vitrification for native state preservation, chemical fixation for better contrast.
• Labeling Methods: Immuno-gold, HRP, APEX/APEX2, miniSOG for protein localization.
• 3D Imaging: Serial section TEM, FIB-SEM, SBF-SEM, ATLUM-SEM for volume imaging.
• Limitations and Solutions: Missing wedge problem in tomography, anisotropic resolution, and the use of sub-
tomogram averaging (STA) to improve structural details.
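As a rough illustration of why sub-tomogram averaging (STA) improves structural detail: averaging many pre-aligned noisy copies of the same structure suppresses independent noise while reinforcing the common signal. A toy sketch (real STA works on 3D subvolumes and also iteratively refines alignments and orientations, neither of which is modeled here):

```python
# Toy sketch of the averaging step in sub-tomogram averaging:
# the element-wise mean of many noisy, pre-aligned copies approaches the true signal.
import random

def average_subtomograms(subvolumes):
    """Element-wise mean of equally sized subvolumes (flattened to lists here)."""
    n = len(subvolumes)
    return [sum(vals) / n for vals in zip(*subvolumes)]

# Demo: one 'true' signal plus independent Gaussian noise per copy.
signal = [0.0, 1.0, 0.0, 1.0]
copies = [[s + random.gauss(0, 0.5) for s in signal] for _ in range(1000)]
avg = average_subtomograms(copies)
# The average is far closer to the signal than any single noisy copy,
# since independent noise shrinks as 1/sqrt(N).
```

This is the same reason STA partially compensates for anisotropic resolution: particles in different orientations fill in each other's missing information.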

Conventional TEM vs Cryo-EM

Conventional TEM                                   | Cryo-EM
---------------------------------------------------|--------------------------------------
High contrast                                      | Low contrast
Low resolution                                     | High resolution
Artifacts from dehydration and chemical fixation   | No such artifacts (near-native state)
Fast                                               | Time consuming
Long-term storage and repeated imaging possible    | Repeated imaging not possible

Correlative Light and Electron Microscopy (CLEM):


• Combines information from light microscopy and electron microscopy to study the same cellular structure or
process.

Future Perspectives:
• Advances in reconstruction algorithms.
• Multi-tilt tomography to overcome the missing wedge problem.
• Development of new labeling and staining techniques for better resolution and contrast.
