Digital Image Processing: Prof. B.L. Deekshatulu


DIGITAL IMAGE PROCESSING

Reference Book:
-----------------------
Digital Image Processing, 3rd Ed.
Gonzalez and Woods
2008
Prof. B.L. Deekshatulu
Imaging
The whole electromagnetic spectrum is used by “imagers”

[Figure: the electromagnetic spectrum used by imagers; cosmic rays, gamma rays, X-rays, UV, visible, IR, microwave (SAR), and radio frequency, with wavelength marked from about 10^-4 to 10^12 Angstroms; 1 Angstrom = 10^-10 m]

From Prof. Alan C. Bovik


Scales of Imaging

From the gigantic…

10^28 m: The Great Wall (of galaxies)

From Prof. Alan C. Bovik


Scales of Imaging

… to the everyday …

Videocamera: 1 m

From Prof. Alan C. Bovik


Scales of Imaging

… to the tiny.

Electron microscope: 10^-6 m

From Prof. Alan C. Bovik


Visible Light Imaging
Key components of image formation:
• Where will the image point appear?
• What is the intensity (value) of that point?

Image formation
• There are two parts to the image formation process:
  – The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.
  – The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.
Digital Image Representation
• An image is a function f(x,y) defined on 2-D coordinates (x,y).
• The value of f(x,y) is the intensity at that point.
• For a color image, three such functions can be defined, each representing one color component.
• A digital image can be represented as a matrix.

Sampling determines the number of sample points.
Quantization determines the number of intensity levels.
Matrix Representation
183 160 94 153 194 163 132 165
183 153 116 176 187 166 130 169

179 168 171 182 179 170 131 167 
177 177 179 177 179 165 131 167 
178 178 179 176 182 164 130 171
 
179 180 180 179 183 169 132 169
179 179 180 182 183 170 129 173
 
180 179 181 179 181 170 130 169

Divide into
8x8 blocks
H=256

W=256 32x32
From [Gonzalez & Woods]
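A minimal NumPy sketch (not from the slides) of treating a grayscale image as a matrix and dividing a 256 x 256 image into 8 x 8 blocks as illustrated above; the array `img` is a hypothetical placeholder.

```python
import numpy as np

# Hypothetical 256x256 grayscale image, values 0-255 (8 bits/pixel).
img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# Divide into non-overlapping 8x8 blocks -> a 32x32 grid of blocks.
H, W = img.shape
blocks = img.reshape(H // 8, 8, W // 8, 8).swapaxes(1, 2)

print(blocks.shape)   # (32, 32, 8, 8)
print(blocks[0, 0])   # the top-left 8x8 block, an 8x8 matrix of intensities
```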
Imaging Outside the Visible Range of
the EM Spectrum
• X-rays: used in medical diagnostics and computed tomography (CT)

Chest X-ray Dental X-ray


One slice of CT of patient’s abdomen

See next slide for explanation


Computed Tomography (CT), also called Computerized Axial Tomography (CAT):

A ring of detectors encircles the object (patient), and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. Tomography consists of algorithms that use these data to construct a SLICE image of the object.
Motion of the object perpendicular to the ring of detectors produces a set of such slices (a 3-D rendition of the object).
• Ultraviolet imaging (~300 nm): used in industrial applications, law enforcement, microscopy, and astronomy
• When UV light is directed onto a fluorescent material, it is converted into visible red light.

Fluorescence microscopy of cells Fluorescence microscopy of cells


• Infrared imaging (800 nm and above): used in satellite imaging, law enforcement, and fire detection

Satellite image showing water vapor | Infrared satellite imagery in the near-infrared band
• Multispectral imaging: used in weather analysis

GOES image of North America


• Microwave images: used in radar applications
• Radio waves: used in astronomy and medicine (MRI)

MRI of patient’s shoulder


SYNTHETIC APERTURE RADAR
(SAR) IMAGE

SAR aerial image SAR satellite image


Acoustic Imaging
• Works by sending out sound waves at various frequencies and then measuring the reflected waves
• Used in biological and man-made systems, geological systems, and medicine

Head and spine of baby Baby sucking its thumb Baby smiling
Electron Imaging
• Uses a focused beam of electrons (focused by magnetic lenses) to image a specimen; the interactions/effects of the electrons with the specimen are imaged.
• Provides high magnification (around 200,000x).
• Two types: SEM (Scanning Electron Microscopy) and TEM (Transmission Electron Microscopy).

SEM image of mosquito Logic gate in microchip Brittlestar


MICROSCOPIC IMAGE

Pollen image Myocardium image


Gamma Ray Imaging
• In nuclear medicine, a radioactive isotope that emits gamma rays as it decays is injected into the patient. These emissions are collected by gamma-ray detectors.
• Used for locating sites of bone infections or tumors.
• PET is similar to X-ray tomography: the patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off.
Positron Emission Tomography (PET) using
Radio Isotopes

BRAIN LUNG
IMAGE SEQUENCE (VIDEO)
Laser Imaging
• Produces a narrow beam of light in the visible, IR, or UV range of the EM spectrum
• Used to create range images (depth maps)

Computer Generated Images


• Computers are used to generate images
• Used in medicine, education, the arts, games, and military and commercial aviation
LASER RANGE FINDER
IMAGE

Perspective view of 3D laser image


Image Types
• Binary images: the simplest type of image, whose pixels can take only two values, typically black or white, or "0" or "1"
• Gray-scale images: one-color or monochrome images that contain only brightness information and no color information
• Color images: three-band monochrome images, where each band corresponds to a different color, typically red, green, and blue (RGB)
  – Color pixel vector: a single pixel's values for a color image, (R, G, B)
• Multispectral images: images of many bands, containing information outside the visible spectrum
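As a hedged illustration of the image types above, the sketch below builds a three-band RGB image and reduces it to a gray-scale and then a binary image with NumPy; the array shapes and the threshold of 128 are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Hypothetical RGB color image: 3 bands (R, G, B), 8 bits per band.
rgb = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

# Color pixel vector at (row, col): its (R, G, B) values.
r, g, b = rgb[10, 20]

# Gray-scale image: keep only brightness (here a simple average of the bands).
gray = rgb.mean(axis=2).astype(np.uint8)

# Binary image: two values, obtained by thresholding the gray-scale image.
binary = (gray > 128).astype(np.uint8)
```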
Multispectral Remote Sensing
False Color-IR Image
• Near-IR = Red Gun
• Red = Green Gun
• Green= Blue Gun

Input blue is filtered out.


Displaying a Remote Sensing Image
Image Acquisition
Sensors (Optical): Types
• Area array (non-scanning): frame camera, photographic camera, RBV (TV) camera
• Pixel scanner (scanning, optical-mechanical, across-track): whisk broom
• Line scanner (scanning, along-track): push broom
Image digitization

• Sampling means measuring the value of an image at a finite number of points.


• Quantization is the representation of the measured value at the sampled point
by an integer.
Image Sampling and Quantization
Image digitization (cont’d)
Image quantization example (from fine to coarse quantization):
• 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel)
• 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), 2 gray levels (1 bit/pixel)
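A small NumPy sketch of the quantization example above: re-quantizing a hypothetical 8-bit image to fewer gray levels (the image itself is a random placeholder, not the slide's data).

```python
import numpy as np

img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # 256 levels, 8 bits/pixel

def requantize(image, bits):
    """Reduce an 8-bit image to 2**bits gray levels."""
    levels = 2 ** bits
    step = 256 // levels
    return (image // step) * step        # e.g. bits=1 -> values {0, 128}

for bits in (5, 4, 3, 2, 1):             # 32, 16, 8, 4, 2 gray levels
    q = requantize(img, bits)
    print(bits, "bits/pixel ->", len(np.unique(q)), "gray levels")
```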
Spatial and Gray Level Resolution

• Spatial resolution:
  – number of samples per unit length or area
  – DPI (dots per inch) specifies the size of an individual pixel
  – if the image size is kept constant, the pixel size determines the spatial resolution
• Gray-level resolution:
  – number of bits per pixel, usually 8 bits
  – a color image has 3 planes, yielding 8 x 3 = 24 bits/pixel
  – too few levels may cause false contouring
Same Pixel Size, different Image Sizes
Same Image Size, Different Pixel Sizes
1 m Resolution Space Image
Size, Quantization Levels and Details

N = size of the image, k = bits per pixel.
For the crowd image, quality remains acceptable even if k is reduced.
For the face image, k cannot be reduced below a certain value.
(Isopreference curves)
Point Spread Function
• Consider a very small point
of light. If the visual system
had perfect optics the image
of this point on the retina
would be identical to the
original point of light.

• If the relative intensity of this point of light were plotted as a function of distance on the retina, such a plot should look like the dashed, vertical, green line.
• However, the eye's optics are not
perfect: so the relative intensity of the
point of light is distributed across the
retina as shown by the red curve. This
curve is called the "point spread
function" (PSF).
Confusing terminology: Image Processing, Image Understanding, Computer Graphics, Image Interpretation, Image Analysis, Computer Vision. How are they related?
From Images to Knowledge
Image → Image Processing → Image
Numbers → Computer Graphics → Image
Image → Image Analysis → Numbers
Image → Image Interpretation → Knowledge

• Hierarchical image pyramid: consists of levels for processing of images
What is digital image
processing?
• Image understanding, image analysis, and
computer vision aim to imitate the process
of human vision electronically
– Image acquisition
– Preprocessing
– Segmentation
– Representation and description
– Recognition and interpretation
General procedures
• Two-level approaches
– Low level image processing. Very little
knowledge about the content or semantics of
images
– High level image understanding. Imitating
human cognition and ability to infer
information contained in the image.
Low level image processing

[Figure: an image grid showing the origin (Ox, Oy), pixel spacing (Sx, Sy), a pixel value, a pixel region, and a 4x4 neighborhood]
Low level image processing
• Image compression
• Noise reduction
• Edge extraction
• Contrast enhancement
• Segmentation
• Thresholding
• Morphology
• Image restoration
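Thresholding is the most direct of the low-level operations listed above; a minimal NumPy sketch follows (the fixed threshold of 128 is an assumption for illustration, not a method from the slides).

```python
import numpy as np

gray = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # hypothetical gray image

# Global thresholding: pixels above the threshold become 1 (object), others 0 (background).
T = 128
binary = (gray > T).astype(np.uint8)
```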
High level image understanding

• To imitate human cognition according to


the information contained in the image.
• Data represent knowledge about the
image content, and are often in symbolic
(word description) form.
• Data representation is specific to the
high-level goal.
Intensity Transformations and Spatial
Filtering

2 domains – for DIP
• Spatial Domain : (image plane)
– Techniques are based on direct manipulation of pixels
in an image
• Frequency Domain :
– Techniques are based on modifying the Fourier
transform of an image
• There are some enhancement techniques based
on various combinations of methods from these
two categories.
Good images
• For human visual
– The visual evaluation of image quality is a highly
subjective process.
– It is hard to standardize the definition of a good image.
• For machine perception
– The evaluation task is easier.
– A good image is one which gives the best machine
recognition results.
• A certain amount of trial and error usually is
required before a particular image enhancement
approach is selected.
Pixel Intensity Transformations

• Log gray-level transform: s = T(r) = c log(1 + r)
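A minimal sketch of the log gray-level transform s = c log(1 + r). The choice of c to map the output back to [0, 255] is a common convention assumed here; the slide does not specify c.

```python
import numpy as np

def log_transform(img):
    """s = c * log(1 + r), scaled so the output again spans 0..255."""
    r = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)
```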
Example: Contrast Stretching
Histogram equalization: Example

Original Equalized
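A hedged NumPy sketch of histogram equalization for an 8-bit image; this is one standard formulation (mapping through the normalized cumulative histogram), since the slide shows only the before/after images.

```python
import numpy as np

def equalize_histogram(img):
    """Map gray levels through the normalized cumulative histogram (8-bit image)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size              # cumulative distribution in [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)  # new intensity for each old level
    return lut[img]
```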
Mask/Filter

• The neighborhood of a point (x,y) can be defined using a square/rectangular (most common) or circular subimage area centered at (x,y).
• The center of the subimage is moved from pixel to pixel, starting at the top left corner.
Spatial Filtering

• Use a filter (also called a mask, kernel, template, or window).
• The values in a filter subimage are referred to as coefficients rather than pixels.
• Our focus will be on masks of odd sizes, e.g. 3x3, 5x5, ...
• The center pixel of such odd-sized masks is well defined.
Spatial Filtering Process
• simply move the filter mask from point to
point in an image.
• at each point (x,y), the response of the
filter at that point is calculated using a
predefined relationship.
The response is

$$ R = w_1 z_1 + w_2 z_2 + \dots + w_{mn} z_{mn} = \sum_{i=1}^{mn} w_i z_i $$

where $w_1, w_2, \ldots, w_{mn}$ are the mask coefficients and $z_1, z_2, \ldots, z_{mn}$ are the corresponding image pixel values in the neighborhood.
3x3 Smoothing Linear Filters

• Box filter: all coefficients are equal (a simple average).
• Weighted average: the center is the most important and other pixels are weighted inversely as a function of their distance from the center of the mask.
Smoothing Example
• (a) Original image, 500 x 500 pixels.
• (b)-(f) Results of smoothing with square averaging filter masks of size n = 3, 5, 9, 15 and 35, respectively.
• Note:
  – a big mask is used to eliminate small objects from an image.
  – the size of the mask establishes the relative size of the objects that will be blended with the background.
Median Filters
• Replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
• Quite popular because, for certain types of random noise (impulse or salt-and-pepper noise), they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size.
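A minimal sketch of a 3x3 median filter with NumPy; leaving the border pixels unchanged is an assumption made here for brevity, not part of the slide.

```python
import numpy as np

def median_filter_3x3(img):
    """Replace each interior pixel by the median of its 3x3 neighborhood."""
    out = img.copy()
    for x in range(1, img.shape[0] - 1):
        for y in range(1, img.shape[1] - 1):
            out[x, y] = np.median(img[x - 1:x + 2, y - 1:y + 2])
    return out
```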
Example: Median Filters

Laplacian Mask
Laplacian mask implemented as an
extension to diagonal neighbors
Other implementations of Laplacian masks

These give the same result, but the sign convention must be kept in mind when combining (adding or subtracting) a Laplacian-filtered image with another image: these are subtraction masks.
Effect of Laplacian Operator
• as it is a derivative operator,
– it highlights gray-level discontinuities in an
image
– it deemphasizes regions with slowly varying
gray levels
Example

• a). image of the North pole of


the moon
• b). Laplacian-filtered image
with

1 1 1
1 -8 1
1 1 1

• c). Laplacian image enhanced


for display purposes
• d). image enhanced by
addition with original image
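A hedged sketch of Laplacian sharpening with the 8-neighbour mask shown above. Because this mask has a negative center coefficient, its output is subtracted from the original, which is equivalent to adding the output of the sign-reversed (positive-center) mask; the scipy convolution call and the clipping to 0..255 are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import convolve

laplacian = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]], dtype=np.float64)

def sharpen(img):
    f = img.astype(np.float64)
    lap = convolve(f, laplacian, mode="nearest")   # Laplacian-filtered image
    g = f - lap                                    # subtract: mask center is negative
    return np.clip(g, 0, 255).astype(np.uint8)
```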
Edge Detection – Ex: Sobel Operator

• Looks for edges in both the horizontal and vertical directions, then combines the information into a single metric.
• The masks are as follows:

$$ G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} $$

$$ \text{Edge magnitude} = \sqrt{x^2 + y^2}, \qquad \text{Edge direction} = \tan^{-1}\!\left(\frac{y}{x}\right) $$

where x and y are the responses of the $G_x$ and $G_y$ masks.
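A sketch of the Sobel operator using the two masks above: filter in x and y, then combine the responses into magnitude and direction. scipy.ndimage.convolve and the "nearest" border mode are implementation choices assumed here.

```python
import numpy as np
from scipy.ndimage import convolve

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
sobel_y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float64)

def sobel_edges(img):
    f = img.astype(np.float64)
    gx = convolve(f, sobel_x, mode="nearest")
    gy = convolve(f, sobel_y, mode="nearest")
    magnitude = np.sqrt(gx**2 + gy**2)       # edge strength
    direction = np.arctan2(gy, gx)           # edge direction (radians)
    return magnitude, direction
```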
Image Enhancement in the
Frequency Domain
Background
• Fourier Series
– Any function that periodically repeats itself can be
expressed as the sum of sines and / or cosines of
different frequencies, each multiplied by different
coefficients.

$$ f(x) = a_0 + \sum_{n=1}^{\infty}\left( a_n \cos\frac{2\pi n x}{2L} + b_n \sin\frac{2\pi n x}{2L} \right), \qquad (p = 2L) $$

$$ a_0 = \frac{1}{2L}\int_{-L}^{L} f(x)\,dx, \qquad
   a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{2\pi n x}{2L}\,dx, \qquad
   b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{2\pi n x}{2L}\,dx $$

Equivalently, in complex exponential form,

$$ f(x) = \sum_{n=-\infty}^{\infty} c_n\, e^{j 2\pi n x / 2L}, \qquad (p = 2L), \qquad
   c_n = \frac{1}{2L}\int_{-L}^{L} f(x)\, e^{-j 2\pi n x / 2L}\,dx $$
1-D DFT and its Inverse
• The 1-D FT and its Inverse
– FT / Inverse FT

$$ F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-j 2\pi u x}\,dx, \qquad
   f(x) = \int_{-\infty}^{\infty} F(u)\, e^{j 2\pi u x}\,du $$

In two dimensions,

$$ F(u,v) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} f(x,y)\, e^{-j 2\pi (ux + vy)}\,dx\,dy, \qquad
   f(x,y) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} F(u,v)\, e^{j 2\pi (ux + vy)}\,du\,dv $$
1-D DFT and its Inverse (cont’)

• Example (two cases) with M = 1024, A = 1, K = 8:
  (1) the height of the spectrum doubled;
  (2) the number of zeros doubled.
2-D DFT and its Inverse

For an $M \times N$ image $f(x,y)$:

$$ F(u,v) = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\, e^{-j 2\pi (ux/M + vy/N)}, \qquad u = 0, 1, \ldots, M-1, \quad v = 0, 1, \ldots, N-1 $$

$$ f(x,y) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\, e^{j 2\pi (ux/M + vy/N)}, \qquad x = 0, 1, \ldots, M-1, \quad y = 0, 1, \ldots, N-1 $$
2-D DFT and its Inverse (cont’)

• Magnitude (spectrum):
$$ |F(u,v)| = \left[ R^2(u,v) + I^2(u,v) \right]^{1/2} $$

• Phase angle (phase spectrum):
$$ \phi(u,v) = \tan^{-1}\!\left[ \frac{I(u,v)}{R(u,v)} \right] $$

• Power spectrum:
$$ P(u,v) = |F(u,v)|^2 = R^2(u,v) + I^2(u,v) $$
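A small NumPy sketch computing the centered 2-D DFT of an image and the spectrum, phase, and power spectrum defined above; np.fft is used here instead of a direct implementation of the sums, and the log transform is only for display.

```python
import numpy as np

def spectra(img):
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))  # centered 2-D DFT
    magnitude = np.abs(F)                    # |F(u,v)| = sqrt(R^2 + I^2)
    phase = np.angle(F)                      # tan^-1(I / R)
    power = magnitude ** 2                   # P(u,v) = |F(u,v)|^2
    log_spectrum = np.log1p(magnitude)       # log transform for display
    return magnitude, phase, power, log_spectrum
```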
2-D DFT and its Inverse (cont’)
• example
(a) Image of a 20 x 40 white rectangle on a black background of size
512 x 512 pixels.
(b) Centered Fourier spectrum shown after application of the log
transformation.
( 1) x y
f ( x, y )(1) x  y  F (u  M / 2, v  N / 2(1)
)

(a)
(b)
2k

k (2)
(1) center the spectrum
(2) separation of spectrum zeros
Filtering in the Frequency Domain
• some basic properties of the frequency
domain.

[Figure: the spectrum F(u,v); F(0,0) at the center represents the lowest frequency (the average intensity), and frequency increases with distance from the center]
Filtering in the Frequency Domain (cont’)

• example

(1) strong edges that run approximately at ±45˚


(2) the two white oxide protrusions
Filtering in the Frequency Domain (cont’)

• Some basic filters and their properties: lowpass (LPF) and highpass (HPF).
• A highpass filter can be obtained from a lowpass filter:
$$ H_{hp}(u,v) = k - H_{lp}(u,v), \qquad (k = \text{filter height}) $$
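A hedged sketch of lowpass/highpass filtering in the frequency domain with an ideal circular filter; the ideal filter shape and the cutoff radius are assumptions for illustration, since the slides do not specify a particular filter.

```python
import numpy as np

def ideal_filter(img, cutoff, highpass=False):
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)    # distance from the zero-frequency center
    H = (D <= cutoff).astype(np.float64)               # ideal lowpass: keep low frequencies
    if highpass:
        H = 1.0 - H                                     # highpass = 1 - lowpass
    g = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return g
```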
Applications
• Medicine
• Defense
• Meteorology
• Environmental science
• Manufacture
• Surveillance
• Crime investigation
Applications: Medicine

CT (Computed Tomography), PET (Positron Emission Tomography), PET/CT
Applications: Meteorology
Applications: Environmental
Science
Applications: Manufacture
Applications: Crime Investigation

Fingerprint enhancement
IMAGE MOSAICKING
• Combine several small images to generate a big
image
• Issue: seamless merging
• Features to consider
– Translation – obtain sliding vector and superimpose
– Rotation – find rotation angle, correct and then
superimpose
– Scaling – normalize and superimpose
– Different image parameters – matching problem
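For the translation case above (obtain a sliding vector, then superimpose), one common approach is phase correlation; the minimal NumPy sketch below is an illustrative method choice, not necessarily the technique used by the authors.

```python
import numpy as np

def estimate_translation(img1, img2):
    """Estimate the (row, col) shift between two overlapping images by phase correlation."""
    F1 = np.fft.fft2(img1.astype(np.float64))
    F2 = np.fft.fft2(img2.astype(np.float64))
    cross = F1 * np.conj(F2)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)   # peak location = sliding vector
    # Shifts larger than half the image size wrap around.
    if dy > img1.shape[0] // 2:
        dy -= img1.shape[0]
    if dx > img1.shape[1] // 2:
        dx -= img1.shape[1]
    return dy, dx
```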
MOSAIC EXAMPLES
HUSSAIN SAGAR IMAGE
Optical/microwave data fusion
IRS-1D: Merged Data | SAR + IRS-1D: Merged
Original Image
Contrast stretched Image
STATISTICAL METHOD -- DESTRIPED IMAGE (Top half)
Image To Edge Map
THANK YOU
