
ECT 385: Topics in Digital Image Processing
S5 Minor

Module 2: Image Enhancement

Reference Textbook: Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods
Image Enhancement
■ Process an image so that the result is more suitable than the original image for a specific application.
■ Suitability is defined by each application.
■ A method that is quite useful for enhancing one image may not be the best approach for enhancing another image.
■ Enhancement accentuates or sharpens image features such as edges, boundaries, or contrast to make the display more useful for viewing and analysis.
■ Enhancement does not increase the inherent information content of the data, but it increases the dynamic range of the chosen features so that they can be detected easily.
 Spatial domain enhancement methods:
 Spatial domain techniques operate on the image plane itself and are based on direct manipulation of the pixels in an image.
 Frequency domain enhancement methods:
 Techniques based on modifying the Fourier transform of an image.
 There are also enhancement techniques based on various combinations of methods from these two categories.
Spatial Domain Enhancement
• Spatial domain processing
• g(x,y) = T[f(x, y)]
• f(x, y): input image
• g(x, y): output (processed) image
• T: operator
• Operator T is defined over a neighborhood of point (x, y)
Spatial Domain Enhancement
– Single-pixel operation (Intensity Transformation)
• Negative Image, contrast stretching etc.
– Neighborhood operations
• Averaging filter, median filtering etc.
– Geometric spatial transformations
• Scaling, rotation, translation, etc.
Point Processing (Single-pixel processing)
■ Neighborhood = 1×1 pixel
■ g depends only on the value of f at (x, y)
■ T = gray-level (intensity) transformation function:
s = T(r)
■ where
■ r = gray level of f(x, y)
■ s = gray level of g(x, y)
Basic intensity transformation functions
 Linear transformation: image negatives
 Log transformation
 Power-law transformations
 Piecewise-linear transformation functions
1. Negative of an image
▪ For an image with gray levels in the range [0, L−1], where L = 2^n; n = 1, 2, …
▪ Negative transformation: s = L − 1 − r
▪ s is the output intensity value
▪ L − 1 is the highest intensity level
▪ r is the input intensity value
▪ Reverses the intensity levels of an image.
▪ Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
▪ Widely used in medical images.

Example: an input image F(x, y) and its negative G(x, y), obtained with s = L − 1 − r. For instance, a pixel with r = 50 maps to s = 255 − 50 = 205. (The sketch below reproduces this.)
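A minimal NumPy sketch of the negative transformation (not from the textbook; the function and variable names are ours), assuming an 8-bit grayscale image:

```python
import numpy as np

def negative(img: np.ndarray, L: int = 256) -> np.ndarray:
    """Apply s = (L - 1) - r to every pixel."""
    return (L - 1) - img

img = np.array([[0, 50, 255],
                [100, 75, 200]], dtype=np.uint8)
print(negative(img))  # 50 -> 205, matching s = 255 - 50
```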
Example of Negative Image
Original mammogram showing a small lesion of a breast; the negative image gives a better view for analyzing the image.
2. Log Transformation
• s = c log(1 + r)
• c is a constant
• It maps a narrow range of low intensity values in the input into a wide range of output levels.
• The opposite is true of higher values of input levels.
• It expands the values of dark pixels in an image while compressing the higher-level values.
• It compresses the dynamic range of images with large variations in pixel values.
• Inverse log transformation: does the opposite of the log transformation; used to expand the values of high pixels in an image while compressing the darker-level values.
• Example of an image with a large dynamic range: a Fourier spectrum image, which can have an intensity range from 0 to 10^6 or higher.
• Without compression we can't see a significant degree of detail, as it will be lost in the display. (A sketch follows.)
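A minimal sketch of the log transformation (names are ours; here c is chosen to normalize the output into [0, L−1], whereas the slide's example uses c = 1):

```python
import numpy as np

def log_transform(img: np.ndarray, L: int = 256) -> np.ndarray:
    """s = c * log(1 + r); c scales the result back into [0, L-1]."""
    r = img.astype(np.float64)
    c = (L - 1) / np.log(1 + r.max())
    return np.uint8(c * np.log(1 + r))

# A synthetic "spectrum-like" array with a huge dynamic range:
spectrum = np.array([[1.0, 10.0], [1e3, 1.5e6]])
print(log_transform(spectrum))  # small values spread out, large ones compressed
```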

Figure: Fourier spectrum with range 0 to 1.5 × 10^6, and the result after applying the log transformation with c = 1, giving range 0 to 6.2.
3. Power-Law (Gamma) Transformations
• s = c·r^γ
• c and γ are both positive constants.
• Fractional values of gamma (0 < γ < 1) map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values (γ > 1).
• c = γ = 1 gives the identity transformation.
Power-Law Transformations
 A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct this power-law response phenomenon is called gamma correction.
 Display systems tend to produce images that are darker than intended.
 We need to preprocess the input image before sending it to the monitor by performing the inverse transformation.
 Gamma correction is applied to CRTs (televisions, monitors), printers, scanners, etc.
 The gamma value depends on the device. (A sketch follows.)
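A minimal sketch of gamma correction (names are ours), normalizing intensities to [0, 1] so that c = 1:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float, L: int = 256) -> np.ndarray:
    """s = c * r**gamma on intensities normalized to [0, 1] (so c = 1)."""
    r = img.astype(np.float64) / (L - 1)
    s = r ** gamma
    return np.uint8(np.round(s * (L - 1)))

# Pre-correct an image for a CRT whose display gamma is 2.5:
ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
corrected = gamma_correct(ramp, 1 / 2.5)  # gamma = 0.4 brightens dark values
```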
Gamma correction
• Cathode ray tube (CRT) monitors have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5.
• With a monitor gamma of 2.5, the displayed picture becomes darker than intended.
• Gamma correction is done by preprocessing the image with γ = 1/2.5 = 0.4 before inputting it to the monitor.
Another example: MRI
(a) A magnetic resonance image of an upper thoracic human spine with a fracture dislocation and spinal cord impingement.
■ The picture is predominantly dark.
■ An expansion of gray levels is desirable: needs γ < 1.
(b) Result after power-law transformation with γ = 0.6, c = 1.
(c) Transformation with γ = 0.4 (best result).
(d) Transformation with γ = 0.3 (below acceptable level).
Effect of decreasing gamma
■ When γ is reduced too much, the image begins to lose contrast to the point where it starts to have a very slight "washed-out" look, especially in the background.
Another example
(a) The image has a washed-out appearance; it needs a compression of gray levels: needs γ > 1.
(b) Result after power-law transformation with γ = 3.0 (suitable).
(c) Transformation with γ = 4.0 (suitable).
(d) Transformation with γ = 5.0 (high contrast; the image has areas that are too dark, and some detail is lost).
Piecewise-Linear Transformation Functions
• Contrast Stretching
• Low-contrast images can result from:
• poor illumination
• lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition.
• It expands the range of intensity levels in an image so that it spans the full intensity range of the display device.
• Contrast stretching is obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L−1).
• Thresholding: a special case of stretching where (r1, s1) = (m, 0) and (r2, s2) = (m, L−1), where m is the mean intensity of the image. (A minimal sketch of both follows.)
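A minimal NumPy sketch of both transformations (names are ours), assuming an 8-bit grayscale image in array `img`:

```python
import numpy as np

def contrast_stretch(img: np.ndarray, L: int = 256) -> np.ndarray:
    """Map (r_min, 0) and (r_max, L-1) linearly, i.e. full-range stretching."""
    r = img.astype(np.float64)
    rmin, rmax = r.min(), r.max()
    s = (r - rmin) / (rmax - rmin) * (L - 1)
    return np.uint8(np.round(s))

def threshold(img: np.ndarray, L: int = 256) -> np.ndarray:
    """Special case: split at the mean intensity m, producing a binary image."""
    m = img.mean()
    return np.where(img > m, L - 1, 0).astype(np.uint8)
```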
Piecewise-Linear Transformation Functions
• Intensity Level Slicing
• Highlights a specific range of intensities in an image.
• Enhances features such as masses of water in satellite imagery and flaws in X-ray images.
• It can be implemented in two ways (a sketch of both follows):
• 1) Display only one value (say, white) in the range of interest and render the rest black, which produces a binary image.
• 2) Brighten (or darken) the desired range of intensities but leave all other intensity levels in the image unchanged.
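A minimal sketch of both implementations (names and the range [lo, hi] are ours):

```python
import numpy as np

def slice_binary(img, lo, hi, L=256):
    """Approach 1: white inside [lo, hi], black elsewhere (binary output)."""
    return np.where((img >= lo) & (img <= hi), L - 1, 0).astype(np.uint8)

def slice_preserve(img, lo, hi, value=255):
    """Approach 2: brighten the range of interest, leave other levels unchanged."""
    out = img.copy()
    out[(img >= lo) & (img <= hi)] = value
    return out
```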
Piecewise-Linear Transformation Functions
• Intensity Level Slicing (examples)
Piecewise-Linear Transformation Functions
• Bit-Plane Slicing
• Pixels are digital numbers composed of bits.
• A 256-level grayscale image is composed of 8 bits per pixel.
• Instead of highlighting intensity-level ranges, we can highlight the contribution made to total image appearance by specific bits.
• An 8-bit image may be considered as being composed of eight 1-bit planes, with plane 1 containing the lowest-order bit of all pixels in the image and plane 8 all the highest-order bits.
Piecewise-Linear Transformation Functions
• Bit-Plane Slicing
• This transformation is useful in determining the number of visually significant bits in an image.
• Higher-order bits contain the majority of the visually significant data.
• The binary image for the 8th bit plane of an 8-bit image can be obtained by thresholding the input image with a transformation function that maps to 0 the intensity values between 0 and 127, and maps to 1 the values between 128 and 255.
Piecewise-Linear Transformation Functions
• Bit Plane Slicing
Another example
Piecewise-Linear Transformation Functions
• Bit-Plane Slicing
• Reconstruction is done by multiplying the pixels of the nth plane by the constant 2^(n−1).
• This converts the nth significant binary bit to decimal.
• Each bit plane is multiplied by the corresponding constant, and all resulting planes are added to obtain the grayscale image.
• To obtain Fig. 3.15(a), we multiplied bit plane 8 by 128 and bit plane 7 by 64, and added the two planes. (A sketch follows.)
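A minimal sketch of slicing and reconstruction (names are ours), assuming an 8-bit image:

```python
import numpy as np

def bit_plane(img: np.ndarray, n: int) -> np.ndarray:
    """Extract plane n (1 = lowest-order bit, 8 = highest) as a 0/1 image."""
    return (img >> (n - 1)) & 1

def reconstruct(img: np.ndarray, planes) -> np.ndarray:
    """Sum the selected planes, each weighted by 2**(n - 1)."""
    out = np.zeros(img.shape, dtype=np.uint16)
    for n in planes:
        out += bit_plane(img, n).astype(np.uint16) << (n - 1)
    return out.astype(np.uint8)

# Planes 8 and 7 only, i.e. bit plane 8 * 128 + bit plane 7 * 64:
# approx = reconstruct(img, [8, 7])
```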
Histogram
 The histogram of a digital image with gray levels in the range [0, L−1] is the discrete function
h(r_k) = n_k
where
 r_k : the kth gray level
 n_k : the number of pixels in the image having gray level r_k
 h(r_k) : histogram of a digital image with gray levels r_k
Normalized Histogram
 Divide each histogram value at gray level r_k by the total number of pixels in the image, n:
p(r_k) = n_k / n, for k = 0, 1, …, L−1
 p(r_k) gives an estimate of the probability of occurrence of gray level r_k.
 The sum of all components of a normalized histogram is equal to 1. (A sketch follows.)
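A minimal sketch (names are ours), assuming integer gray levels in [0, L−1]:

```python
import numpy as np

def normalized_histogram(img: np.ndarray, L: int = 256) -> np.ndarray:
    """p(r_k) = n_k / n for k = 0..L-1; the entries sum to 1."""
    counts = np.bincount(img.ravel(), minlength=L)  # n_k
    return counts / img.size                        # divide by n pixels
```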
Example: histogram h(r_k) or p(r_k) plotted against r_k.

Dark image: components of the histogram are concentrated on the low side of the gray scale.
Bright image: components of the histogram are concentrated on the high side of the gray scale.
Example
Low-contrast image: histogram is narrow and centered toward the middle of the gray scale.
High-contrast image: histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.
Histogram Equalization
 As a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image.
 We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.
Histogram Equalization: Transformation
s = T(r), 0 ≤ r ≤ 1
T(r) satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
The inverse transformation is r = T⁻¹(s), 0 ≤ s ≤ 1.
Transformation Function
 The transformation function is the cumulative distribution function (CDF) of the random variable r:
s = T(r) = ∫₀ʳ p_r(w) dw
where w is a dummy variable of integration.
Note: T(r) depends on p_r(r).

Proof that the CDF is the required transformation function:
For an image, let
 p_r(r) denote the PDF of random variable r
 p_s(s) denote the PDF of random variable s
 If p_r(r) and T(r) are known and T⁻¹(s) satisfies the single-valued, monotonic condition, then p_s(s) can be obtained using the formula:
p_s(s) = p_r(r) |dr/ds|
 The PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.
Discrete Transformation Function
 The probability of occurrence of gray level r_k in an image is approximated by
p_r(r_k) = n_k / n, where k = 0, 1, …, L−1
 The discrete version of the transformation is
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n, where k = 0, 1, …, L−1
Example
4×4 image, gray scale = [0, 9]:
2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4
(Figure: histogram of the image, number of pixels versus gray level 0-9.)
Example
Gray level (j):        0  1  2      3      4      5      6  7  8  9
No. of pixels (n_j):   0  0  6      5      4      1      0  0  0  0
Σ n_j (cumulative):    0  0  6      11     15     16     16 16 16 16
s = Σ n_j / n:         0  0  6/16   11/16  15/16  16/16  1  1  1  1
s × 9 (rounded):       0  0  3.3→3  6.1→6  8.4→8  9      9  9  9  9
Example
Output image (gray scale = [0, 9]):
3 6 6 3
8 3 8 6
6 3 6 9
3 8 3 8
(Figure: histogram of the equalized output image. The sketch below reproduces this example.)
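A minimal sketch that reproduces this worked example (names are ours):

```python
import numpy as np

def histogram_equalize(img: np.ndarray, L: int) -> np.ndarray:
    """s_k = (L - 1) * sum_{j<=k} n_j / n, rounded to the nearest level."""
    counts = np.bincount(img.ravel(), minlength=L)
    cdf = np.cumsum(counts) / img.size           # running sum of p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(img.dtype)
    return lut[img]                              # apply the mapping per pixel

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]], dtype=np.uint8)
print(histogram_equalize(img, L=10))  # maps 2->3, 3->6, 4->8, 5->9 as in the table
```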
Histogram equalization
Histogram Matching (Specification)
 Histogram equalization has the disadvantage that it can generate only one type of output image.
 With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
 It doesn't have to be a uniform histogram.
Procedure
 Obtain the transformation function T(r) by calculating the histogram equalization of the input image:
s = T(r) = ∫₀ʳ p_r(w) dw
 Obtain the transformation function G(z) by calculating the histogram equalization of the desired density function:
G(z) = ∫₀ᶻ p_z(t) dt = s
Procedure
 Obtain the inverse transformation function G⁻¹:
z = G⁻¹(s) = G⁻¹[T(r)]
 Obtain the output image by applying the processed gray levels from the inverse transformation function to all the pixels in the input image. (A sketch of the whole procedure follows.)
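A minimal sketch of the procedure (names are ours; `p_z` is the desired histogram given as a length-L array of probabilities):

```python
import numpy as np

def match_histogram(img: np.ndarray, p_z: np.ndarray, L: int = 256) -> np.ndarray:
    """Map gray levels so the output histogram approximates p_z."""
    # s = T(r): equalization transform (CDF) of the input image
    s = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size
    # G(z): equalization transform (CDF) of the desired density p_z
    G = np.cumsum(p_z) / np.sum(p_z)
    # z = G^{-1}(s): smallest z with G(z) >= s, found by binary search
    lut = np.searchsorted(G, s).clip(0, L - 1).astype(img.dtype)
    return lut[img]
```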
Example
The image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale.
Image of Mars moon
Histogram specification
Figure: transformation function for histogram equalization, the result image after histogram equalization, and the histogram of the result image.
 Histogram equalization doesn't make the result image look better than the original image.
 Considering the histogram of the result image, the net effect of this method is to map a very narrow interval of dark pixels into the upper end of the gray scale of the output image.
 As a consequence, the output image is light and has a washed-out appearance.
Solve the Problem
 Since the problem with the transformation function of histogram equalization was caused by a large concentration of pixels in the original image with levels near 0,
 a reasonable approach is to modify the histogram of that image so that it does not have this property.
Figure: histogram equalization versus histogram specification.
Histogram Specification
(1) The transformation function G(z) is obtained from
G(z_k) = Σ_{i=0}^{k} p_z(z_i) = s_k,  k = 0, 1, 2, …, L−1
(2) Then the inverse transformation z = G⁻¹(s) is applied.
Result Image and its Histogram
Figure: original image, the result after applying histogram matching, and the output image's histogram. Notice that the output histogram's low end has shifted right toward the lighter region of the gray scale, as desired.
Local Histogram Processing
• The histogram processing methods discussed in the previous two sections are global, in the sense that pixels are modified by a transformation function based on the intensity distribution of an entire image.
• There are cases in which it is necessary to enhance detail over small areas in an image.
• The procedure is to define a neighborhood and move its center from pixel to pixel.
• At each location, the histogram of the points in the neighborhood is computed, and either a histogram equalization or histogram specification transformation function is obtained.
• Map the intensity of the pixel centered in the neighborhood.
• The center of the neighborhood is then moved to an adjacent pixel location and the procedure is repeated.
Local Histogram Processing
• Example
• Global histogram equalization: considerable enhancement of noise.
• Local histogram equalization using a 7×7 neighborhood:
• reveals (enhances) the small squares inside the dark squares
• contains a finer noise texture
Image Subtraction
g(x, y) = f(x, y) − h(x, y)
■ Enhancement of the differences between images.
Image Subtraction
■ (a) Original fractal image.
■ (b) Result of setting the four lower-order bit planes to zero (refer to bit-plane slicing):
■ the higher planes contribute significant detail
■ the lower planes contribute more to fine detail
■ Image (b) is nearly identical visually to image (a), with a very slight drop in overall contrast due to less variability of the gray-level values in the image.
■ (c) Difference between (a) and (b) (nearly black).
■ (d) Histogram equalization of (c) (performs a contrast stretching transformation).
Image Averaging
■ Consider a noisy image g(x, y) formed by the addition of noise η(x, y) to an original image f(x, y):
g(x, y) = f(x, y) + η(x, y)
Image Averaging
■ If the noise has zero mean and is uncorrelated, consider the image ḡ(x, y) formed by averaging K different noisy images:
ḡ(x, y) = (1/K) Σ_{i=1}^{K} g_i(x, y)
Image Averaging
■ Then
σ²_ḡ(x, y) = (1/K) σ²_η(x, y)
where σ²_ḡ(x, y) and σ²_η(x, y) are the variances of ḡ and η.
■ As K increases, the variability (noise) of the pixel at each location (x, y) decreases.
Image Averaging
■ Thus
E{ḡ(x, y)} = f(x, y)
■ The expected value of ḡ (the output after averaging) is the original image f(x, y).
Image Averaging
■ Note: the noisy images g_i(x, y) must be registered (aligned) in order to avoid the introduction of blurring and other artifacts in the output image. (A small simulation follows.)
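A small simulation of the averaging result (the array size and names are ours; the 64-gray-level noise matches the example on the next slide):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 128.0)               # constant "original image" f(x,y)

def average_noisy(f, K, sigma=64.0):
    """Mean of K registered noisy copies g_i = f + eta_i."""
    noisy = f + rng.normal(0.0, sigma, size=(K, *f.shape))
    return noisy.mean(axis=0)

for K in (1, 8, 64, 128):
    print(K, average_noisy(f, K).std())    # noise std falls roughly as sigma/sqrt(K)
```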
Example
■ (a) Original image.
■ (b) Image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels.
■ (c)-(f) Results of averaging K = 8, 16, 64, and 128 noisy images.
Geometric Operations
⚫ Geometric transformations modify the spatial arrangement of pixels in an image.
⚫ Also called rubber-sheet transformations because they may be viewed as analogous to "printing" an image on a rubber sheet, then stretching or shrinking the sheet according to a predefined set of rules.
⚫ Examples: translating, rotating, and scaling an image.
Examples of geometric operations
Geometric Operations
❑ Geometric transformations of digital images consist of two basic operations:
1. Spatial transformation of coordinates.
2. Intensity interpolation that assigns intensity values to the spatially transformed pixels.
❑ The transformation of coordinates may be represented as (x', y') = T{(x, y)},
❑ where (x, y) are pixel coordinates in the original image and (x', y') are the corresponding pixel coordinates of the transformed image.
❑ For example, the transformation (x', y') = (x/2, y/2) shrinks the original image to half its size in both spatial directions.
Affine Transformation
 An affine transformation maps variables (e.g. pixel intensity values located at positions in an input image) into new variables (e.g. in an output image) by applying a linear combination of translation, rotation, and scaling operations.
 An affine transformation is equivalent to the composed effects of translation, rotation, scaling, and shearing.
 The key characteristic of an affine transformation in 2-D is that it preserves points, straight lines, and planes.
 The general affine transformation is commonly expressed in homogeneous coordinates, with the six parameters arranged in a 3×3 matrix whose last row is [0 0 1]. (A sketch follows.)
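A minimal sketch using SciPy (an illustration, not the textbook's notation; note that `scipy.ndimage.affine_transform` expects the inverse mapping, from output coordinates to input coordinates):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((100, 100))
img[40:60, 30:70] = 1.0                       # toy image: a bright rectangle

theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Rotation about the array origin; R.T is the inverse of the rotation R.
rotated = ndimage.affine_transform(img, R.T, order=1)        # bilinear interp.
# (x', y') = (x/2, y/2): output pixel (x, y) samples input (2x, 2y).
half = ndimage.affine_transform(img, np.diag([2.0, 2.0]), order=1)
```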
Image interpolation
 Interpolation is the process of using known data to estimate values at unknown locations.
 Interpolation is a basic tool used for image resizing (shrinking and zooming).
 Three methods of interpolation are:
 Nearest-neighbour interpolation
 Bilinear interpolation: uses the four nearest neighbours to assign the value
 Bicubic interpolation: involves the 16 nearest neighbours of a point
Nearest-neighbour interpolation
 It assigns to each new location the intensity of its nearest neighbour in the original image.
 Most basic method; simple.
 Requires the least processing time.
 Only considers one pixel: the closest one to the interpolated point.
 Has the effect of simply making each pixel bigger.
 It has the tendency to produce undesirable artifacts, such as severe distortion of straight edges.
Bilinear interpolation
➢ Considers the closest 2×2 neighborhood of known pixel values surrounding the unknown pixel.
➢ Takes a weighted average of these 4 pixels to arrive at the final interpolated value.
➢ Results in smoother-looking images than nearest-neighbour interpolation.
➢ Needs more processing time.
Figure: case when all known pixel distances are equal; the interpolated value is simply their sum divided by four.
Bilinear Interpolation
 How would a 4×4 image be interpolated to produce an 8×8 image?
Interpolating along the rows from the 4 original pixel values (α and β are the fractional distances; the Greek symbols were lost in extraction):
f(x, y') = β f(x, y+1) + (1 − β) f(x, y)
f(x+1, y') = β f(x+1, y+1) + (1 − β) f(x+1, y)
Then, along the y' column, one interpolated pixel value:
f(x', y') = α f(x+1, y') + (1 − α) f(x, y')
Combining gives the bilinear interpolation formula (a sketch follows):
f(x', y') = α[β f(x+1, y+1) + (1 − β) f(x+1, y)] + (1 − α)[β f(x, y+1) + (1 − β) f(x, y)]
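A minimal sketch of the formula above for a single point (names are ours):

```python
import numpy as np

def bilinear(img: np.ndarray, xp: float, yp: float) -> float:
    """Interpolate img at the non-integer point (xp, yp) from its 2x2 neighbors."""
    x, y = int(np.floor(xp)), int(np.floor(yp))
    a, b = xp - x, yp - y                     # alpha and beta in the formulas above
    top    = b * img[x, y + 1]     + (1 - b) * img[x, y]      # f(x, y')
    bottom = b * img[x + 1, y + 1] + (1 - b) * img[x + 1, y]  # f(x+1, y')
    return a * bottom + (1 - a) * top                         # f(x', y')

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))  # 25.0: equidistant case, the average of all four
```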


Bicubic interpolation
➢ Goes one step beyond bilinear by considering the closest 4×4 neighborhood of known pixels, for a total of 16 pixels.
➢ Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation.
➢ Produces sharper images than the previous two methods.
➢ A good compromise between processing time and output quality; the standard in many image editing programs, printer drivers, and in-camera interpolation.
Bicubic interpolation

Interpolation method — Principle
NN interpolation — output pixel is assigned the value of the closest pixel
Bilinear interpolation — output pixel value is a weighted average of pixels in the nearest 2-by-2 neighborhood
Bicubic interpolation — output pixel value is a weighted average of pixels in the nearest 4-by-4 neighborhood

 The bilinear method takes longer than nearest-neighbour interpolation, and the bicubic method takes longer than bilinear.
 The greater the number of pixels considered, the more accurate the computation, so there is a trade-off between processing time and quality.
Spatial Filtering
• A predefined operation performed between an image f and a filter kernel w.
• The kernel is an array whose
• size defines the neighborhood of operation, and whose
• coefficients determine the nature of the filter.
• Other terms used to refer to a spatial filter kernel are mask, template, and window.
• If the operation is linear, the filter is called a linear spatial filter; otherwise it is nonlinear.
Mechanics of Spatial Filtering
Spatial Correlation & Convolution
➢ Correlation is the process of moving a filter mask over the image and computing the sum of products at each location.
➢ The convolution process is the same, except that the filter kernel is first rotated by 180 degrees.
➢ When the values of a kernel are symmetric about its center, correlation and convolution yield the same result.
➢ Correlation is a function of the displacement of the filter kernel relative to the image. (The sketch below contrasts the two.)
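A minimal sketch contrasting the two operations on a unit impulse (names are ours):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((5, 5))
img[2, 2] = 1.0                                   # unit impulse
w = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])                      # non-symmetric kernel

corr = ndimage.correlate(img, w)                  # slides the kernel as-is
conv = ndimage.convolve(img, w)                   # kernel rotated by 180 degrees
print(np.allclose(conv[1:4, 1:4], w))             # True: convolution "stamps" w
print(np.allclose(corr[1:4, 1:4], w[::-1, ::-1])) # True: correlation stamps flipped w
```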
Smoothing Spatial Linear Filters
• Also called averaging filters or lowpass filters.
• Replace the value of every pixel in an image by the average of the intensity levels in the neighborhood defined by the filter mask.
• Reduce "sharp" transitions in intensities.
• Random noise typically consists of sharp transitions.
• Edges are also characterized by sharp intensity transitions, so averaging filters have the undesirable side effect that they blur edges.
• Common filter kernels:
• Box filter
• Weighted average filter
• If all coefficients in the filter are equal, it is also called a box filter.
Smoothing Spatial Linear Filters
• Weighted average mask
• Pixels are multiplied by different coefficients.
• The center point is weighted more heavily.
• The strategy behind weighting the center point the highest and then reducing the value of the coefficients as a function of increasing distance from the origin is simply an attempt to reduce blurring in the smoothing process.
• The intensity of smaller objects blends with the background.
Figure: smoothing linear filter kernels — box filter, weighted average, and Gaussian.
Results of box filter
Results of Gaussian filter
Applications of low pass filter
▪ Using lowpass filtering and thresholding for region extraction
▪ Shading correction using lowpass filtering
• Consider a 2048 × 2048 checkerboard image whose inner squares are of size 128 × 128 pixels.
• Figure (b) is the result of lowpass filtering the image with a 512 × 512 Gaussian kernel (four times the size of the squares), K = 1, and σ = 128 (equal to the size of the squares).
• This kernel is just large enough to blur out the squares (a kernel three times the size of the squares is too small to blur them out sufficiently).
Order-Statistic (Nonlinear) Filters
• The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
• The best-known filter is the median filter.
• It replaces the value of the center pixel by the median of the intensity values in the neighborhood of that pixel.
• Used to remove impulse or salt-and-pepper noise:
• white and black dots superimposed on an image.
• The median, ξ, of a set of values is such that half the values in the set are less than or equal to ξ and half are greater than or equal to ξ.
Order-Statistic (Nonlinear) Filters
• To perform median filtering at a point in an image:
• first sort the values of the pixels in the neighborhood,
• determine their median,
• assign that value to the pixel in the filtered image corresponding to the center of the neighborhood.
• For example, in a 3×3 neighborhood the median is the 5th largest value, in a 5×5 neighborhood it is the 13th largest value, and so on.
• The principal function of median filters is to force points to be more like their neighbors.
• Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than one-half the filter area, are forced by an m×m median filter to take the median intensity value of the pixels in the neighborhood. (See the sketch below.)
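A minimal sketch of salt-and-pepper removal (noise levels and names are ours):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = np.full((32, 32), 100, dtype=np.uint8)
img[rng.random(img.shape) < 0.05] = 255       # salt
img[rng.random(img.shape) < 0.05] = 0         # pepper

clean = ndimage.median_filter(img, size=3)    # 3x3 window: median = 5th of 9 values
```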
Median Filter (Nonlinear)
Sharpening Spatial Filters
• The objective of sharpening is to highlight transitions in intensity.
• Uses range from printing and medical imaging to industrial inspection and autonomous guidance in military systems.
• Averaging is analogous to integration, so sharpening is analogous to spatial differentiation.
• Image differentiation enhances edges and other discontinuities (such as noise) and deemphasizes areas with slowly varying intensities.
Foundation
➢ Image sharpening uses first- and second-order derivatives.
➢ Derivatives are defined in terms of differences.
➢ Requirements for a first derivative:
➢ must be zero in flat areas
➢ must be nonzero at the onset (start) of a step or ramp
➢ must be nonzero along ramps
➢ Requirements for a second derivative:
➢ must be zero in flat areas
➢ must be nonzero at the onset (start) of a step or ramp
➢ must be zero along ramps of constant slope
Foundation
First-order derivative:
∂f/∂x = f(x+1) − f(x)
∂f/∂x evaluated at x−1: f(x) − f(x−1)
Second-order derivative:
∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
Foundation
• At the ramp:
• the first-order derivative is nonzero along the ramp
• the second-order derivative is zero along the ramp
• the second-order derivative is nonzero only at the onset and end of the ramp
• At the step:
• both the first- and second-order derivatives are nonzero
• the second-order derivative has a transition from positive to negative (a zero crossing)
Foundation
• Edges in digital images often are ramp-like transitions in intensity:
• the first derivative of the image results in thick edges, because the derivative is nonzero along a ramp;
• on the other hand, the second derivative produces a double edge one pixel thick, separated by zeros.
• From this, we conclude that the second derivative enhances fine detail much better than the first derivative, a property ideally suited for sharpening images.
• Also, second derivatives require fewer operations to implement than first derivatives.
• First-order derivatives generally have a stronger response to a gray-level step (better for edge detection).
• Second-order derivatives produce a double response at step changes in gray level (better for image enhancement).
Use of Second Derivatives for Enhancement
• Isotropic filters: rotation invariant.
• The simplest isotropic second-order derivative operator is the Laplacian:
∇²f = ∂²f/∂x² + ∂²f/∂y²   (2-D Laplacian operation)
∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2f(x, y)   (x-direction)
∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2f(x, y)   (y-direction)
∇²f(x, y) = [f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1)] − 4f(x, y)
Use of Second Derivatives for Enhancement
Figure: Laplacian kernels — 4-neighbor and 8-neighbor versions, each with a negative or positive center coefficient.
 Because the Laplacian is a derivative operator, it highlights sharp intensity transitions in an image and de-emphasizes regions of slowly varying intensities.
 This tends to produce images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.
 Background features can be "recovered" while still preserving the sharpening effect of the Laplacian by adding the Laplacian image to the original.
 It is important to keep in mind which definition of the Laplacian is used. If the definition used has a negative center coefficient, then we subtract the Laplacian image from the original to obtain a sharpened result.
Use of Second Derivatives for Enhancement
• The basic way in which we use the Laplacian for image sharpening is
g(x, y) = f(x, y) + c∇²f(x, y)
• where f(x, y) and g(x, y) are the input and sharpened images, respectively. We let c = −1 if the Laplacian kernel has a negative center coefficient, and c = 1 if it is positive.
• Image enhancement (sharpening) by the Laplacian operation (a sketch follows):
g(x, y) = f(x, y) − ∇²f(x, y)   if the center coefficient of the Laplacian mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)   if the center coefficient of the Laplacian mask is positive
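A minimal sketch using the 4-neighbor kernel with a negative center coefficient, so c = −1 (names are ours):

```python
import numpy as np
from scipy import ndimage

lap4 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)   # 4-neighbor Laplacian, negative center

def laplacian_sharpen(img: np.ndarray) -> np.ndarray:
    f = img.astype(float)
    lap = ndimage.convolve(f, lap4)
    g = f - lap                              # c = -1 for a negative center coefficient
    return np.clip(g, 0, 255).astype(np.uint8)
```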
Use of Second Derivatives for Enhancement
Figure: original image, Laplacian image, and enhanced image (original − Laplacian).
Unsharp Masking and Highboost Filtering
Subtracting an unsharp (smoothed) version of an image from the original image is a process that has been used since the 1930s by the printing and publishing industry to sharpen images. This process, called unsharp masking, consists of the following steps:
1. Blur the original image.
2. Subtract the blurred image from the original (the resulting difference is called the mask).
3. Add the mask to the original.
g_mask(x, y) = f(x, y) − f̄(x, y)   (original image − blurred image)
g(x, y) = f(x, y) + k·g_mask(x, y)
• When k = 1: unsharp masking.
• When k > 1: highboost filtering. (A sketch follows.)
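A minimal sketch of the three steps (a Gaussian blur stands in for "blur the original"; the sigma value and names are ours):

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img: np.ndarray, k: float = 1.0, sigma: float = 3.0) -> np.ndarray:
    f = img.astype(float)
    blurred = ndimage.gaussian_filter(f, sigma)  # step 1: blur
    mask = f - blurred                           # step 2: the mask
    g = f + k * mask                             # step 3: k = 1 unsharp, k > 1 highboost
    return np.clip(g, 0, 255).astype(np.uint8)
```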
Unsharp Masking and Highboost Filtering (examples)
Using First-Order Derivatives for Image Sharpening
• First derivatives in image processing are implemented using the magnitude of the gradient.
• The gradient of f at (x, y):
∇f ≡ grad(f) ≡ [g_x, g_y]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
• The magnitude of the gradient:
M(x, y) = mag(∇f) = [g_x² + g_y²]^(1/2) = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)
• Approximation of the magnitude of the gradient by absolute values:
M(x, y) ≈ |g_x| + |g_y|
Using First-Order Derivatives for Image Sharpening
Figure: a 3×3 region of an image, the Roberts cross-gradient operators, and the Sobel operators.
Example
Applying the Sobel operators to a 3×3 neighborhood of the image
1 2 5 2
2 1 6 9
3 3 4 9
1 0 1 8
Gx kernel:        Gy kernel:
-1  0  1          -1 -2 -1
-2  0  2           0  0  0
-1  0  1           1  2  1
Gx response: 1(−1) + 2(0) + 5(1) + 2(−2) + … + 3(0) + 4(1) = 24
Gy response: −1(1) + 2(−2) + 5(−1) + … + 3(2) + 4(1) = 3
Gradient magnitude: √(3² + 24²) ≈ 24.18
(A sketch follows.)
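A minimal sketch applying both Sobel kernels to the 4×4 image above (names are ours):

```python
import numpy as np
from scipy import ndimage

img = np.array([[1, 2, 5, 2],
                [2, 1, 6, 9],
                [3, 3, 4, 9],
                [1, 0, 1, 8]], dtype=float)

kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # Sobel Gx
ky = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)   # Sobel Gy

gx = ndimage.correlate(img, kx)
gy = ndimage.correlate(img, ky)
magnitude = np.hypot(gx, gy)        # sqrt(gx**2 + gy**2) at every pixel
approx = np.abs(gx) + np.abs(gy)    # cheaper absolute-value approximation
```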
Use of First Derivatives for Enhancement
Figure: optical image of a contact lens and its Sobel gradient.
Example of Combining Spatial Enhancement Methods
• We want to sharpen the original image and bring out more skeletal detail.
• Problems: the narrow dynamic range of gray levels and high noise content make the image difficult to enhance.
Example of Combining Spatial Enhancement Methods
Solution:
1. Laplacian to highlight fine detail.
2. Gradient to enhance prominent edges.
3. Gray-level transformation to increase the dynamic range of gray levels.
Frequency domain image enhancement
 Any spatial or temporal signal has an equivalent frequency representation.
 What do frequencies mean in an image?
 High frequencies correspond to pixel values that change rapidly across the image (e.g. text, texture, leaves, etc.).
 Strong low-frequency components correspond to large-scale features in the image (e.g. a single, homogeneous object that dominates the image).
 We will use Fourier transforms to obtain frequency representations of an image.
 The discrete Fourier transform is the most important transform employed in image processing applications.
 The discrete Fourier transform is generated by sampling the basis functions of the continuous transform, i.e., the sine and cosine functions.
Discrete Fourier transform
 The Fourier transform decomposes a complex signal into a weighted sum of sinusoids, starting from zero frequency up to a high value determined by the input function.
 The base frequency or fundamental frequency is the lowest dominant frequency. All multiples of the fundamental frequency are known as harmonics.
 A given signal can be reconstructed from its frequency decomposition by a weighted addition of the fundamental frequency and all the harmonic frequencies.
Frequency domain filtering fundamentals
To filter an image in the frequency domain (a sketch follows):
1. Compute F(u, v), the DFT of the image.
2. Multiply F(u, v) by a filter function H(u, v).
3. Compute the inverse DFT of the result.
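A minimal sketch of the three steps (names are ours; `H` must have the same shape as the image and be centered like the shifted spectrum):

```python
import numpy as np

def filter_frequency_domain(img: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Steps 1-3: DFT, multiply by H(u,v), inverse DFT."""
    F = np.fft.fftshift(np.fft.fft2(img))    # step 1, with the zero frequency centered
    G = H * F                                # step 2: apply the transfer function
    g = np.fft.ifft2(np.fft.ifftshift(G))    # step 3
    return np.real(g)
```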
Frequency domain smoothing filters
 Smoothing (blurring) is achieved in the frequency domain by high-frequency attenuation; that is, by lowpass filtering.
 Edges and sharp transitions in the gray levels of an image contribute significantly to the high-frequency content of its Fourier transform.
 Here, we consider 3 types of lowpass filters:
 Ideal lowpass filters
 Butterworth lowpass filters
 Gaussian lowpass filters
 These three categories cover the range from very sharp (ideal) to very smooth (Gaussian) filtering.
 The Butterworth filter has a parameter called the filter order.
 For high order values, the Butterworth filter approaches the ideal filter.
 For low order values, the Butterworth filter is more like a Gaussian filter.
 Thus, the Butterworth filter may be viewed as providing a transition between the two "extremes".
Frequency domain filtering
Ideal Low Pass Filter
• A severe ringing effect is seen in the blurred images, which is a characteristic of ideal filters.
• It is due to the discontinuity in the filter transfer function.
• As the cutoff frequency increases, blurring decreases.
Butterworth Low Pass Filter
• The Butterworth lowpass filter is a type of signal processing filter designed to have as flat a frequency response as possible in the passband.
• The transfer function of a Butterworth lowpass filter of order n with cutoff frequency at distance D0 from the origin is defined as:
H(u, v) = 1 / [1 + (D(u, v)/D0)^(2n)]
where D(u, v) is the distance from point (u, v) to the center of the frequency rectangle. (A sketch of this kernel follows.)
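A minimal sketch of this transfer function (names are ours; it plugs into the `filter_frequency_domain` helper sketched earlier):

```python
import numpy as np

def butterworth_lowpass(shape, d0: float, n: int) -> np.ndarray:
    """H(u,v) = 1 / (1 + (D(u,v)/D0)**(2n)), centered on the spectrum."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    D = np.sqrt(U**2 + V**2)               # distance from the center of the rectangle
    return 1.0 / (1.0 + (D / d0) ** (2 * n))

# H = butterworth_lowpass(img.shape, d0=30, n=2)
# smoothed = filter_frequency_domain(img, H)
```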
Smoothing Frequency Domain: Butterworth Lowpass Filters
Butterworth Low Pass Filter (example)
Gaussian filters
• The transfer function is H(u, v) = e^(−D²(u,v)/2D0²), where D0 is the cutoff frequency.
Gaussian Low Pass Filters (example)
Image sharpening using frequency domain filters (highpass filters)
Figures: sharpening Fourier domain filters, their spatial domain representations, and examples.