Chapter 4 - Digital Image Processing


Digital Image Processing

DR. MANSOOR AHMAD HASHMI, FAST-NUCES, LAHORE
E-MAIL: [email protected]
Pixel & Digital Number
• The Pixel or “Picture Element” is a data element having both spatial and spectral aspects.
• The spatial variable defines the apparent size of the resolution cell (i.e., the area on the ground
represented by the data values).
• The spectral variable defines the intensity of spectral response for that cell in a particular channel.
• The pixel is also referred to as the resolution cell of a scene: the smallest area in a scene that is treated as a unit of data.

• Spectral information: The information conveyed by the spectral response of individual resolution
cells in the scene.

• The position of any pixel is determined on an x-y coordinate system.


• Each pixel has a numerical value called a Digital Number (DN) that records the intensity of electromagnetic energy measured for the ground resolution cell represented by that pixel.
• Digital Numbers range from zero to some higher number on a gray scale depending upon the
quantization level of the sensor.
Pixel
The cells are sensed one after another along the line.
In the sensor, each cell is associated with a pixel that is
tied to a microelectronic detector

Pixel is an abbreviation of Picture Element: a single point in a graphic image.

Each pixel is characterized by a single value of radiation (e.g., reflectance) impinging on a detector, which is converted by the photoelectric effect into electrons.

The number of possible pixel values is 2^Q, where Q is the bit depth of each pixel.
Gray Scale
It is a sequence of gray tones ranging from black to white. Gray level value is the intensity value of
a pixel in a gray level image.
Histogram

It is the graphical display of a set of data showing the frequency of occurrence (along the vertical axis) of individual measurements or values (along the horizontal axis); a frequency distribution.
Scene
In a passive remote sensing system, it is everything that occurs spatially or
temporally before the sensor, including the earth’s surface, the energy source, and
the atmosphere which the energy passes through as it travels from the source to
the earth and from the earth to the sensor.
Image
It is the pictorial representation of a scene recorded by a remote sensing system irrespective of the
wavelength or imaging device used to produce it.

It is commonly restricted to representations acquired by non-photographic methods.

A photograph is also an image; it records wavelengths from 0.3 to 0.9 micrometers that have interacted with light-sensitive chemicals in a photographic film.

The image can be described in terms of certain fundamental properties regardless of the wavelength at
which the image is recorded.

These common fundamental properties are:
1. Scale
2. Brightness
3. Contrast
4. Resolution

Tone and texture of the image are functions of the fundamental properties.
Image (CONTD.)

The image may be described in strictly numerical terms on a three-coordinate system, with x and y locating each pixel and z giving the digital number (DN), which is displayed as a gray-scale intensity value.

In a remote sensing system, images are, in general, originally recorded in digital format (e.g., Landsat).

An image recorded initially on photographic film or prints may be converted into digital format by a process known as digitizing (scanning).
Digital Image
• It is a numerical representation of the sampled field. Typically, the field represented is the radiance of a scene viewed in some region of the electromagnetic spectrum.

• A digital image is an image f(x, y) that has been digitized both in spatial coordinates and in brightness.

• Consider a digital image as a matrix whose row and column indices identify a point in the image and whose corresponding matrix element identifies the gray level at that point.

• A digital image can be constructed that describes Gravity or Magnetic Field Strength, Topographic
Relief, or computed variables such as thermal inertia.

• The digital image is generated by sampling and measuring the local field strength at a number of points that are usually arranged in a rectilinear pattern. The field strength measured at each of these points is encoded as an integer.

• The digital image is actually an array of numbers which can be stored on magnetic tape or disk.
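As a minimal sketch of this matrix view (Python/NumPy, with a small synthetic array standing in for real sensor data), the row and column indices identify a point and the stored element is its DN:

```python
import numpy as np

# Synthetic 4 x 5 digital image: each matrix element is an integer DN
# (8-bit here, so values fall in the range 0-255).
image = np.array([[ 12,  40,  73,  90, 120],
                  [ 35,  60, 110, 140, 180],
                  [ 50,  95, 160, 200, 230],
                  [ 70, 130, 190, 220, 255]], dtype=np.uint8)

row, col = 2, 3          # y (row) and x (column) indices of a pixel
dn = image[row, col]     # the matrix element is the gray level at that point
print(f"DN at row {row}, column {col}: {dn}")   # -> 200
```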
Digital Image (CONTD.)
 Digital Images: remotely sensed images can also be represented in a
computer as arrays of pixels (picture elements), with each pixel
corresponding to a digital number, representing the brightness level
of that pixel in the image. In this case, the data are in a digital
format. These types of digital images are referred to as raster
images in which the pixels are arranged in rows and columns
Two prime approaches in the use of remote sensing

 Standard photo-interpretation of scene content

 Use of digital image processing and classification techniques that are generally the mainstay of practical applications of information extracted from sensor data sets
[Figure: Landsat Thematic Mapper (TM) image; the brighter portions correspond to higher energy levels.]
Image Processing
 Computer-Assisted Scene Interpretation (CASI); also called Image Processing
 The techniques fall into three broad categories:
 Image Restoration and Rectification
 Image Enhancement
 Image Classification
 There are a variety of CASI methods:
Contrast stretching, Band ratioing, Band transformation,
Principal Component Analysis, Edge Enhancement,
Pattern Recognition, and Unsupervised and Supervised
Classification
Digital Image Processing

It is the manipulation of a digital image by computer, performed either to prepare an image for display and interpretation, or to extract information from the image.
Data visualization
The images that we view are visual representations
of the digital output from the sensor
 8-bit gray shade image is the case when the
sensor output is converted to one of 256 gray
shades (0 to 255)
 24-bit color does the same except in shades of red, green, and blue
Image Processing: Pixel Values

 Pixel Values: The magnitude of the electromagnetic energy captured in a digital image is represented by positive digital numbers.
 The digital numbers are stored as binary digits (or 'bits'); with n bits, the values range from 0 to 2^n - 1
Image Type      Pixel Values            Color Levels
8-bit image     2^8 = 256               0-255
16-bit image    2^16 = 65,536           0-65,535
24-bit image    2^24 = 16,777,216       0-16,777,215
Image Processing: Image Resolution

 Image Resolution: the resolution of a digital image is dependent on the range in magnitude (i.e. range in brightness) of the pixel value. With a 2-bit image the maximum range in brightness is 2^2 = 4 values, ranging from 0 to 3, resulting in a low-resolution image. In an 8-bit image the maximum range in brightness is 2^8 = 256 values, ranging from 0 to 255, which gives a higher-resolution image.
[Figure: the same scene as a 2-bit image (4 grey levels) and as an 8-bit image (256 grey levels).]
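A quick way to see this effect is to requantize an 8-bit image down to 2 bits. A minimal NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
img8 = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # 8-bit: 256 levels

# Requantize to 2 bits: 256 input levels / 4 output levels = 64 DNs per level.
img2 = (img8 // 64).astype(np.uint8)                      # values 0-3: 4 levels

print("8-bit image:\n", img8)
print("2-bit version:\n", img2)
```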
Image Processing Procedures
 Image Restoration: most recorded images are
subject to distortion due to noise which degrades the
image. Two of the more common errors that occur in
multi-spectral imagery are striping (or banding) and
line dropouts
Image Processing Procedures

 Dropped Lines are errors that occur in the sensor response and/or the data recording and transmission, losing a row of pixels in the image.
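A common repair, assuming the rows above and below the dropout are valid, is to replace the lost row with the average of its neighbors. A minimal sketch:

```python
import numpy as np

def fix_dropped_line(image, bad_row):
    """Replace a dropped scan line with the mean of the two adjacent rows."""
    fixed = image.astype(np.float64)
    fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
    return fixed.astype(image.dtype)

img = np.array([[10, 20, 30],
                [ 0,  0,  0],   # dropped line: recorded as all zeros
                [14, 24, 34]], dtype=np.uint8)
print(fix_dropped_line(img, bad_row=1))   # middle row becomes [12, 22, 32]
```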
Procedure of Digital Image Processing
Image Interpretation
relies on one or both of these approaches:

1. Photointerpretation: the interpreter uses his/her knowledge and experience of the real world to recognize scene objects (features, classes, materials) in photo-like renditions of the images acquired by aerial or satellite surveys of the targets (land; sea; atmospheric; planetary) that depict the targets as visual scenes with variations of gray-scale tonal or color patterns (more generally, spatial or spectral variability that mirrors the differences from place to place on the ground)

2. Machine-processing manipulations (usually computer-based) that analyze and reprocess the raw data into new visual or numerical products, which are then interpreted either by approach 1 or are subjected to appropriate decision-making algorithms that identify and classify the scene objects into sets of information
Image Interpretation

Elements (Keys) of Image Interpretation

The following eight elements are the ones most commonly used in image interpretation:

1. Color
2. Tone
3. Texture
4. Pattern
5. Shape
6. Size
7. Shadow
8. Association
Color
Color (CONTD)
Color (CONTD)
Example of color composition
Sample of Color Composition from Digital Data
Tone

• The continuous gray scale varying from white to black is called tone.

• In panchromatic photographs, any object reflects its own characteristic tone according to its reflectance.

• For example, dry sand appears white, while wet sand appears black. In black-and-white near-infrared photographs, water is black and healthy vegetation white to light gray.

• Tone denotes the spectral reflectance of the features.


Texture

Texture is a group of repeated small patterns. For example, homogeneous grassland exhibits a smooth texture, while coniferous forest usually shows a coarse texture.

However, this depends upon the scale of the photograph or image.
Pattern

Pattern is a regular, usually repeated, spatial arrangement of objects.

For example, rows of houses or apartments, regularly spaced rice fields, highway interchanges, orchards, and so on can provide information through their unique patterns.
Shape

 
The specific shape of an object, as viewed from above, is what will be imaged on a vertical photograph.

Therefore, the shape of objects from a vertical viewpoint should be known.

For example, the crown of a conifer tree looks like a circle, while that of a deciduous tree has an irregular shape.

Airports, factories, and so on can also be identified by their shapes.
Size

A proper photo scale (image resolution) should be selected depending on the purpose of the interpretation.

The approximate size of an object can be measured by multiplying the length measured on the image by the inverse of the photo scale (i.e., by the scale denominator).
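For example (illustrative figures): at a photo scale of 1:10,000, an object measuring 2 mm on the image corresponds to 2 mm × 10,000 = 20 m on the ground.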
Shadow

Shadow is usually a visual obstacle for image interpretation.

However, shadow can also give height information about a tower, a tall building, mountain ranges and other features, as well as shape information from a non-vertical perspective, such as the shape of a bridge.
Association

 
A specific combination of elements, geographic characteristics, and the configuration of the surroundings or the context of an object can provide the user with specific information for image interpretation.
Interpretation key for Forestry
A Sample LANDSAT MSS Image Interpretation Key
Classification

 Classification of remotely sensed data is used to assign corresponding levels (classes) to groups with homogeneous characteristics, with the aim of discriminating multiple objects from each other within the image.

Classification is executed on the basis of spectral or spectrally defined features, such as density, texture, etc., in the feature space.

It can be said that classification divides the feature space into several classes based on a decision rule.
Classification

 Classification is probably the most informative means of interpreting remote sensing data
 The output from these methods can be combined with other computer-based programs.
 The output can itself become input for organizing and deriving information utilizing what is known as Geographic Information Systems (GIS)
Concept of Classification of Remote Sensing Data
Image Classification

 In classifying features in an image we use the elements of visual interpretation to identify homogeneous groups of pixels which represent various features or land cover classes of interest. In digital images it is possible to model this process, to some extent, by using two methods: Unsupervised Classifications and Supervised Classifications.
Unsupervised Classifications
This is a computerized method without direction from the analyst in which pixels
with similar digital numbers are grouped together into spectral classes using
statistical procedures such as nearest neighbor and cluster analysis. The resulting
image may then be interpreted by comparing the clusters produced with maps,
air photos, and other materials related to the image site.
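A minimal sketch of this idea, assuming scikit-learn is available, clusters the pixels of a synthetic two-band image into spectral classes with k-means (a real workflow would use all spectral bands and then label the clusters against maps or air photos):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic 2-band image, 50 x 50 pixels of 8-bit DNs.
bands = rng.integers(0, 256, size=(50, 50, 2))

# Reshape to (n_pixels, n_bands): each pixel becomes a point in feature space.
pixels = bands.reshape(-1, 2).astype(np.float64)

# Group pixels with similar DNs into 5 spectral classes.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)

classified = labels.reshape(50, 50)   # per-pixel class labels, image-shaped
print(np.bincount(labels))            # pixel count per spectral class
```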
Supervised Classification:
In this approach the analyst selects training areas in the image whose cover type is known; the spectral characteristics of these training areas are then used to assign each remaining pixel to the class it most closely resembles.
Procedure of Classification
Procedure of Classification (CONTD.)
Limitations to Image Classification:

Image classification has to be approached with caution because it is a complex process with many assumptions.

In supervised classifications, training areas may not have unique spectral characteristics, resulting in incorrect classification.

Unsupervised classifications may require field checking in order to identify spectral classes if they cannot be verified by other means (i.e. maps and air photos).
Loss of Visual Information
Image Enhancement

 One of the strengths of image processing is that it gives us the ability to enhance the view of an area by manipulating the pixel values, thus making it easier for visual interpretation.
 There are several techniques which we can use to enhance an image, such as Contrast Stretching and Spatial Filtering.
Image Enhancement
 Image Histogram: For every digital image the pixel value represents the magnitude of an
observed characteristic such as brightness level. An image histogram is a graphical representation
of the brightness values that comprise an image. The brightness values (i.e. 0-255) are displayed
along the x-axis of the graph. The frequency of occurrence of each of these values in the image is
shown on the y-axis.

[Figure: an 8-bit image (0-255 brightness levels) and its histogram; x-axis = brightness value (0 to 255), y-axis = number of pixels.]
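A minimal NumPy sketch of computing such a histogram for a synthetic 8-bit image:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)   # 8-bit image

# One bin per brightness value: x-axis = 0-255, y-axis = frequency.
counts, bin_edges = np.histogram(img, bins=256, range=(0, 256))

print(counts.sum())   # 10000: total number of pixels in the image
print(counts[:5])     # how many pixels have DNs 0, 1, 2, 3, 4
```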
Effects of Image Enhancement
Image Enhancement
Contrast Stretching: Quite often the useful data in a digital image populates only a
small portion of the available range of digital values (commonly 8 bits or 256 levels).
Contrast enhancement involves changing the original values so that more of the
available range is used; this then increases the contrast between features and their
backgrounds. There are several types of contrast enhancements which can be
subdivided into Linear and Non-Linear procedures.
Image Enhancement
 Linear Contrast Stretch: This involves identifying lower and upper bounds from the histogram (usually the
minimum and maximum brightness values in the image) and applying a transformation to stretch this range to fill
the full range.

[Figure: linear stretch (left) vs. equalized stretch (right).]

 Equalized Contrast Stretch: This stretch assigns more display values (range) to the frequently
occurring portions of the histogram. In this way, the detail in these areas will be better enhanced
relative to those areas of the original histogram where values occur less frequently.
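A minimal sketch of the linear version (NumPy, synthetic data): the observed minimum and maximum DNs are mapped to 0 and 255 and everything in between is scaled proportionally. (An equalized stretch would instead allocate display values from the cumulative histogram.)

```python
import numpy as np

def linear_stretch(image, out_min=0, out_max=255):
    """Map the image's DN range linearly onto [out_min, out_max]."""
    lo, hi = image.min(), image.max()
    scaled = (image.astype(np.float64) - lo) / (hi - lo)   # now 0.0-1.0
    return np.round(scaled * (out_max - out_min) + out_min).astype(np.uint8)

img = np.array([[60, 70], [80, 90]], dtype=np.uint8)   # low-contrast image
print(linear_stretch(img))   # -> [[  0  85] [170 255]]
```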
Linear Stretch
Linear Stretch Example:

The linear contrast stretch enhances the contrast in the image, with light-toned areas appearing lighter and dark areas appearing darker, making visual interpretation much easier.
This example illustrates the increase in contrast in an image before (left) and after (right)
a linear contrast stretch.
Spatial Filtering
 Spatial filters are designed to highlight or suppress features in an image based on
their spatial frequency. The spatial frequency is related to the textural
characteristics of an image. Rapid variations in brightness levels ('roughness')
reflect a high spatial frequency; 'smooth' areas with little variation in brightness
level or tone are characterized by a low spatial frequency. Spatial filters are used
to suppress 'noise' in an image, or to highlight specific image characteristics.

 Low-pass Filters
 High-pass Filters
 Directional Filters
 etc
A filter is something that blocks unwanted components and passes wanted ones.

Likewise, in electronic circuits there are filters such as band-pass, band-stop, low-pass and high-pass filters.

A signal contains two kinds of frequency content, called high and low frequency.

Low-pass filters pass low frequencies and block high frequencies.

In a high-pass filter, high frequencies are passed and low frequencies are blocked.

Filters are used to isolate particular frequencies and to minimize noise.
Filtering an Image
 Image filtering is useful for many applications, including smoothing, sharpening, removing noise, and edge detection. A filter is defined by a kernel, which is a small array applied to each pixel and its neighbors within an image. In most applications, the center of the kernel is aligned with the current pixel, and the kernel is a square with an odd number (3, 5, 7, etc.) of elements in each dimension. The process used to apply filters to an image is known as convolution, and may be applied in either the spatial or the frequency domain. In the spatial domain, the first part of the convolution process multiplies the elements of the kernel by the matching pixel values when the kernel is centered over a pixel. The elements of the resulting array (which is the same size as the kernel) are averaged, and the original pixel value is replaced with this result. The CONVOL function performs this convolution process for an entire image.
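The CONVOL function above is IDL's; an equivalent spatial-domain sketch in Python, assuming SciPy is available, applies a 3 x 3 averaging kernel:

```python
import numpy as np
from scipy.ndimage import convolve

img = np.arange(25, dtype=np.float64).reshape(5, 5)

# 3 x 3 averaging kernel: each output pixel becomes the mean of its
# neighborhood, i.e., a simple smoothing (low-pass) filter.
kernel = np.ones((3, 3)) / 9.0

smoothed = convolve(img, kernel, mode='nearest')   # edges repeat nearest pixel
print(smoothed)
```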
Filtering an Image
 Within the frequency domain, convolution can be performed by multiplying the FFT (Fast Fourier
Transform) of the image by the FFT of the kernel, and then transforming back into the spatial
domain. The kernel is padded with zero values to enlarge it to the same size as the image before the
forward FFT is applied. These types of filters are usually specified within the frequency domain
and do not need to be transformed. IDL's DIST and HANNING functions are examples of filters
already transformed into the frequency domain. See Windowing to Remove Noise for more
information on these types of filters.
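A minimal NumPy sketch of the frequency-domain route: zero-pad the kernel to the image size, take the FFTs, multiply, and transform back. (This yields circular convolution with the kernel anchored at the top-left corner; production code would also re-center the kernel.)

```python
import numpy as np

img = np.random.default_rng(3).random((64, 64))
kernel = np.ones((3, 3)) / 9.0              # 3 x 3 averaging kernel

# Zero-pad the kernel to the image size before the forward FFT.
padded = np.zeros_like(img)
padded[:3, :3] = kernel

# Convolution theorem: multiplication in the frequency domain equals
# convolution in the spatial domain.
filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))
print(filtered.shape)                       # (64, 64)
```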
 The following examples in this section will focus on some of the basic filters applied within the
spatial domain using the CONVOL function:
 Low Pass Filtering
 High Pass Filtering
 Directional Filtering
 Laplacian Filtering
Low Pass Filter

High Pass Filter


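The slides above show the kernels as figures; typical 3 x 3 examples (standard textbook choices, not necessarily the exact kernels on the slides) look like this:

```python
import numpy as np

low_pass = np.ones((3, 3)) / 9.0          # averaging kernel: smooths the image

high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])      # emphasizes rapid changes in DN

directional = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])      # Sobel-style: enhances vertical edges
```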
Spatial Filtering

 Low-pass Filters: These are used to emphasize large homogeneous areas of similar tone and reduce the smaller detail. Low-frequency areas are retained in the image, resulting in a smoother appearance to the image.

[Figure: linear-stretched image (left) vs. low-pass filtered image (right).]
Spatial Filtering

 High-pass Filters: These allow high-frequency areas to pass, giving the resulting image greater detail and a sharpened appearance.

[Figure: linear contrast stretch (left) vs. high-pass filtered image (right).]
Spatial Filtering
 Directional Filters: are designed to enhance linear features such as roads,
streams, faults, etc. The filters can be designed to enhance features which are oriented
in specific directions, making these useful for radar imagery and for geological
applications. Directional filters are also known as edge detection filters.

[Figures: edge detection of lakes and streams; edge detection of fractures and shoreline.]
Image Ratios

 It is possible to divide the digital numbers of one image band by those of another image band to create a third image. Ratio images may be used to remove the influence of light and shadow on a ridge due to the sun angle. It is also possible to calculate certain indices which can enhance vegetation or geology.

Sensor        Image Ratio    EM Spectrum    Application
Landsat TM    Bands 3/2      red/green      Soils
Landsat TM    Bands 4/3      PhotoIR/red    Biomass
Landsat TM    Bands 7/5      SWIR/NIR       Clay Minerals/Rock Alteration

For example:

Normalized Difference Vegetation Index (NDVI): a commonly used vegetation index which uses the red and infrared bands of the EM spectrum.
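The index is computed per pixel as NDVI = (NIR - Red) / (NIR + Red), giving values from -1 to +1, with higher values indicating denser, healthier vegetation. A minimal NumPy sketch on synthetic band data:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-10)   # epsilon avoids divide-by-zero

nir_band = np.array([[200, 180], [90, 60]], dtype=np.uint8)
red_band = np.array([[ 50,  60], [80, 70]], dtype=np.uint8)
print(ndvi(nir_band, red_band))   # high values indicate vigorous vegetation
```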
Image Ratio example: NDVI

NDVI image of Canada. Green/yellow/brown represent decreasing magnitude of the vegetation index.
Data Visualization
Contrast enhancement or stretch reassigns the DN range that
corresponds to the 256 gray shades
The top row of images shows ETM+ data with no enhancement; the bottom row consists of linear contrast stretches of the image DNs to the full 0-255 range of gray shades
Data Visualization
The ability to quickly discern features is improved by using 3-band color mixes.

The image below assigns blue to band 2, green to band 4, and red to band 7:
Vegetation is green
Surface water is blue
Playa is gray and white
(Playas are dry lakebeds)
Data Visualization

Changing the color assignment to red, green, and blue does not alter the surface material, only the appearance of the image.

All images below show only combinations of bands 2, 4, and 7 of ETM+


Data Visualization
Other band combinations of the same data set bring out different features (or, in some cases, the lack thereof)
All images below show only combinations of bands 2, 4, and 7 of ETM+
Multispectral display - CIR
• Visualize spectral content with 3-band color composites

• Example: color infrared (CIR)
– red channel assigned to near-IR sensor band
– green channel assigned to red sensor band
– blue channel assigned to green sensor band

• Vegetation appears red, soil appears yellow-grey, water appears blue-black
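A minimal sketch of building such a composite (synthetic, co-registered 8-bit bands): stack the near-IR, red, and green sensor bands into the red, green, and blue display channels.

```python
import numpy as np

rng = np.random.default_rng(4)
nir, red, green = (rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
                   for _ in range(3))

# CIR display: NIR -> red channel, red -> green channel, green -> blue channel.
# Healthy vegetation (strong NIR reflectance) therefore renders as red.
cir = np.dstack([nir, red, green])
print(cir.shape)   # (64, 64, 3): an RGB image ready for display
```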
Atmospheric Correction
THANK YOU!
