Remote Sensing Image Processing
Fundamentals of Remote Sensing by George Joseph, 2005 (source: NPTEL, IIT Kanpur)
Electromagnetic spectrum
Optical wavelength region
(a) Reflective portion: (i) Visible (ii) Near IR (iii) Middle IR (1.30 – 3.0 µm)
(b) Emissive (thermal) portion: 7.00 – 15.0 µm
Microwave region
Each DN in this digital image corresponds to one small area of the visual image and gives the level of darkness or lightness of that area.
The higher the DN value, the brighter the area: zero represents perfect black, the maximum value represents perfect white, and the intermediate values are shades of grey.
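As an illustrative sketch (the array values are invented), the 8-bit DN-to-brightness convention can be shown in a few lines of Python:

```python
import numpy as np

# A tiny 2x3 "image" of 8-bit digital numbers (DNs).
# 0 = perfect black, 255 = perfect white, in between = shades of grey.
dn = np.array([[0, 128, 255],
               [64, 192, 32]], dtype=np.uint8)

# Normalise DNs to relative brightness in [0, 1] for display.
brightness = dn / 255.0
```

Here the pixel with DN 255 maps to full brightness and the pixel with DN 0 to black.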
Landsat ETM+ image of Littleport in Cambridgeshire (England), acquired on 19 June 2000. Band combination 4,3,2
Pre-processing
The operations involved in pre-processing aim to correct the degraded raw image acquired from the sensor to create a more faithful representation of the original scene. These operations are called pre-processing because they normally precede further manipulation and analysis of the image data to extract specific information.
Radiometric correction
Radiometric corrections are used to modify DN values in order to account for noise, that is, contributions to the DN that are a function NOT of the feature being sensed but of: the intervening atmosphere; the sun-sensor geometry; the sensor itself (errors and gaps).
Radiometric correction
Missing lines Striping Illumination and view angle effects Sensor calibration Terrain effects
Source: earthobservatory.nasa.gov
Striping or banding
Occurs due to non-identical detector response: if a detector of an electromechanical sensor goes out of adjustment, it provides readings less than or greater than the other detectors for the same band over the same ground cover. Several methods have been proposed to remove this error. Linear method: assumes a linear relation between input and output. Histogram matching: a histogram of each line is created; stripes are characterized by a distinct histogram.
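A minimal sketch of the linear destriping method in Python, assuming a known number of detectors and using detector 0 as the reference (the function name and the synthetic striped image are illustrative):

```python
import numpy as np

def destripe_linear(img, n_detectors=6):
    """Linear destriping: rescale the lines recorded by each detector with a
    gain and offset so their mean and standard deviation match those of a
    reference detector (detector d records every n_detectors-th line)."""
    out = img.astype(float).copy()
    ref = out[0::n_detectors, :]                  # detector 0 as reference
    ref_mean, ref_std = ref.mean(), ref.std()
    for d in range(1, n_detectors):
        lines = out[d::n_detectors, :]
        gain = ref_std / lines.std()
        offset = ref_mean - gain * lines.mean()
        out[d::n_detectors, :] = gain * lines + offset
    return out

# Synthetic example: a flat scene where one detector reads ~20 DN too high.
rng = np.random.default_rng(0)
img = np.full((12, 8), 100.0) + rng.normal(0, 1, (12, 8))
img[2::6, :] += 20.0                              # every 6th line is striped
fixed = destripe_linear(img, n_detectors=6)
```

After correction, the mean of the striped detector's lines matches the reference detector, removing the banding.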
Sensor calibration
Necessary to generate absolute data on physical properties: reflectance, temperature, emissivity, backscatter. Calibration values are provided by the data provider/agency.
Terrain effects
Terrain effects cause differential solar illumination: some slopes receive more sunlight than others, changing the magnitude of reflected radiance reaching the sensor. Topographic slope and aspect introduce radiometric distortion, described by the bidirectional reflectance distribution function (BRDF). Correction requires a DEM and the sun elevation.
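One common first-order method that uses DEM-derived slope and aspect together with the sun position is the cosine correction; a sketch (function name and test values are illustrative, and more refined methods such as Minnaert correction exist):

```python
import numpy as np

def cosine_correction(radiance, slope, aspect, sun_zenith, sun_azimuth):
    """Cosine topographic correction. All angles in radians; slope and
    aspect come from a DEM, sun angles from the scene metadata.
    L_corrected = L * cos(sun_zenith) / cos(i), where i is the local
    solar incidence angle on the tilted surface."""
    cos_i = (np.cos(sun_zenith) * np.cos(slope)
             + np.sin(sun_zenith) * np.sin(slope)
             * np.cos(sun_azimuth - aspect))
    return radiance * np.cos(sun_zenith) / cos_i

# Sanity check: on flat terrain (slope = 0) the correction changes nothing.
L = np.array([80.0, 80.0])
flat = cosine_correction(L, slope=np.zeros(2), aspect=np.zeros(2),
                         sun_zenith=np.deg2rad(30), sun_azimuth=np.deg2rad(135))
```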
BRDF
BRDF gives the reflectance of a target as a function of illumination geometry and viewing geometry. The BRDF depends on wavelength and is determined by the structural and optical properties of the surface, such as shadow-casting, multiple scattering, mutual shadowing, transmission, reflection, absorption and emission by surface elements, facet orientation distribution and facet density. The BRDF is needed in remote sensing for the correction of view and illumination angle effects
Radiometric correction
The atmosphere leads to selective scattering, absorption and emission. The total radiance received at the sensor depends on the ground radiance (directly reflected) and radiation scattered by the atmosphere (path radiance). The relationship between the radiance received at the sensor (above the atmosphere) and the radiance leaving the ground is commonly written as
Ls = ρ·H·T/π + Lp
where Ls is the at-sensor radiance, H the total downwelling radiance (incident energy), ρ the reflectance of the target (albedo), T the atmospheric transmittance, and Lp the atmospheric path radiance (all wavelength dependent).
In the solar reflection region, scattering is the dominant process causing path radiance. Path radiance causes (1) a reduction in contrast due to a masking effect, making dark objects appear less dark and bright objects less bright, and (2) the adjacency effect: atmospheric scattering may direct some radiation away from the sensor FOV, causing a decrease in the effective spatial resolution of the sensor. Two methods for correction: dark object subtraction, and atmospheric models such as MODTRAN (http://modtran.org/) and 6S (Second Simulation of a Satellite Signal in the Solar Spectrum; http://rtcodes.ltdri.org/)
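Dark object subtraction can be sketched in a few lines: the darkest pixel in a band is assumed to be a zero-reflectance target, so its DN is treated as the additive path-radiance contribution (the DN values below are invented):

```python
import numpy as np

def dark_object_subtraction(band):
    """Dark object subtraction (DOS): take the minimum DN in the band as
    the additive path-radiance (haze) contribution and subtract it from
    every pixel."""
    dark_value = band.min()
    return band - dark_value

band = np.array([[12, 40], [57, 12]])   # haze adds ~12 DN everywhere
corrected = dark_object_subtraction(band)
```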
2. When we want to compare upwelling radiance from a surface to some property of that surface in terms of a physically based model. 3. When comparing satellite data acquired on different dates, as the state of the atmosphere changes from time to time.
4. Radiometric corrections for illumination, atmospheric influences and sensor characteristics are applied prior to distribution of the data to the user.
Geometric correction
Geometric correction is the process of rectifying geometric errors introduced into the imagery during its acquisition. It is the transformation of a remotely sensed image so that it has the scale and projection properties of a map. A related technique, registration, is the fitting of the coordinate system of one image to that of a second image of the same area. Geocoding and georeferencing are terms often used in connection with the geometric correction process. The basic concept behind geocoding is the transformation of satellite images into a standard map projection so that image features can be accurately located on the earth's surface and images can be compared.
Scan skew: a major source of geometric error in satellite data is the satellite path orientation (non-polar).
The geometric correction process involves: 1. Identifying the image coordinates of several clearly identifiable points, called ground control points (GCPs), in the distorted image (A: A1 to A4). 2. Obtaining the true ground coordinates from a map (B: B1 to B4) or another image, and matching the GCPs to their true positions in ground coordinates. This is called image-to-map or image-to-image registration.
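The image-to-map transformation estimated from the GCPs is typically a low-order polynomial. A first-order (affine) fit by least squares can be sketched as follows (the GCP coordinates are invented for illustration):

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Fit a first-order (affine) polynomial mapping image coordinates
    (col, row) of the GCPs to map coordinates (E, N) by least squares.
    Needs at least 3 GCPs; more GCPs give a least-squares best fit."""
    img_xy = np.asarray(img_xy, float)
    A = np.column_stack([img_xy, np.ones(len(img_xy))])   # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs   # 3x2 matrix: map = [x, y, 1] @ coeffs

# Hypothetical GCPs: the map frame is shifted by (100, 200) with 30 m pixels.
gcps_img = [(0, 0), (10, 0), (0, 10), (10, 10)]
gcps_map = [(100, 200), (400, 200), (100, 500), (400, 500)]
C = fit_affine(gcps_img, gcps_map)
pred = np.array([5.0, 5.0, 1.0]) @ C   # predicted map position of pixel (5, 5)
```

With the fitted coefficients, any image pixel can be projected into map coordinates (here pixel (5, 5) lands at (250, 350)).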
To geometrically correct the original distorted image, resampling is used to determine the DN values of the new pixel locations of the corrected image. The resampling process calculates the new pixel values from the original digital pixel values in the uncorrected image. Nearest neighbour, bilinear interpolation and cubic convolution are three common resampling methods. The nearest neighbour method uses the DN value from the original image that is nearest to the new pixel location in the corrected image.
Resampling methods
a. Nearest neighbour: each new pixel takes the DN of the nearest original pixel (original DNs are preserved)
b. Bilinear interpolation: each new pixel gets a value from the weighted average of the 4 (2 × 2) nearest pixels; smoother but synthetic
c. Cubic convolution (smoothest): new pixel DNs are computed by weighting the 16 (4 × 4) surrounding DNs
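The nearest neighbour method can be sketched as follows, assuming an inverse geometric transform that maps each corrected output pixel back into the distorted source image (the toy shift transform is purely illustrative):

```python
import numpy as np

def nearest_neighbour_resample(src, inverse_map, out_shape):
    """Nearest-neighbour resampling: for each pixel of the corrected output
    grid, find its location in the distorted source image via the inverse
    geometric transform and copy the DN of the nearest source pixel."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = inverse_map(r, c)
            sr, sc = int(round(sr)), int(round(sc))
            if 0 <= sr < src.shape[0] and 0 <= sc < src.shape[1]:
                out[r, c] = src[sr, sc]
    return out

src = np.arange(16).reshape(4, 4)
# Toy inverse transform: the output is the source shifted down-right by 1 pixel.
shifted = nearest_neighbour_resample(src, lambda r, c: (r - 1, c - 1), (4, 4))
```

Because DNs are copied rather than interpolated, nearest neighbour preserves the original radiometry, which matters for later classification.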
http://www.geo-informatie.nl/courses/grs20306/course/Schedule/Geometric-correction-RS-new.pdf
Image Enhancement
Visual analysis and interpretation are often sufficient for many purposes to extract information from remote sensing data. Enhancement means altering the appearance of a digital image so as to make it more informative for visual interpretation. The characteristics of each image, in terms of the distribution of pixel values, change from one area to another.
Image transformation and filtering are also used as image enhancement techniques.
For visual enhancement, changing the image contrast is one of the most important operations.
Contrast is defined as the difference in colour that makes an object (or its representation in an image) distinguishable.
In raw imagery, the useful data often cover only a small portion of the available range of digital values (for example, 8 bits or 256 levels). Contrast enhancement involves changing the original values so that more of the available range is used, thereby increasing the contrast between targets and their backgrounds. For contrast enhancement, the concept of an image histogram is important. A histogram is a graphical representation of the brightness values that comprise an image. The brightness values (i.e. 0–255) are displayed along the x-axis of the graph; the frequency of occurrence of each DN value in the image is shown on the y-axis.
By manipulating the range of DN values represented by the histogram of an image, various contrast enhancement techniques can be applied. The simplest type is a linear contrast stretch. This involves identifying the minimum and maximum brightness values in the image (lower and upper bounds from the histogram) and applying a transformation to stretch this range to fill the full range.
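A minimal linear contrast stretch in Python (the DN values are invented; the lower and upper bounds are simply the band minimum and maximum, as described above):

```python
import numpy as np

def linear_stretch(band, out_max=255):
    """Linear contrast stretch: map [min, max] of the input DNs onto the
    full 0..out_max display range."""
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * out_max).astype(np.uint8)

band = np.array([[84, 90], [100, 120]])   # raw DNs occupy a narrow range
stretched = linear_stretch(band)          # now spans the full 0-255 range
```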
Histogram equalization: the histogram is transformed so that all DN levels occur with approximately equal frequency across the whole range. This method expands some parts of the DN range at the expense of others, by dividing the histogram into classes containing equal numbers of pixels.
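Histogram equalisation is usually implemented via the cumulative distribution function (CDF); a sketch using that standard formulation (the tiny test image is invented):

```python
import numpy as np

def equalize(band, levels=256):
    """Histogram equalisation: build a look-up table from the cumulative
    distribution function so that output DN levels hold roughly equal
    numbers of pixels."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / band.size          # cumulative fraction of pixels
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band]

band = np.array([[10, 10], [10, 200]], dtype=np.uint8)
eq = equalize(band)   # the dominant DN 10 is pushed up toward mid-grey
```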
Piecewise linear stretch: used when the histogram is bimodal. In this approach, a number of linear enhancement steps that expand the brightness ranges between breakpoints are used.
Density slicing: combining DNs within specified ranges into single values. This transforms the image from a continuum of grey tones into a limited number of grey or colour tones reflecting the specified ranges of digital numbers. This is useful in displaying weather satellite information.
Filtering
Filtering comprises a set of image processing functions used to enhance the appearance of an image. Filter operations can be used to sharpen or blur images, selectively suppress image noise, detect and enhance edges, or alter the contrast of the image.
Spatial filtering
Low pass filters: these are used to emphasize large homogeneous areas of similar tone and reduce the smaller detail. Low-frequency components are retained, resulting in a smoother appearance of the image. Examples: average, median and majority filters.
Spatial filtering
In spatial domain filtering, a moving window of a set of pixels (e.g. 3 × 3 or 5 × 5) is passed over each pixel in the image. A mathematical calculation using the pixel values under the window is applied, and the central pixel of the window is replaced by the result. This window is called a convolution kernel.
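A minimal moving-window average (low pass) filter following the scheme above (edge handling is simplified by leaving border pixels unchanged; the test image is invented):

```python
import numpy as np

def mean_filter(img, size=3):
    """Low-pass (average) filter: slide a size x size window over the image
    and replace each central pixel with the mean of the window.
    Border pixels are left unchanged in this simple sketch."""
    out = img.astype(float).copy()
    k = size // 2
    for r in range(k, img.shape[0] - k):
        for c in range(k, img.shape[1] - k):
            out[r, c] = img[r - k:r + k + 1, c - k:c + k + 1].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0                 # a single noisy spike
smooth = mean_filter(img)       # the spike is spread over its 3x3 window
```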
Edge detection filters highlight linear features such as roads or field boundaries. They are useful in applications such as geology for the detection of linear geologic structures (lineaments), and for determining the boundaries of homogeneous regions in radar images. Roberts and Sobel filters (high pass filters) are most commonly used.
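A sketch of the Sobel operator (the kernels are the standard Sobel masks; the synthetic two-region image is invented to show how a boundary lights up):

```python
import numpy as np

def sobel_edges(img):
    """Sobel edge detection: apply the horizontal and vertical 3x3 Sobel
    kernels at each interior pixel and combine the two gradients into an
    edge magnitude."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = gx_k.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx, gy = (win * gx_k).sum(), (win * gy_k).sum()
            mag[r, c] = np.hypot(gx, gy)
    return mag

# A vertical boundary between a dark (0) and a bright (10) region.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = sobel_edges(img)   # high magnitude only along the boundary column
```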
[Figure: original ("actual") image alongside its median-filtered, edge-detection and high pass filtered versions]
Image Transformation
A process through which one can re-express the information content of an image. Image transformations generate new images from two or more source images which highlight particular features or properties of interest better than the original input images.
Various methods
Arithmetic Operations Principal component analysis Hue, saturation and intensity (HSI) transformation
Arithmetic operations
Image addition, image subtraction, image division (image ratioing) and image multiplication. These operations are performed on two or more co-registered images of the same area. Division (ratioing) is the most widely used operation for geological, ecological and agricultural applications.
More than 150 vegetation indices (VIs) have been published in the scientific literature so far, but only a small subset has a substantial biophysical basis or has been systematically tested. Important VIs are: Normalized Difference Vegetation Index (NDVI), Simple Ratio Index (SR), Enhanced Vegetation Index (EVI), Atmospherically Resistant Vegetation Index (ARVI), Soil Adjusted Vegetation Index (SAVI).
NDVI = (NIR − R)/(NIR + R)
SR = NIR/R
EVI = 2.5(NIR − R)/(NIR + 6R − 7.5B + 1)
ARVI = (NIR − (2R − B))/(NIR + (2R − B))
SAVI = (NIR − R)(1 + L)/(NIR + R + L)
where NIR = near infrared, R = red, B = blue and L = soil-brightness dependent correction factor
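Two of these indices can be computed directly from the band arrays, for example (the reflectance values are invented; L = 0.5 is a commonly used default for SAVI):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) * (1 + L) / (nir + red + L)

nir = np.array([0.5, 0.4])   # healthy vegetation reflects strongly in NIR
red = np.array([0.1, 0.3])   # and absorbs strongly in red
v = ndvi(nir, red)           # high value = dense, healthy vegetation
```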
[Figure: principal component images PCA2 and PCA3]
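Principal component analysis of a multi-band image reduces to an eigen-decomposition of the band covariance matrix; a sketch with synthetic correlated bands (the data and function name are illustrative):

```python
import numpy as np

def principal_components(bands):
    """PCA of a multi-band image: bands is an (n_pixels, n_bands) array;
    returns the pixels projected onto the eigenvectors of the band
    covariance matrix, ordered by decreasing variance."""
    X = bands - bands.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # largest variance first
    return X @ eigvecs[:, order]

# Two strongly correlated bands: PC1 captures almost all the variance,
# leaving PC2 with the residual (largely uncorrelated) information.
rng = np.random.default_rng(1)
b1 = rng.normal(100, 10, 500)
b2 = 0.9 * b1 + rng.normal(0, 1, 500)
pcs = principal_components(np.column_stack([b1, b2]))
```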
HSI transform
Images are generally displayed in RGB (primary) colours. An alternative is the hue, saturation and intensity (HSI) system. Hue refers to the average wavelength of the light contributing to the colour; saturation specifies the purity of the colour relative to grey; intensity relates to the total brightness of a colour. The HSI transform is used for image enhancement.
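A sketch of the RGB-to-HSI conversion for a single pixel, using the standard HSI formulas (arccos form for hue; components assumed to be in [0, 1]):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB triplet (components in [0, 1]) to hue (degrees),
    saturation and intensity using the standard HSI formulas."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:                 # hue lies in the lower half of the circle
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red: hue 0, fully saturated
```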
Image Classification
A process of assigning pixels, or groups of pixels, in an image to one of a number of classes. The output of image classification is a thematic map: a map that focuses on a specific theme or subject area (like land use/cover in remote sensing). Land cover is the natural landscape recorded as surface components: forest, water, wetlands, urban, etc. Land cover can be documented by analyzing the spectral signatures of satellite and aerial imagery. Land use is the way humans use the landscape: residential, commercial, agricultural, etc. Land use can be inferred, but not explicitly derived, from satellite and aerial imagery.
Conventional multispectral classification techniques perform class assignments based only on the spectral signatures of a classification unit. Contextual classification refers to the use of spatial, temporal and other related information, in addition to the spectral information of a classification unit, in the classification of an image. Object-based classification is based on segmentation techniques. A classification unit can be a pixel, a group of neighbouring pixels, or the whole image.
Classification Approaches
There are three approaches to pixel labeling 1. Supervised 2. Unsupervised 3. Semi-supervised
Supervised classification
Requires training areas to be defined by the analyst in order to determine the characteristics of each class.
Supervised classifiers
Maximum likelihood Neural network Decision tree Kernel based sparse classifiers
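A maximum likelihood classifier under the usual multivariate Gaussian assumption can be sketched as follows (the class names and training samples are synthetic, and equal class priors are assumed):

```python
import numpy as np

def train_gaussians(samples):
    """Fit a multivariate Gaussian (mean vector, covariance matrix) to the
    training pixels of each class; samples maps class name -> (n, bands)."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False))
            for c, x in samples.items()}

def classify_ml(pixel, models):
    """Maximum likelihood: assign the pixel to the class whose Gaussian
    gives the highest log-likelihood (equal priors, constants dropped)."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in models.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

# Synthetic training pixels in two bands for two well-separated classes.
rng = np.random.default_rng(2)
train = {"water": rng.normal([20, 10], 2, (50, 2)),
         "forest": rng.normal([60, 90], 2, (50, 2))}
models = train_gaussians(train)
label = classify_ml(np.array([21.0, 11.0]), models)
```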
[Figure: feed-forward neural network with input, hidden and output layers, connected by weights w_ij and w_jk]
[Table: classification accuracy (%) by classifier, e.g. maximum likelihood, for an ETM+ dataset with 7 classes; the accuracy values were not recovered]
Unsupervised classifier
Unsupervised classification searches for natural groups of pixels, called clusters, present within the data by assessing the relative locations of the pixels in feature space. An algorithm is used to identify unique clusters of points in feature space, which are then assumed to represent unique land cover classes. The number of clusters (i.e. classes) is defined by the user. These are automated procedures and therefore require minimal user interaction.
Unsupervised classifiers
The most popular clustering algorithms used in remote sensing image classification are: 1. ISODATA, a statistical clustering method, and 2. SOM (self-organising feature map), an unsupervised neural classification method.
ISODATA
Iterative Self-Organizing Data Analysis Technique
Parameters the user must enter include: N, the maximum number of clusters wanted (depends on the user's knowledge of the area); T, a convergence threshold, i.e. the percentage of pixels that must remain in the same cluster between iterations for the algorithm to stop; and M, the maximum number of iterations to be performed.
ISODATA Procedure
1. N arbitrary cluster means are established. 2. The image is classified using a minimum distance classifier. 3. A new mean for each cluster is calculated. 4. Steps 2 and 3 are repeated: the image is classified again using the new cluster means, and the means are recalculated.
After each iteration, the algorithm calculates the percentage of pixels that remained in the same cluster between iterations. When this percentage exceeds T (the convergence threshold), the program stops. If the convergence threshold is never met, the program continues for M iterations and then stops.
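The iterative loop above, without ISODATA's cluster splitting and merging steps, can be sketched as follows (the two-cluster data and parameter defaults are purely illustrative):

```python
import numpy as np

def isodata_like(pixels, n_clusters, T=0.98, M=20):
    """Simplified ISODATA loop (no split/merge): start from arbitrary
    cluster means, classify by minimum distance, recompute the means, and
    stop when more than fraction T of pixels keep their cluster, or after
    M iterations."""
    idx = np.linspace(0, len(pixels) - 1, n_clusters).astype(int)
    means = pixels[idx].astype(float)            # arbitrary starting means
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(M):
        # Minimum distance classification against the current means.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)
        labels = new_labels
        for k in range(n_clusters):              # recompute cluster means
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if unchanged >= T:                       # convergence reached
            break
    return labels, means

# Synthetic feature-space data: two well-separated groups of pixels.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(5, 0.5, (40, 2))])
labels, means = isodata_like(pts, n_clusters=2)
```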
Semi-supervised classification
Uses a small amount of labelled training data to label a large amount of unlabelled data, because the collection of training data is expensive.
Semi-supervised assumptions: nearby points are likely to have the same label; similar data should have the same class label; two points connected by a path going through high-density regions should have the same label.
Image/Photo Interpretation
The act of examining images for the purpose of identifying objects and judging their significance. Depending upon the instruments employed for data collection, one can interpret a variety of images: aerial photographs, scanner, thermal and radar imagery. Even digitally processed imagery requires image interpretation.
Shape
Shape refers to the general form or outline of an individual object. Man-made features have specific shapes. A railway is readily distinguishable from a road or a canal, as its shape consists of long straight tangents and gentle curves, as opposed to the curved shape of a highway.
Size
Length, width, height, area and volume of the object; size is a function of image scale.
Tone/colour
Tone refers to the relative brightness or colour of an object in an image, and is one of the fundamental elements used to differentiate between objects. It is a qualitative measure. No feature has a constant tone: tone varies with the reflectivity of the object, the weather, the angle of light on the object and the moisture content of the surface. The sensitivity of tone to all these variables makes it a very discriminating factor, and slight changes in the natural landscape are more easily comprehended because of tonal variations.
Shadow
A shadow provides information about the object's height, shape and orientation (e.g. tree species).
Patterns
Similar to shape, the spatial arrangement of objects (e.g. row crops vs. pasture) is useful for identifying an object and its usage. Spatial phenomena such as the structural pattern of an object in an image may be characteristic of artificial as well as natural objects: parcelling (plots of land) patterns, land use, geomorphology of tidal marshes or shallows, land reclamation, erosion gullies, tillage, planting direction, ridges of sea waves, lake districts, natural terrain, etc.
Texture
The frequency of tonal change in a particular area of an image. A qualitative characteristic, generally referred to as rough or smooth. Texture involves the total sum of tone, shape, pattern and size, which together give the interpreter an intuitive feel for the landscape being analyzed.
Association
Associating the presence of one object with another, or relating it to its environment, can help identify the object (e.g. industrial buildings often have access to railway sidings; nuclear power plants are often located beside large bodies of water). For example, white irregular patches adjacent to the sea suggest sand.
Site
The location of an object amidst certain terrain characteristics shown by the image may exclude incorrect conclusions, e.g. the site of an apartment building is not plausible in a swamp or a jungle.
Time
Temporal characteristics of a series of photographs can be helpful in determining the historical change of an area (e.g. looking at a series of photos of a city taken in different years can help determine the growth of suburban neighborhoods).
Idealization
Classification of an object, by means of specific knowledge, within a known category upon its detection in the image. This involves: separating sets of similar objects and drawing boundary lines; separating different groups of objects and deducing their significance based on converging evidence; establishing the identity of objects delineated by analysis; and standardizing the representation of what is actually seen in the imagery.
Applications
DRASTIC, an overlay-and-index method developed for the Environmental Protection Agency (EPA) by the American Water Well Association (Aller et al., 1987), is a widely used model. It assesses the pollution potential of an area by combining the key factors influencing solute transport. The original DRASTIC Index (DIorg) was calculated using the most important hydrogeologic factors that affect the potential for groundwater pollution. The DRASTIC Index does not provide absolute answers; it only differentiates highly vulnerable areas from less vulnerable ones. The model does not include soil structure.
The land use data used in this study were obtained from Landsat 5 Thematic Mapper (TM) imagery from 1992. TM images from two seasons, spring and summer, were used. The images were classified into four Level I classes (urban, forest, water and agriculture) and then further classified into agricultural crops.
In Atmospheric Environment 42 (2008) 7823–7843, Randall V. Martin notes that global observations are now available for a wide range of species, including aerosols, tropospheric O3, tropospheric NO2, CO, HCHO and SO2.
HCHO (formaldehyde) columns are closely related to surface VOC (volatile organic compound) emissions, since HCHO is a high-yield intermediate product of the oxidation of reactive non-methane VOCs.
Hyperspectral Remote Sensing of Water Quality Parameters for Large Rivers in the Ohio River Basin
Naseer A. Shafique, Florence Fulk, Bradley C. Autrey, Joseph Flotemersch
Compact Airborne Spectrographic Imager (CASI) data was used. In situ water samples were collected and a field spectrometer was used to collect spectral data directly from the river.
Horizontal scanning from Lyon near a tunnel: a large concentration of particles can be observed at the intersection of several roads.
Mapping of heavy metal pollution in stream sediments using combined geochemistry, field spectroscopy, and hyperspectral remote sensing
The aim of this study is to derive parameters from spectral variations associated with heavy metals in soil and to explore the possibility of extending the use of these parameters to hyperspectral images and to map the distribution of areas affected by heavy metals on HyMAP data. Variations in the spectral absorption features of lattice OH and oxygen on the mineral surface due to the combination of different heavy metals were linked to actual concentrations of heavy metals.
Location map and HyMAP image of the Rodalquilar area, SE Spain. (a) Locations of sampling points along the studied main stream, showing the five sections. (b) HyMAP image acquired in 2004.
Soil Mapping
Soil survey and mapping using remote sensing, Tropical Ecology 43(1): 61-74, 2002 M.L.MANCHANDA, M.KUDRAT & A.K.TIWARI
Impact of industrialization on forest and non-forest land: assessing the impact of industrialization in terms of LULC in a dry tropical region (Chhattisgarh, India) using remote sensing data and GIS over a period of 30 years. Environ Monit Assess (2009) 149: 371–376.
Multi-sensor data fusion for the detection of underground coal fires, X.M. Zhang, C.J.S. Cassells & J.L. van Genderen, Geologie en Mijnbouw 77: 117–127, 1999.