Geoprocessing 2011 3 10 30026
Abstract—When triangulating RGB aerophotographs, if automatically and randomly selected matching pass points unfortunately fall into water areas, these points, limited by their inaccuracy, will decrease the precision of triangulation. Therefore, extracting water areas beforehand helps eliminate pass points that fall into water and guarantees the quality of aerotriangulation. This paper puts forth a new methodology to extract water areas from RGB aerophotographs. The procedure begins by segmenting the whole aerophotograph into homogenous and united segments. Subsequently, the chromatic and textural features of every segment are computed and compared with those of sampled water segments. Finally, the segments whose chromatic and textural features are similar to the sampled segments' are extracted as water areas. The methodology has obvious merits in effectiveness and generality.

Keywords—CIELAB; chromatic analysis; textural analysis; watershed segmentation; ISODATA

I. INTRODUCTION

Aerotriangulation is a key step in aerophotogrammetry. Its basic goal is to compute all elements of exterior orientation and the pass points of a region by block adjustment. The accuracy of the matching points on an aerophotograph directly determines the precision of the block adjustment and the final product of aerotriangulation. However, water areas, influenced by wind force and gravity, are usually in irregular motion. If automatically and randomly selected matching pass points (a few pass points selected beforehand to compute the exterior orientation elements) unfortunately fall into water areas, these points, limited by their inaccuracy, will decrease the precision of triangulation. Therefore, extracting water areas beforehand helps eliminate pass points that fall into water and guarantees the quality of aerotriangulation. In this paper, we concentrate on extracting water areas from RGB aerophotographs and marking them.

Multispectral aerophotographs and remote sensing images can synthesize information from different spectral bands to extract water regions. J. Deng operated on different bands of SPOT-5 images to extract water areas [1]. H. Xu modified the MNDWI value to extract water areas from multispectral remotely sensed data [2]. In contrast to multispectral data, an RGB aerophotograph (in which every pixel consists of red, green and blue values) has only three bands and little extra spectral information. Consequently, until recently no effective methodology had been proposed to extract water areas from RGB aerophotographs. To fill this gap, we put forth such a methodology in this paper.

With the enhancement of image resolution, aerophotographs contain more abundant spatial information as well as geometric and textural features, which creates favorable conditions for extracting objects via chromatic and textural analysis. W. Ma and B. S. Manjunath used textural analysis to create a texture thesaurus for browsing large aerophotographs and achieved relatively good retrieval results [3]. Other trials include W. Niblack's querying of images by color, texture and shape [4]. All these studies demonstrate that chromatic and textural analyses are effective for extracting and retrieving objects.

Enlightened by chromatic and textural analysis, we propose a new series of procedures to extract water areas from RGB aerophotographs. The procedure begins by segmenting the whole aerophotograph into homogenous and united regions. Subsequently, the chromatic and textural features of every region are computed and each region's features are compared with those of sampled water regions. Finally, the segments whose features are similar to the sampled segments' are extracted as water areas. This methodology has obvious merits in effectiveness and generality.

Water areas on aerophotographs are usually homogenous and united; therefore water regions after segmentation are almost always homogenous and united as well. We need not worry about the segmentation accuracy of other objects, because they contribute little to water extraction. We choose the watershed segmentation algorithm (Section III) because it helps attain satisfactory segmentation results. The homogenous and united character of water regions also provides pleasing conditions for chromatic analysis (Section II) and textural analysis (Section IV). The chromatic and textural features of every region are collected into its own feature vector (Section V). Because water appears differently on photographs, we need to sample water regions of all appearances beforehand and classify them in order to recognize water areas automatically. The ISODATA algorithm (Section V) is employed to classify those samples into several categories. If the distance between a certain region's feature vector and that of any one of those categories is within a threshold, the segment is identified and extracted as a water area.
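The distance-threshold decision at the end of this pipeline can be sketched as follows. This is a minimal illustration, not the paper's code: the function name, the Euclidean metric, and the array shapes are our assumptions (the paper does not specify its distance measure at this point, and the feature-vector contents are defined later, in Section V).

```python
import numpy as np

def is_water_region(feature, category_centers, threshold):
    """Return True when the distance from a region's feature vector to the
    nearest sampled-water category is within the threshold.
    The Euclidean metric and all names here are illustrative assumptions."""
    # One distance per ISODATA category (rows of category_centers)
    distances = np.linalg.norm(category_centers - feature, axis=1)
    return bool(distances.min() <= threshold)
```

ISODATA training itself (producing `category_centers` from the sampled water regions) is not shown; only the final accept/reject test is sketched.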
An RGB aerophotograph consists of three channels (Red, Green, and Blue), and the three are closely pertinent to one another. We need to integrate them into one channel in order to conduct image segmentation and textural analysis. In this paper, we utilize the CIELAB color space to integrate the three channels, which decreases their pertinence and provides the prerequisites for the image segmentation and textural analysis later on.

No color can be both green and red, or both blue and yellow. This principle underlies the CIELAB color space. CIELAB inherits the XYZ standard color system and indicates the distinctions between light and dark, red and green, and blue and yellow by using three axes: L*, a*, b*. The central vertical axis represents lightness (L*), whose value ranges from 0 (black) to 100 (white). On each of the other axes the value runs from positive to negative: on the a-a' axis, positive values indicate amounts of red while negative values indicate amounts of green; on the b-b' axis, yellow is positive and blue is negative. For both axes, zero is neutral [5].

"Texture" refers to the arrangement or characteristics of the constituent elements of anything. A texture feature is a value, computed from the image of a region, which quantifies the gray-level variation within the region. According to Kenneth [6], a texture feature value is independent of the region's position, orientation, size, shape and brightness (average gray level). Therefore, the shape and size of a segmented area do not interfere with the results of chromatic and textural analysis.

According to Julesz [7], human texture discrimination is based on the second-order statistics of image intensities, a deduction which gave rise to a popular textural descriptor: the co-occurrence matrix. A co-occurrence matrix counts the occurrences of the different gray-level pairs of pixels separated by a certain distance (one pixel, two pixels, etc.). The (i, j) element of the co-occurrence matrix P for a certain region is the number of times, divided by M, that gray levels i and j occur in two pixels separated by that distance and lying along that direction in the region, where M is the number of pixel pairs contributing to P. The matrix P is N by N, where the gray scale has N shades of gray [6].

Watershed segmentation is an approach based on the concept of morphological watersheds. In this "topographic" interpretation, every pixel's gray level denotes its "height". For a particular regional minimum, the set of points at which a drop of water, if placed at any of those points, would fall with certainty to that single minimum is called its catchment basin. The points at which a water drop would be equally likely to fall to more than one such minimum constitute crest lines on the topographic surface, termed watershed lines. Assume that a hole is punched in each regional minimum and that the entire topography is flooded from below by letting water rise through the holes at a uniform rate. When the rising water from different catchment basins is about to merge, a dam is built to prevent the merging. The flooding finally reaches a stage when only the tops of the dams are visible above the water lines. These dam boundaries correspond to the divide lines of the watersheds; therefore, they are the boundaries extracted by a watershed segmentation algorithm [7].

II. CIELAB COLOR SPACE AND CHROMATIC ANALYSIS

There are two steps to transform the RGB color space into the CIELAB color space [5]. The first step is to apply the following matrix to convert RGB into the CIEXYZ color space:

    [X]   [0.412453  0.357580  0.180423]   [R]
    [Y] = [0.212671  0.715160  0.072169] · [G]    (1)
    [Z]   [0.019334  0.119193  0.950227]   [B]

Subsequently, (X*, Y*, Z*) need to be calculated.

When X/Xn, Y/Yn, Z/Zn > 0.008856:

    X* = (X / Xn)^(1/3)    (2)
    Y* = (Y / Yn)^(1/3)    (3)
    Z* = (Z / Zn)^(1/3)    (4)

When X/Xn, Y/Yn, Z/Zn ≤ 0.008856:

    X* = 7.787 · (X / Xn) + 0.138    (5)
    Y* = 7.787 · (Y / Yn) + 0.138    (6)
    Z* = 7.787 · (Z / Zn) + 0.138    (7)

Here (Xn, Yn, Zn) = (0.312779, 0.329184, 0.358037) represents the white reference point, which indicates a completely matte white body.

Then L*, a*, b* can be calculated through the following equations:

    L* = 116 · Y* − 16    (8)
    a* = 500 · (X* − Y*)    (9)
    b* = 200 · (Y* − Z*)    (10)

The application of the CIELAB color space in this paper can be briefly illustrated by Diagram I.

DIAGRAM I. APPLICATION OF CIELAB COLOR SPACE

The L* component of the CIELAB color space can be used in watershed segmentation (Section III) and in textural analysis (Section IV).
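Equations (1)-(10) are simple enough to sketch directly. Below is a minimal NumPy version using the constants stated in Section II; the function name is ours, and RGB values scaled to [0, 1] are an assumption the section does not state explicitly.

```python
import numpy as np

# Conversion matrix of equation (1) and the white reference point (Xn, Yn, Zn)
RGB_TO_XYZ = np.array([[0.412453, 0.357580, 0.180423],
                       [0.212671, 0.715160, 0.072169],
                       [0.019334, 0.119193, 0.950227]])
WHITE_POINT = np.array([0.312779, 0.329184, 0.358037])

def rgb_to_lab(rgb):
    """Convert an (H, W, 3) array of RGB values in [0, 1] to (L*, a*, b*)
    following equations (1)-(10)."""
    xyz = rgb @ RGB_TO_XYZ.T                       # equation (1)
    t = xyz / WHITE_POINT
    # Equations (2)-(4) above the 0.008856 threshold, (5)-(7) at or below it
    starred = np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 0.138)
    x_s, y_s, z_s = starred[..., 0], starred[..., 1], starred[..., 2]
    L = 116.0 * y_s - 16.0                         # equation (8)
    a = 500.0 * (x_s - y_s)                        # equation (9)
    b = 200.0 * (y_s - z_s)                        # equation (10)
    return L, a, b
```

For a black pixel all three ratios fall in the linear branch, so X* = Y* = Z* = 0.138 and a* = b* = 0 with L* very near zero, matching the neutral axis described above.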
We use the equations above to convert every pixel of an aerophotograph from the RGB color space to the CIELAB color space, and then form a gray image from the L* component of every pixel. Although this gray photograph is no longer quantized to 256 gray levels, the L* component brings unexpected benefits to the chromatic and textural analysis later on.

Segmented areas usually have similar colors, so we can use the averages of a* and b* over all pixels in a region to represent its chromatic feature. Experiments have demonstrated that the chromatic difference between non-water areas and water areas is obvious, which means a* and b* can be employed to describe a region chromatically. We therefore use a* and b* as the chromatic parameters of a segmented region in its feature vector (Section V). If a region's chromatic feature is similar to those of the sampled water areas, it is possibly a water area we want to extract.

III. IMAGE SEGMENTATION

In our methodology, we replace the 256 gray levels with the L* component of the CIELAB color space. With this replacement, we can form a gray picture by synthesizing the three channels of an RGB aerophotograph. Compared to simply using the average gray level of the three channels, using the L* component decreases the number of fragments after segmentation and attains a relatively clear and accurate edge.

[Figure 1: segmentation results, panels (a) and (b)]

In Fig. 1, (b) shows a correct and clear boundary between the water area and the non-water area, while result (a) displays an inaccurate boundary. This manifests the effectiveness and correctness of the L* component in segmenting RGB aerophotographs.

IV. TEXTURAL ANALYSIS

In our methodology, we replace the traditional 256-level gray scale with the L* component of the CIELAB color space. This substitution brings two advantages. First, the RGB components of the aerophotograph are integrated into one L* component, which ranges from 0 to 100; in addition, the dimension of the co-occurrence matrix decreases from 256×256 to 101×101. Traditionally, the quantization level of the input data is reduced (e.g., from 8-bit data with values from 0 to 255 to 5-bit data with values from 0 to 31) when creating co-occurrence matrices so that the matrices do not become too large, but this usually causes texture information loss, especially of detailed information. Taking advantage of the L* component of the CIELAB color space, the co-occurrence matrices decrease in dimension with relatively little loss of textural information.

Haralick proposed 14 features based on co-occurrence matrices [8]. The angular second-moment feature (ASM) is a measure of the homogeneity of the image:

    ASM = Σi Σj p(i, j)²    (11)
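Equation (11) over a 101-level L* image can be sketched as follows. The section does not fix the pixel-pair offset, so the single horizontal offset of one pixel (distance 1, 0 degrees) and the function name are our assumptions.

```python
import numpy as np

def asm_feature(l_star, levels=101):
    """Angular second moment, equation (11), computed from a normalized
    co-occurrence matrix over horizontally adjacent pixel pairs."""
    # Quantize L* (range 0-100) to integer levels 0..100
    q = np.clip(np.rint(l_star), 0, levels - 1).astype(int)
    p = np.zeros((levels, levels))
    # Count gray-level pairs (i, j) for pixels one step apart horizontally
    np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p /= p.sum()        # divide by M, the number of contributing pairs
    return float((p ** 2).sum())
```

As a sanity check on the homogeneity interpretation, a perfectly uniform region concentrates all pairs in one cell of p and yields the maximum ASM of 1, while a high-contrast pattern spreads the mass and lowers the value.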
Figure 5. Extraction result of Trial 4: (a) is the original RGB aerophotograph; (b) is attained using the CCV generated in Trial 1 with the ratio set to 1.2.

VII. CONCLUSION

It is innovative to combine watershed segmentation, the CIELAB color space, chromatic and textural analysis, and ISODATA classification to extract water from RGB aerophotographs. Through numerous experimental comparisons, we find that a ratio set between 1.1 and 1.8 is optimal for water extraction. However, it is undeniable that ripples or ship traffic cause a few fragments after segmentation, which hampers chromatic and textural analysis because the feature values of those fragments are neither accurate nor representative. Although merging those fragments into united and homogenous areas is a good solution to this problem, it demands much more time and manual involvement; this is what we intend to do to improve the result of water extraction in future work.

ACKNOWLEDGMENT

Special thanks to Professor Yongjun Zhang for his guidance in this research, and to our team members for their cooperation.

REFERENCES