Steerable Wavelet Based Fusion Model For High-Resolution Remotely Sensed Image Enhancement


Wang Jian, Peng Xiangguo, Zhou Feng, Zhao Yindi
School of Environment and Spatial Informatics, Xuzhou, China
[email protected], [email protected]

Abstract—A high-resolution image provides more detail information in the panchromatic (PAN) band and more spectral information in the multi-spectral (MS) bands. This paper develops a new Steerable Wavelet Transform–Intensity Hue Saturation (STWT-IHS) fusion model that exploits the directional texture detail of the PAN band. The IHS transform of the multi-spectral image first yields the Intensity, Hue and Saturation bands. The PAN band and the Intensity band are then decomposed into different scales, including direction-based detail scales, and the fused Intensity band is obtained by reconstruction from the high-frequency components of the PAN band and the low-frequency components of the Intensity image; the inverse IHS transform generates the final fused MS image. After presenting the STWT principle, the fusion scheme is developed from STWT and IHS theory. The PAN band and MS bands 1, 2 and 3 of a QUICKBIRD image are used to assess the quality of the fusion model. After selecting an appropriate decomposition level, the scheme produces a fused image that is compared with commonly used fusion algorithms (IHS, Mallat and SWT). The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and spectral information of the original image. To assess the quality of an image after fusion, information entropy and standard deviation are applied to evaluate the spatial detail of the fused images, while the correlation coefficient, bias index and warping degree measure the distortion between the original and fused images in terms of spectral information. Among all tested fusion algorithms, the best result is obtained when STWT-IHS is used.

Keywords—Steerable Wavelet; Image Enhancement

I. INTRODUCTION

With the development of remote sensing, the spatial, spectral and temporal resolution of images has greatly improved [1]. Remote sensing images are available from many different sensors, and how to make the best use of the information in these multi-source, multi-resolution images is one of the significant issues in remote sensing applications [2]. Image fusion effectively enhances the information available from these data and supplies a basis for target recognition and information extraction.

So far, many fusion schemes between high-resolution panchromatic images and low-resolution multi-spectral images have been proposed. Traditional fusion schemes include IHS [3], PCA [4][5], and so on. Multi-resolution fusion models have recently been widely used for image fusion because the pyramid structure corresponds well to the human visual system. However, orthogonal wavelets lack translation invariance, especially for two-dimensional signals. The "steerable" pyramid wavelet overcomes this drawback by transforming with a tight frame, which decomposes the signal with a class of arbitrary-orientation filters generated by linear combinations of a set of basis filters [6]. This paper proposes a new image fusion model integrating steerable wavelet transform theory with IHS theory and gives an experimental verification of the model on high-resolution imagery.

II. THEORETICAL PRINCIPLE

A. Wavelet Transform (WT)

The wavelet transform provides a decomposition framework that splits an image into a number of new images, each with a different degree of resolution [7]. The continuous wavelet transform of a one-dimensional function $f(x) \in L^2(R)$ can be expressed as [8]

$$W_f(a,b) = \langle f, \psi_{a,b} \rangle = \int_{-\infty}^{+\infty} f(x)\,\psi_{a,b}(x)\,dx \qquad (1)$$

The wavelet basis functions $\psi_{a,b}(x)$ are dilations and translations of the mother wavelet $\psi(x)$:

$$\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right) \qquad (2)$$

where $a, b \in R$; $a$ is the dilation or scaling factor and $b$ is the translation factor. For every scale $a$ and location $b$, the wavelet coefficient $W_f(a,b)$ represents the information contained in $f(x)$ at that scale and position. The original signal can be exactly reconstructed from the wavelet coefficients by

$$f(x) = \frac{1}{C_\psi} \int_0^{\infty}\!\!\int_{-\infty}^{+\infty} W_f(a,b)\,\psi_{a,b}(x)\,db\,\frac{da}{a^2} \qquad (3)$$

where $C_\psi$ is the normalizing factor of the mother wavelet.
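As a concrete illustration of (1)-(3), the following minimal Python sketch computes a sampled approximation of the continuous wavelet transform of a 1-D signal. It assumes the PyWavelets package (`pywt`) and the Morlet wavelet; the scale array plays the role of $a$ and the translation axis the role of $b$.

```python
# Sketch: sampled continuous wavelet transform of a 1-D signal, cf. Eqs. (1)-(3).
# Assumes the PyWavelets package (pip install PyWavelets).
import numpy as np
import pywt

fs = 256                                  # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)               # 2-second time axis
f = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 32 * t)

scales = np.arange(1, 64)                 # dilation factors a
coeffs, freqs = pywt.cwt(f, scales, 'morl', sampling_period=1 / fs)

# coeffs[i, j] ~ W_f(a_i, b_j): signal content at scale a_i and position b_j.
print(coeffs.shape)                       # (63, 512)
```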

The discrete wavelet transform can be carried out with two different algorithms, the Mallat algorithm and the 'à trous' algorithm. Multiresolution analysis based on Mallat's algorithm uses an orthonormal basis, but the transform is not shift-invariant, which can cause problems in applications [7]. More details on the realization of the Mallat algorithm and its application to image enhancement are widely discussed [7,8,9].

B. 'à trous' algorithm

In contrast to Mallat's algorithm, the 'à trous' algorithm allows a shift-invariant discrete wavelet decomposition; it is a non-orthogonal, redundant, oversampled transform. All the approximation images obtained by applying this decomposition have the same number of columns and rows as the original image. Each stage of the transformation splits the input signal into the detail coefficients $d_i(n)$ and the approximation coefficients $c_i(n)$, which serve as input for the next decomposition level:

$$d_{i+1}(n) = \sum_k g(2^i k)\, c_i(n-k) \qquad (4)$$

$$c_{i+1}(n) = \sum_k h(2^i k)\, c_i(n-k) \qquad (5)$$

The decomposition starts with $c_0(n) = f(n)$. The filters $g(2^i k)$ and $h(2^i k)$ at level $i$ are obtained by inserting an appropriate number of zeros between the filter taps of the prototype filters $g(k)$ and $h(k)$. The reconstruction of the input signal is performed by the inverse SIDWT:

$$c_i(k) = \sum_n \tilde{h}(2^i k - n)\, c_{i+1}(k) + \sum_n \tilde{g}(2^i k - n)\, d_{i+1}(k) \qquad (6)$$

where $\tilde{g}(2^i k)$ and $\tilde{h}(2^i k)$ denote the reconstruction filters. An extension of the decomposition scheme to 2-D images follows from the usual tensor-product formulation.

The improved image fusion methodology can then be developed. In the first step an IHS transform is applied to the RGB image, and the obtained I band and the PAN band are decomposed with the 'à trous' algorithm at specific levels. Salient spatial information is contained in the detail components and spectral information in the approximation components. The details of the PAN band and the approximation components of the I band are then used to reconstruct the fused I band. The fused RGB image is given by the inverse IHS transform with the new I band and the H and S bands. Figure 1 shows the schematic diagram of the basic structure of the 'à trous' based I band fusion scheme; different fusion operators F[•] define different selection rules and thus give different fusion results. This paper directly uses the coefficients of the PAN band at the approximation scale for the fused wavelet coefficients [10]. A sketch of this detail-substitution scheme is given below.

Figure 1. Image fusion scheme based on the 'à trous' algorithm
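A minimal sketch of this detail-substitution fusion follows, assuming the PyWavelets package. `pywt.swt2` is used as a stand-in for the 'à trous' filter bank (both are undecimated, shift-invariant transforms, though the filter design differs), and the fusion operator F[•] is the simple substitution described above; the wavelet choice and band handling are illustrative assumptions, not the paper's exact filters.

```python
# Sketch: undecimated wavelet fusion of a PAN band and an intensity band.
# pywt.swt2 stands in for the 'a trous' filter bank (both shift-invariant).
import numpy as np
import pywt

def fuse_intensity(pan: np.ndarray, intensity: np.ndarray, levels: int = 2):
    """Fused I band: approximation of I plus detail coefficients of PAN."""
    cp = pywt.swt2(pan, 'db2', level=levels)        # [(cA, (cH, cV, cD)), ...]
    ci = pywt.swt2(intensity, 'db2', level=levels)
    fused = []
    for (a_pan, d_pan), (a_int, _d_int) in zip(cp, ci):
        fused.append((a_int, d_pan))                # keep I approximation, PAN details
    return pywt.iswt2(fused, 'db2')

rng = np.random.default_rng(0)
pan = rng.random((256, 256))                        # stand-in PAN band
intensity = rng.random((256, 256))                  # stand-in I band (same grid)
fused_i = fuse_intensity(pan, intensity)
print(fused_i.shape)                                # (256, 256)
```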
C. Steerable Wavelet

The steerable pyramid is a particular realization of recursive multi-scale transforms. The decomposition is defined in the Fourier domain, where the subbands are polar-separable. The frequency tiling of the single-stage subband decomposition is shown in Figure 2. For the case of $n = 1$, $\{B_k(\omega),\ k = 0, 1\}$ are band-pass oriented filters, $H_0(\omega)$ is a non-oriented high-pass filter, and $L_1(\omega)$ is a narrow-band low-pass filter [10].

Figure 2. Idealized depiction of the single-stage first-derivative (i.e., two orientation band) steerable pyramid transform

The system diagram is illustrated in Figure 3. The steerable wavelet scheme includes two parts, decomposition and synthesis. The analysis side decomposes the image into low-pass and high-pass bands with the steerable filters $L_0$ and $H_0$. The low-pass band is further split into a set of band-pass subbands $B_0, \ldots, B_N$ and a lower low-pass subband $L_1$, which is subsampled by a factor of 2 along the x and y directions. The recursive structure is realized by repeating this stage on $L_i$ ($i$ denotes the preset decomposition scale). The synthesis side of the diagram reconstructs the image by upsampling the lower low-pass subband by a factor of 2 and adding it to the set of band-pass subbands and the high-pass subband. For $i = 1$, the reconstruction in the frequency domain is

$$\hat{X}(\omega) = \left\{ |H_0(\omega)|^2 + |L_0(\omega)|^2 \left( |L_1(\omega)|^2 + \sum_{k=0}^{n} |B_k(\omega)|^2 \right) \right\} X(\omega) + a.t. \qquad (7)$$

where $a.t.$ indicates the aliasing terms. To ensure elimination of the aliasing terms, the $L_1$ filter should be constrained to have zero response for frequencies higher than $\pi/2$ in both $\omega_x$ and $\omega_y$, and the transfer function of the system should equal one in order to avoid amplitude distortion [7,11].

Figure 3. The steerable wavelet decomposition and synthesis scheme
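The steerability idea at the core of this construction (Freeman and Adelson [6]) can be shown in a few lines: an oriented first-derivative-of-Gaussian filter at any angle $\theta$ is the linear combination $\cos\theta\,G_x + \sin\theta\,G_y$ of two fixed basis filters. The sketch below illustrates only this basis-filter property, not the paper's full polar-separable pyramid.

```python
# Sketch: steering a first-derivative-of-Gaussian filter to any angle
# as a linear combination of two basis filters (Freeman & Adelson [6]).
import numpy as np
from scipy.ndimage import convolve

def gaussian_derivative_bases(size: int = 9, sigma: float = 1.5):
    """Return the x- and y-derivative-of-Gaussian basis kernels."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    gx = -x / sigma**2 * g               # d/dx of the Gaussian
    gy = -y / sigma**2 * g               # d/dy of the Gaussian
    return gx, gy

def steered_response(image: np.ndarray, theta: float) -> np.ndarray:
    """Oriented band at angle theta: cos(theta)*Gx + sin(theta)*Gy."""
    gx, gy = gaussian_derivative_bases()
    rx = convolve(image, gx)
    ry = convolve(image, gy)
    return np.cos(theta) * rx + np.sin(theta) * ry   # steerability property

img = np.zeros((64, 64)); img[:, 32:] = 1.0          # vertical edge test image
for theta in (0.0, np.pi / 4, np.pi / 2):            # response peaks at theta = 0
    print(theta, np.abs(steered_response(img, theta)).max())
```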


D. Image fusion scheme based on STWT

This paper proposes an IHS-STWT based high-resolution image fusion model (Figure 4). The MS band images are first co-registered and resized to the panchromatic image with an interpolation method. The Intensity (I), Hue (H) and Saturation (S) images are then obtained by applying the IHS transform to the multi-spectral RGB image. The I band and the PAN image are fused based on steerable wavelet transform theory, and a new I band is generated for the inverse IHS transform. The steerable pyramid performs a polar-separable decomposition in the frequency domain, thus allowing independent representation of scale and orientation. The proposed algorithm shows advantages compared with the Mallat based and 'à trous' based algorithms; a sketch of the full data flow is given below.

Figure 4. Image fusion scheme based on steerable wavelet transformation
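Under stated assumptions, the data flow of this model can be sketched as follows. The linear (averaging) IHS variant is used for the forward and inverse transforms, and the steerable pyramid comes from the third-party pyrtools package (`SteerablePyramidFreq`); both that API and the coefficient keys are assumptions here, not the paper's own implementation.

```python
# Sketch of the STWT-IHS fusion flow. The steerable pyramid comes from the
# pyrtools package (SteerablePyramidFreq); its API is an assumption, and the
# linear IHS transform below is the common averaging variant.
import numpy as np
import pyrtools as pt

def rgb_to_ihs(rgb):
    """Linear IHS: I = (R+G+B)/3 plus two chromatic components v1, v2."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    v1 = (-r - g + 2 * b) / np.sqrt(6)
    v2 = (r - g) / np.sqrt(2)
    return i, v1, v2

def ihs_to_rgb(i, v1, v2):
    """Exact inverse of rgb_to_ihs."""
    r = i - v1 / np.sqrt(6) + v2 / np.sqrt(2)
    g = i - v1 / np.sqrt(6) - v2 / np.sqrt(2)
    b = i + 2 * v1 / np.sqrt(6)
    return np.stack([r, g, b], axis=-1)

def stwt_ihs_fusion(pan, ms_rgb, scales=2, orientations=4):
    i, v1, v2 = rgb_to_ihs(ms_rgb)        # MS already resampled to the PAN grid
    pyr_pan = pt.pyramids.SteerablePyramidFreq(pan, height=scales,
                                               order=orientations - 1)
    pyr_i = pt.pyramids.SteerablePyramidFreq(i, height=scales,
                                             order=orientations - 1)
    # Substitute the lowpass residual of PAN with that of the Intensity band,
    # keeping PAN's oriented detail and highpass bands (cf. Section IV).
    pyr_pan.pyr_coeffs['residual_lowpass'] = pyr_i.pyr_coeffs['residual_lowpass']
    fused_i = pyr_pan.recon_pyr()
    return ihs_to_rgb(fused_i, v1, v2)

rng = np.random.default_rng(1)
fused = stwt_ihs_fusion(rng.random((256, 256)), rng.random((256, 256, 3)))
print(fused.shape)                        # (256, 256, 3)
```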
III. QUANTITY EVALUATION OF THE FUSED IMAGE

The evaluation methods include visual evaluation and quantitative evaluation. The qualitative standard measures how well the multi-spectral characteristics and the high-resolution spatial information are maintained. For assessing the quality of an image after fusion, information entropy and standard deviation are applied to assess the spatial detail of the fused images, while the correlation coefficient, bias index and warping degree measure the distortion between the original and fused images in terms of spectral information. For all the proposed fusion algorithms, better results are obtained when the new fusion model is used to perform the fusion experiment [12]. These indices can be computed as sketched below.
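The entropy, standard deviation and correlation coefficient in the following sketch are standard; the exact formulas for the bias (deviation) index and warping (distortion) degree vary across the fusion literature, so the relative-difference and mean-absolute-difference forms used here are assumptions.

```python
# Sketch of the Section III evaluation indices; the DEI/DII formulas are
# common variants from the fusion literature, assumed rather than quoted.
import numpy as np

def shannon_entropy(img, bins=256):
    """Information entropy (SHE) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def evaluate(original, fused, eps=1e-12):
    a, f = original.astype(float), fused.astype(float)
    return {
        'SHE': shannon_entropy(f),
        'STD': f.std(),
        'COC': np.corrcoef(a.ravel(), f.ravel())[0, 1],  # correlation coeff.
        'DEI': np.mean(np.abs(f - a) / (a + eps)),       # bias/deviation index
        'DII': np.mean(np.abs(f - a)),                   # warping degree
    }

rng = np.random.default_rng(2)
a = rng.random((128, 128)) * 255                         # stand-in original band
f = a + rng.normal(0, 5, a.shape)                        # stand-in fused band
print(evaluate(a, f))
```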


IV. EXPERIMENTAL RESULTS

The QUICKBIRD image of Xuzhou taken on 20 Dec. 2005 is selected for testing the proposed fusion algorithm. A 1024×1024-pixel subset is used. Before the fusion process, the MS bands were interpolated to the 0.6 m pixel size of the PAN band. The decomposition scale is set to 2, since the 2.4 m resolution of the MS bands is four times the 0.6 m resolution of the PAN band.

First, the PAN band and the intensity band obtained by the IHS transform from MS bands 3 (R), 2 (G) and 1 (B) are decomposed into two scales with four orientations by the STWT scheme of Figure 3. This yields one high-frequency component, one low-frequency component and eight orientation bands over the two scales. The low band of the PAN band is then substituted with the low band of the Intensity band to obtain the fused Intensity band. Finally, the fused MS image is obtained by the inverse IHS transform.

Figure 5 shows that the spatial texture differs across the four decomposition orientations (from B10 to B13) for the same tested region, and that different detail textures appear on the two levels (e.g., compare B10 with B20). The high-frequency band (H1) of level 1 contains more detail texture information than all the other images, and the L2 image clearly contains the most low-frequency information.

Figure 5. The same region of the PAN band at different scales of the STWT decomposition (two scales and four orientations)

Figure 6 shows the fused I bands obtained with the different schemes. All show enhanced spatial resolution, but it is hard to tell the fused I bands apart by visual evaluation. However, the quantitative evaluation in Table I shows that the STWT-IHS fusion algorithm gives the best result and Mallat-IHS the worst among the three multi-resolution fusion algorithms; that is, the spatial information carried into the fused I band is enhanced before the IHS inversion to RGB.

Figure 6. Intensity band comparison for the fusion algorithms

TABLE I. QUANTITATIVE EVALUATION OF DISTINCT I BAND FUSION SCHEMES

       Mallat-IHS   SWT-IHS   STWT-IHS
SHE      5.3789      5.3866     5.3949
STD     68.8973     68.2759    67.1562
COC      0.8846      0.8945     0.9161
DEI      0.4408      0.4241     0.3841
DII     22.7784     21.5879    19.2109
Figure 7 shows the obvious improvement of both spatial and spectral characteristics after fusion. Comparing the RGB composites of any local region (e.g., Figure 8) shows that the Mallat-IHS fusion model gives a worse visual effect than the other two multi-scale fusion models, while it is hard to tell which of the SWT-IHS and STWT-IHS fusion models is better.

Figure 7. Visual analysis of the color compositions (from left to right: QB321, IHS, Mallat, SWT, STWT)

The quantitative indices resulting from the comparison of the merged RGB band images produced by the different fusion methods with the original RGB image are given in Table II.

Shannon entropy (SHE) and standard deviation (STD) values describe the spatial detail of an image. A larger SHE and a lower STD imply that the fusion method better reflects the spatial information of the original image while increasing the spatial resolution. From Table II, the Shannon entropy obtained by the STWT-IHS fusion algorithm is larger than that of the other three algorithms for all three bands, which indicates that STWT-IHS carries the most information. It can also be observed that the standard deviation obtained by the STWT-IHS fusion method is the lowest. All the evaluation factors show that the Mallat based fusion algorithm is the worst in all bands.

To measure the spectral quality of the different fusion methods, the values of the spectral correlation coefficient (COC), deviation index (DEI) and distortion index (DII) of the images merged with Mallat-IHS, SWT-IHS and STWT-IHS are displayed in Table II. These parameters quantify the differences in spectral information between the merged RGB images and the original image. The lower the values of DEI and DII, the better the spectral quality of the merged image; the opposite holds for COC. Table II shows that the COC obtained with STWT-IHS is larger, and its DEI and DII values are smaller, than those of Mallat-IHS and SWT-IHS, so the spectral quality of the images merged by STWT-IHS is much better than that of the other three methods.



Figure 8. Local region of the fused RGB composite image: (a) IHS fusion; (b) Mallat-IHS fusion; (c) SWT-IHS fusion; (d) STWT-IHS fusion

TABLE II. QUANTITATIVE EVALUATION OF FUSION SCHEMES

            RGB image      IHS   Mallat-IHS   SWT-IHS   STWT-IHS
SHE
Band1(B)       3.6244   5.2160       5.3088    5.3182     5.3238
Band2(G)       4.0252   5.3144       5.3923    5.4010     5.4099
Band3(R)       4.0757   5.2955       5.3753    5.3830     5.3891
STD
Band1(B)      64.8315  70.1017      70.5725   69.9478    68.8558
Band2(G)      65.9355  69.3329      70.5725   69.9986    69.0055
Band3(R)      62.5792  67.1118      68.2436   67.6258    66.4250
COC
Band1(B)            —   0.7814       0.8925    0.9022     0.9222
Band2(G)            —   0.7576       0.8939    0.9035     0.9239
Band3(R)            —   0.7237       0.8707    0.8810     0.9045
DEI
Band1(B)            —   0.5214       0.3043    0.2892     0.2589
Band2(G)            —   0.9168       0.4504    0.4343     0.3931
Band3(R)            —   1.0441       0.5016    0.4835     0.4407
DII
Band1(B)            —  31.5076      21.9252   20.7449    18.4342
Band2(G)            —  33.7126      22.3920   21.2247    18.8410
Band3(R)            —  35.1400      23.9138   22.7004    20.2596

V. CONCLUSION

It is necessary to develop appropriate fusion models for high-resolution image information enhancement. An image fusion model based on STWT and IHS theory is presented in this paper. The fusion of the QUICKBIRD PAN band with the MS bands yields more spatial and spectral information, and the visual and quantitative evaluations show that the proposed fusion algorithm is the best of the four tested fusion models. The presented method can also be applied to high-resolution images from other sensors and to other application objectives.

ACKNOWLEDGMENT

This work is supported in part by the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 200802901516, 200802900501), in part by the National Natural Science Foundation of China (No. 40774010 and 40904004), and by the Project Sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.

REFERENCES

[1] Zhong Z. Y. and Chen Y., "On application of wavelet transformation to multisource information fusion," Acta Geodaetica et Cartographica Sinica, vol. 31, 2002, pp. 56-60.
[2] Wang H. H., Peng J. X. and Wu W., "Remote sensing image fusion using wavelet packet transform," Journal of Image and Graphics, vol. 7, no. 9, 2002, pp. 932-936.
[3] Pohl C. and Van Genderen J. L., "Multisensor image fusion in remote sensing: concepts, methods and applications," Int. J. Remote Sensing, vol. 19, no. 5, 1998, pp. 823-854.
[4] Maria G. A., José L. S. and Raquel G. C., "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, 2004, pp. 1291-1299.
[5] Pohl C. and Van Genderen J. L., "Multisensor image fusion in remote sensing: concepts, methods and applications," Int. J. Remote Sensing, vol. 19, no. 5, 1998, pp. 823-854.
[6] Freeman W. T. and Adelson E. H., "The design and use of steerable filters," IEEE Trans. on PAMI, vol. 13, Sept. 1991, pp. 891-906.
[7] Núñez J., Otazu X., Fors O. and Prades A., "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, 1999, pp. 1204-1211.
[8] González-Audícana M., Otazu X., Fors O. and Seco A., "Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images," International Journal of Remote Sensing, vol. 26, no. 3, 2005, pp. 595-614.
[9] Yocky D. A., "Image merging and data fusion by means of the discrete two-dimensional wavelet transform," Journal of the Optical Society of America A, vol. 12, 1995, pp. 1834-1841.
[10] Unser M., "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Processing, vol. 4, no. 9, 1995, pp. 1549-1560.
[11] Karasaridis A. and Simoncelli E., "A filter design technique for steerable pyramid image transforms," Proc. ICASSP-96, May 7-10, 1996, Atlanta, GA.
[12] Wang J., Zhang J. X., Liu Z. J. and Gao J. X., "High resolution image merging based on EMD," Journal of Remote Sensing, vol. 11, Jan. 2007, pp. 55-61.
