A CRITICAL COMPARISON OF PANSHARPENING ALGORITHMS

G. Vivone (1), L. Alparone (2), J. Chanussot (3,4), M. Dalla Mura (3), A. Garzelli (5), G. Licciardi (3), R. Restaino (1), L. Wald (6)

(1) Department of Information Engineering, Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Italy.
(2) Department of Information Engineering (DINFO), University of Florence, Italy.
(3) GIPSA-Lab, Grenoble Institute of Technology, France.
(4) Faculty of Electrical and Computer Engineering, University of Iceland.
(5) Department of Information Engineering and Mathematical Sciences, University of Siena, Italy.
(6) Center Observation, Impacts, Energy, MINES ParisTech, France.

ABSTRACT

In this paper, state-of-the-art and advanced methods for multispectral pansharpening are reviewed and evaluated on two very high resolution datasets acquired by IKONOS-2 (four bands) and WorldView-2 (eight bands). The experimental analysis highlights the performances of the two main pansharpening approaches, i.e., component substitution and multiresolution analysis.

Index Terms— Fusion, Pansharpening, Remote Sensing.

1. INTRODUCTION

Physical limits of optical imaging devices impose a trade-off between the achievable spatial and spectral resolutions. This entails that the PANchromatic (PAN) image has no spectral diversity, while a MultiSpectral (MS) image exhibits a lower spatial resolution than the PAN and hence contains fewer spatial details. Pansharpening is a data fusion process whose goal is to enhance the spatial resolution of the MS data by including the spatial details contained in the PAN image.

Most pansharpening methods proposed in the literature follow a general protocol composed of two operations: 1) extract from the PAN image the high-resolution geometrical details of the scene that are not present in the MS image; 2) incorporate such spatial information into the low-resolution MS bands (interpolated to meet the spatial scale of the PAN image) by properly modeling the relationships between the MS bands and the PAN image.

This paper aims at providing a critical comparison among classical pansharpening approaches applied to two different datasets. The credited Wald protocol is used for the assessment procedure, and some useful guidelines for the comparison are given.

2. A CRITICAL REVIEW OF FUSION METHODS

Most recent studies [1] divide the principal image fusion methods into two main classes, according to the way the details are extracted from the PAN image (see Fig. 1). Component Substitution (CS) techniques extract the spatial details by a pixelwise difference between the PAN image and a nonzero-mean component obtained from a spectral transformation of the MS bands, without any spatial filtering of the former. They are referred to as CS methods, since the described process is equivalent to substituting such a component with the PAN image and applying the reverse transformation to produce the sharpened MS bands [2]. The techniques belonging to the MultiResolution Analysis (MRA) class employ linear space-invariant digital filtering of the PAN image in order to extract the spatial details that will be added to the MS bands [3]. In both cases, the injection of spatial details into the interpolated MS bands may be weighted by gains that differ for each band and possibly vary at each pixel.
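To make the distinction concrete, the following Python fragment is a minimal sketch of the two detail-extraction schemes; it is not the authors' implementation. CS computes the details as the difference between the PAN and a spectral combination of the interpolated MS bands, whereas MRA computes them as the difference between the PAN and a low-pass-filtered version of itself. The arrays ms and pan, the weights and gains vectors, and the Gaussian filter (a stand-in for the MTF-matched filters discussed later) are illustrative assumptions.

    # Minimal sketch of the CS and MRA fusion schemes described above.
    # Assumptions: ms is an (N, H, W) array of MS bands already interpolated
    # to the PAN scale, pan is an (H, W) array, weights and gains are
    # illustrative band-wise coefficients.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def cs_fusion(ms, pan, weights, gains):
        """Component substitution: details = PAN minus a spectral combination of MS."""
        intensity = np.tensordot(weights, ms, axes=1)   # synthetic intensity component
        details = pan - intensity                       # no spatial filtering of the PAN
        return ms + gains[:, None, None] * details      # band-wise detail injection

    def mra_fusion(ms, pan, gains, sigma=2.0):
        """Multiresolution analysis: details = PAN minus a low-pass-filtered PAN."""
        pan_low = gaussian_filter(pan, sigma)           # stand-in for an MTF-matched filter
        details = pan - pan_low                         # high-frequency spatial details
        return ms + gains[:, None, None] * details      # band-wise detail injection

In both sketches the injection gains are scalars per band; in more refined models they may be full maps varying at each pixel, as noted above.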
Fig. 1: Flowcharts of the two main pansharpening approaches. (a): based on a spectral combination of bands, without filtering the Pan image (component/projection substitution); (b): based on filtering the Pan image (MultiResolution Analysis).

3. EXPERIMENTAL RESULTS

In this paper we follow the validation protocol for data fusion assessment at reduced scale proposed in [4]. This procedure uses the available MS image as the reference for validating the pansharpening algorithms, which are applied to a spatially degraded version of the original datasets. Despite the questionable assumption of scale invariance, this procedure allows the use of several reliable quality indexes. In more detail, we report the values of a classical spectral quality index, the Spectral Angle Mapper (SAM), and of two indexes for vector-valued images, accounting for both spectral and spatial quality: Q2n [5] and ERGAS [6]. The optimal values are 0 for SAM and ERGAS and 1 for Q2n.
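For the reader's convenience, the fragment below is a minimal sketch of how the two scalar indexes can be computed from a reference MS image and a fused product; it is not the evaluation code used for Table 1, and Q2n is omitted because it relies on hypercomplex algebra [5]. The function names, the array shapes, and the default resolution ratio are illustrative assumptions.

    # Minimal sketch of the SAM and ERGAS indexes.
    # Assumptions: ref and fus are (N, H, W) arrays holding the reference MS
    # image and the fused product, with N bands and no zero-valued bands.
    import numpy as np

    def sam_degrees(ref, fus, eps=1e-12):
        """Spectral Angle Mapper: mean angle (degrees) between spectral vectors."""
        num = np.sum(ref * fus, axis=0)
        den = np.sqrt(np.sum(ref**2, axis=0) * np.sum(fus**2, axis=0)) + eps
        angles = np.arccos(np.clip(num / den, -1.0, 1.0))
        return np.degrees(angles.mean())

    def ergas(ref, fus, ratio=1/4):
        """ERGAS = 100 * ratio * sqrt(mean over bands of (RMSE_k / mean_k)^2),
        where ratio is the PAN-to-MS pixel size ratio (1/4 for both datasets here)."""
        rmse2 = np.mean((ref - fus) ** 2, axis=(1, 2))     # squared RMSE per band
        means2 = np.mean(ref, axis=(1, 2)) ** 2            # squared band means
        return 100.0 * ratio * np.sqrt(np.mean(rmse2 / means2))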
Two datasets of 300 × 300 pixels have been employed. The first (China dataset) was acquired over the Sichuan region, China, by IKONOS. This sensor captures four bands in the visible and near-infrared spectrum with a spatial resolution of 4 × 4 m and a panchromatic channel with a spatial resolution of 1 × 1 m. The second dataset, acquired over Rome, Italy, and named Rome dataset, was collected by the WorldView-2 sensor and is composed of a panchromatic channel and eight MS bands, with resolutions of 0.5 m and 2 m, respectively. Thus, in both cases the resolution ratio between PAN and MS is 4.

We compared several pansharpening methods. The Fast IHS (indicated as IHS) [7], Brovey Transform (Brovey), Principal Component Analysis (PCA), Gram-Schmidt (GS), Gram-Schmidt Adaptive (GSA) [2], Band-Dependent Spatial-Detail (BDSD) [8], and Partial Replacement Adaptive CS (PRACS) [9] belong to the CS class. Within the MRA group we selected High Pass Filtering (HPF), Box-Window High Pass Modulation, also called Smoothing Filter-based Intensity Modulation (SFIM) [10], the Generalized Laplacian Pyramid [11] with Modulation Transfer Function (MTF)-matched filter (MTF-GLP) [12], the Generalized Laplacian Pyramid with MTF-matched filter and Context-Based Decision injection scheme (MTF-GLP-CBD) [13], the Gaussian MTF-matched filter [12] with HPM injection model (MTF-GLP-HPM) [14, 15], the Decimated Wavelet Transform using the additive injection model (Indusion) [16], the Additive À Trous Wavelet Transform (ATWT) [15], the À Trous Wavelet Transform using Model 3 (ATWT-M3) [3], and the Additive Wavelet Luminance Proportional (AWLP) [17]. In the following, EXP indicates the MS image interpolated at the PAN scale by using a polynomial kernel with 23 coefficients [11].
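To illustrate the MTF-matched filtering and the HPM injection model named above, the fragment below is a minimal Python sketch under stated assumptions: a nominal MTF gain at Nyquist of 0.3 and a plain Gaussian low-pass filter in place of a full Generalized Laplacian Pyramid. It is not the implementation of [12, 14, 15]; the actual gains are sensor- and band-specific.

    # Minimal sketch of an MTF-matched Gaussian filter and the HPM injection model.
    # Assumptions: ms is an (N, H, W) array of MS bands interpolated to the PAN
    # scale, pan is an (H, W) array; nyquist_gain = 0.3 is an illustrative value.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_mtf_sigma(nyquist_gain=0.3, ratio=4):
        """Std. dev. of a Gaussian whose frequency response equals nyquist_gain
        at the MS Nyquist frequency, i.e. 1/(2*ratio) cycles per PAN pixel."""
        fc = 1.0 / (2.0 * ratio)
        return np.sqrt(-np.log(nyquist_gain) / (2.0 * np.pi**2 * fc**2))

    def mtf_glp_hpm(ms, pan, nyquist_gain=0.3, ratio=4, eps=1e-12):
        """HPM injection: each interpolated MS band is modulated by the ratio
        between the PAN and its MTF-matched low-pass version. A true GLP would
        also decimate and re-interpolate the filtered PAN; omitted for brevity."""
        pan_low = gaussian_filter(pan, gaussian_mtf_sigma(nyquist_gain, ratio))
        return ms * (pan / (pan_low + eps))   # contrast-based (multiplicative) injection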
Table 1: Quantitative results: on the left, the China dataset; on the right, the Rome dataset.

                     China dataset                     Rome dataset
                 Q4       SAM(°)   ERGAS          Q8       SAM(°)   ERGAS
Reference        1        0        0              1        0        0
EXP              0.7398   4.4263   3.8471         0.7248   4.9263   5.4171
PCA              0.8578   3.5433   2.6715         0.8169   5.2153   4.4128
IHS              0.7308   4.9892   3.5766         0.7439   5.1455   4.1691
Brovey           0.7314   4.4263   3.1722         0.7487   4.9263   4.1407
BDSD             0.8869   2.9123   2.4124         0.8762   4.8717   3.8619
GS               0.8500   3.5304   2.7982         0.8335   4.8592   4.0144
GSA              0.8756   2.9889   2.5521         0.8907   4.1415   3.4062
PRACS            0.8793   3.1514   2.5745         0.8878   4.6678   3.6768
HPF              0.8704   3.2533   2.6156         0.8889   4.2813   3.5459
SFIM             0.8730   3.2031   2.5778         0.8950   4.0874   3.3979
Indusion         0.8043   3.9059   3.2846         0.8030   5.1415   4.8864
ATWT             0.8791   3.0786   2.5178         0.9013   4.1117   3.3237
AWLP             0.8830   2.9424   2.4073         0.9011   4.5146   3.3572
ATWT-M3          0.8198   4.3388   3.3357         0.8379   5.1042   4.3684
MTF-GLP          0.8787   3.0387   2.5106         0.9016   4.0957   3.2982
MTF-GLP-HPM      0.8819   3.0041   2.4624         0.9092   3.8871   3.1005
MTF-GLP-CBD      0.8780   2.9673   2.5067         0.8940   4.1125   3.3479

The two classes of methods show complementary spectral and spatial features, as can be noted from the quantitative results reported in Tab. 1 and from a visual inspection of details of the two datasets (see Figs. 2 and 3). CS approaches yield fused products with accurate spatial details, but often showing spectral distortions, whereas MRA methods typically preserve the spectral content better, but may produce poorer results in terms of spatial enhancement. The best solutions are given by MRA methods designed for improving the spatial quality (e.g., based on MTF-like filtering) and by CS methods in which the combination of bands aims at matching the spectral response of the PAN image, thereby preserving the spectral information of the original MS in the pansharpened product (e.g., GSA, BDSD, PRACS). Spectral matching, however, becomes critical as the number of bands increases, as shown by the superior performances achieved by MRA on the WorldView-2 data.

4. CONCLUSIONS

A comparison of several pansharpening methods on two very high resolution datasets has been presented. The validation, performed according to Wald's protocol at reduced scale, highlights the characteristics of the two main classes of methods, based on component substitution and multiresolution analysis, respectively. Future developments will include the full-scale validation of the fusion methods according to the QNR protocol [18] and a more detailed discussion of the results.

5. REFERENCES

[1] S. Baronti, B. Aiazzi, M. Selva, A. Garzelli, and L. Alparone, "A theoretical analysis of the effects of aliasing and misregistration on pansharpened imagery," IEEE J. Sel. Top. Signal Process., vol. 5, no. 3, pp. 446–453, Jun. 2011.

[2] B. Aiazzi, S. Baronti, and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS+Pan data," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3230–3239, Oct. 2007.

[3] T. Ranchin and L. Wald, "Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation," Photogramm. Eng. Remote Sens., vol. 66, no. 1, pp. 49–61, Jan. 2000.

[4] L. Wald, T. Ranchin, and M. Mangolini, "Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images," Photogramm. Eng. Remote Sens., vol. 63, no. 6, pp. 691–699, Jun. 1997.

[5] A. Garzelli and F. Nencini, "Hypercomplex quality assessment of multi/hyper-spectral images," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 4, pp. 662–665, Oct. 2009.

[6] L. Wald, Data Fusion: Definitions and Architectures — Fusion of Images of Different Spatial Resolutions, Les Presses de l'École des Mines, Paris, France, 2002.

[7] T. M. Tu, P. S. Huang, C. L. Hung, and C. P. Chang, "A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery," IEEE Geosci. Remote Sens. Lett., vol. 1, no. 4, pp. 309–312, Oct. 2004.

[8] A. Garzelli, F. Nencini, and L. Capobianco, "Optimal MMSE pan sharpening of very high resolution multispectral images," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 1, pp. 228–236, Jan. 2008.

[9] J. Choi, K. Yu, and Y. Kim, "A new adaptive component-substitution-based satellite image fusion by using partial replacement," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 1, pp. 295–309, Jan. 2011.

[10] J. G. Liu, "Smoothing Filter-based Intensity Modulation: A spectral preserve image fusion technique for improving spatial details," Int. J. Remote Sens., vol. 21, no. 18, pp. 3461–3472, Dec. 2000.

[11] B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, "Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10, pp. 2300–2312, Oct. 2002.

[12] B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, "MTF-tailored multiscale fusion of high-resolution MS and Pan imagery," Photogramm. Eng. Remote Sens., vol. 72, no. 5, pp. 591–596, May 2006.

[13] L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. M. Bruce, "Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3012–3021, Oct. 2007.

[14] J. Lee and C. Lee, "Fast and efficient panchromatic sharpening," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 1, pp. 155–163, Jan. 2010.

[15] G. Vivone, R. Restaino, M. Dalla Mura, G. Licciardi, and J. Chanussot, "Contrast and error-based fusion schemes for multispectral image pansharpening," IEEE Geosci. Remote Sens. Lett., vol. 11, no. 5, pp. 930–934, May 2014.

[16] M. M. Khan, J. Chanussot, L. Condat, and A. Montanvert, "Indusion: Fusion of multispectral and panchromatic images using induction scaling technique," IEEE Geosci. Remote Sens. Lett., vol. 5, no. 1, pp. 98–102, Jan. 2008.

[17] X. Otazu, M. González-Audícana, O. Fors, and J. Núñez, "Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 10, pp. 2376–2385, Oct. 2005.

[18] L. Alparone, B. Aiazzi, S. Baronti, A. Garzelli, F. Nencini, and M. Selva, "Multispectral and panchromatic data fusion assessment without reference," Photogramm. Eng. Remote Sens., vol. 74, no. 2, pp. 193–200, Feb. 2008.

Fig. 2: China dataset: (a) reference image; (b) EXP; (c) PCA; (d) IHS; (e) Brovey; (f) BDSD; (g) GS; (h) GSA; (i) PRACS; (j) HPF; (k) SFIM; (l) Indusion; (m) ATWT; (n) AWLP; (o) ATWT-M3; (p) MTF-GLP; (q) MTF-GLP-HPM; (r) MTF-GLP-CBD.

Fig. 3: Rome dataset: (a) reference image; (b) EXP; (c) PCA; (d) IHS; (e) Brovey; (f) BDSD; (g) GS; (h) GSA; (i) PRACS; (j) HPF; (k) SFIM; (l) Indusion; (m) ATWT; (n) AWLP; (o) ATWT-M3; (p) MTF-GLP; (q) MTF-GLP-HPM; (r) MTF-GLP-CBD.