Image Features Extraction and Fusion Based On Joint Sparse Representation

IEEE Journal of Selected Topics in Signal Processing, Vol. 5, No. 5, September 2011
Abstract—In this paper, a novel joint sparse representation-based image fusion method is proposed. Since the sensors observe related phenomena, the source images are expected to possess common and innovation features. We use sparse coefficients as image features. The source image is represented with the common and innovation sparse coefficients by joint sparse representation. The sparse coefficients are consequently weighted by the mean absolute values of the innovation coefficients. Furthermore, since sparse representation has been significantly successful in the development of image denoising algorithms, our method can carry out image denoising and fusion simultaneously while the images are corrupted by additive noise. Experimental results show that the performance of the proposed method is better than that of other methods in terms of several metrics, as well as in visual quality.

Index Terms—Features extraction, image fusion, joint sparse representation, K-SVD.

I. INTRODUCTION

IMAGE fusion is a process of combining several source images captured by multiple sensors into a fused image that contains all important information. Multisensor data often presents complementary information about the region surveyed, so image fusion provides an effective method to enable comparison and analysis of such data [1]. Image fusion is widely used in remote sensing for combining high-resolution panchromatic and low-resolution multispectral images, in medical diagnosis for obtaining images having both soft tissue and hard tissue information, and in target recognition for combining visible and infrared images [2].

From the 1980s to now, many well-known fusion algorithms have been proposed. Most of these methods consist of three steps: extracting image features, fusing these features with a certain rule, and constructing a fused image. Nikolov et al. [3] propose a classification of image fusion algorithms into spatial domain and transform domain techniques. The spatial domain method uses the source image itself (or a partial image) as image features, while the transform coefficients of the source image in some bases [such as the discrete Fourier transform (DFT), the mother wavelet, and the cosine bases of the discrete cosine transform (DCT)] are used as image features in the transform domain method. The fusion rules mainly include "choose-max" and "weighted average." The wavelet-based image fusion method "wavelet maxima" [4] selects the wavelet coefficients with the largest activity level at each pixel location. Mahbubur Rahman et al. [2] calculate the weighted mean of the wavelet coefficients of the source images for image fusion, where the weights depend on the contrast of the source images. The authors in [5] and [6] use the mean absolute values of the ICA coefficients to determine the weights and calculate the weighted mean of the ICA coefficients for image fusion. The image fusion method based on sparse representation [7] fuses sparse coefficients with the "choose-max" rule.

In the "choose-max" rule, only the features with the largest activity level are transferred into the fused image and the other features are discarded. The fused image obtained with this rule tends to be oversharp and less smooth. In the "weighted average" rule, all input images are taken into consideration, and some images with more information contribute more than others. Usually, fused images obtained using the "weighted average" rule have better performance in a particular aspect [8]. However, this rule has a serious drawback which is difficult to overcome. The input images represent the same region with different sensors, so they can be expected to possess some correlation. Each image contains common and innovation features [2], [9], [10]. With the "weighted average" rule, the fused image is constructed by the weighted mean of both the common and innovation features. In the fused image, the ratio of weights between each innovation feature and the common features drops, so the innovation features in the fused image appear less visible than in the source images. A detailed statement is given in Section II. Intuitively, the common and innovation features should be processed separately.

In this paper, inspired by distributed compressed sensing [11], we propose a novel image fusion method based on joint sparse representation (JSR). It can extract and separate the common and innovation features of the source images and fuse these features separately. Furthermore, since sparse representation has a strong ability to denoise, the proposed algorithm can simultaneously carry out denoising and fusion of the images while they are corrupted by additive noise [12]. The rest of the paper is organized as follows. Section II gives a detailed statement of image fusion with different fusion rules. Section III presents the description of the proposed method, whereas Section IV contains experimental results obtained by using the proposed method and a comparison with state-of-the-art methods.

Manuscript received October 14, 2010; revised December 24, 2010; accepted January 20, 2011. Date of publication February 04, 2011; date of current version August 17, 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Michael Elad.
N. Yu is with the Electronic and Information Engineering College, Dalian University of Technology, Dalian 116024, China, and also with the Electronic and Information Engineering College, Northeast Petroleum University, Daqing 163318, China (e-mail: [email protected]).
T. Qiu, F. Bi, and A. Wang are with the Electronic and Information Engineering College, Dalian University of Technology, Dalian 116024, China (e-mail: [email protected]).
Digital Object Identifier 10.1109/JSTSP.2011.2112332
1932-4553/$26.00 © 2011 IEEE
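The dilution of innovation features under the "weighted average" rule, discussed above, can be sketched numerically. This is a toy illustration with invented feature vectors, not data from the paper:

```python
import numpy as np

# Toy model: each source's features are a shared "common" component
# plus a per-source "innovation" component.
rng = np.random.default_rng(0)
z_common = rng.normal(size=8)
z_innov = [np.zeros(8), np.zeros(8)]
z_innov[0][2] = 5.0   # innovation unique to source 1
z_innov[1][6] = -4.0  # innovation unique to source 2

y = [z_common + z for z in z_innov]

# "Weighted average" rule with weights summing to 1.
w = np.array([0.5, 0.5])
y_fused = w[0] * y[0] + w[1] * y[1]

# The common part survives with weight w1 + w2 = 1, but each
# innovation is scaled by its own weight only:
print(y_fused[2] - z_common[2])   # ~2.5 = 0.5 * 5.0, innovation halved
print(y_fused[6] - z_common[6])   # ~-2.0 = 0.5 * -4.0
```

This is the ratio drop described above: relative to the common component, every innovation feature is attenuated by its own weight.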
TABLE I
AVERAGE VALUES OF THE CORRELATIONS BETWEEN THE LOCAL NEIGHBORING PIXELS OF THE SOURCE IMAGES
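The kind of local correlation summarized in Table I can be estimated, for example, by correlating each 8×8 window with its horizontal neighbor. The paper does not spell out the exact estimator, so the function below is an assumption for illustration, run on synthetic images rather than the actual source images:

```python
import numpy as np

def mean_neighbor_correlation(img, win=8):
    """Average Pearson correlation between each win x win window and its right neighbor."""
    corrs = []
    for r in range(0, img.shape[0] - win + 1, win):
        for c in range(0, img.shape[1] - 2 * win + 1, win):
            a = img[r:r + win, c:c + win].ravel()
            b = img[r:r + win, c + win:c + 2 * win].ravel()
            if a.std() > 0 and b.std() > 0:
                corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs))

rng = np.random.default_rng(0)
smooth = np.add.outer(np.arange(64.0), np.arange(64.0))  # smooth gradient image
noise = rng.normal(size=(64, 64))                        # uncorrelated noise

print(mean_neighbor_correlation(smooth))  # ~1: neighbor windows differ only by an offset
print(mean_neighbor_correlation(noise))   # near 0: no spatial structure
```

Natural multisensor images fall between these two extremes, which is the joint structure that JSR exploits.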
II. IMAGE FUSION WITH THE "CHOOSE-MAX" AND "WEIGHTED AVERAGE" RULES

The image fusion process consists of three main steps: extracting the features of each input image, combining them into composite features following a specific rule, and constructing a fused image.

Let x_i (i = 1, ..., K) represent the source images and y_i represent the image features. y_i is a function of x_i that can be written as

    y_i = f(x_i).    (1)

In spatial domain image fusion, y_i is the ith source image itself (or a partial image), while in transform domain image fusion, y_i is the coefficient sequence of the source image in some bases. With the "choose-max" rule, the fused image can be constructed from the features with the largest activity level. This rule is prone to make the fused image oversharp and less smooth, and a lot of information is discarded, because only the features with the largest activity level are transferred into the fused image. The "weighted average" rule is an extension of the "choose-max" rule. Let w_i represent the corresponding weights, with Σ_i w_i = 1; then with the "weighted average" rule, the fused features y_F can be obtained by

    y_F = Σ_i w_i y_i.    (2)

Since the sensors presumably observe related phenomena, the ensemble of signals they acquire can be expected to possess some joint structure, or correlation [10], [11]. Table I shows the average values of the correlations obtained from the local neighboring pixels using an 8×8 window for a few source images. It is seen from Table I that a significant correlation exists among the source images.

Because of the correlation among the ensemble of images, the features of each image can be viewed as the combination of two components: the common component z_C, which is present in all images, and the innovation component z_i, which is unique to each image. So

    y_i = z_C + z_i.    (3)

Then (2) can be expressed as

    y_F = Σ_i w_i (z_C + z_i) = z_C + Σ_i w_i z_i.    (4)

According to (1), (3), and (4), the ratio of the weights between each innovation component and the common component in the fused image is w_i, which is smaller than 1 (equal to 1 only if all the other weights are 0), while this ratio is 1 in the source image. So the innovation features in the fused image are more blurred than in the source images. In addition, the weight w_i should not be determined by y_i; it should be determined by z_i. In order to overcome this drawback, the common and innovation features must be separated before calculating the weighted mean of the source image features. JSR is verified to be a good method to solve this problem when the source images are sparse or compressible [13].

III. FUSION OF NOISY IMAGES WITH JOINT SPARSE REPRESENTATION

Let the pixels of the ideal images x_i to be fused be corrupted by additive zero-mean white homogeneous Gaussian noise n_i with known variances σ_i². The measured images are thus

    x̃_i = x_i + n_i.    (5)

The authors in [13] for the first time gave the term "joint sparsity," the sparsity of an entire signal ensemble. Three joint sparsity models (JSMs) that apply in different situations are presented: JSM-1 (sparse common component plus innovations), JSM-2 (common sparse supports), and JSM-3 (nonsparse common component plus sparse innovations) [10], [11]. JSM-1 is the most suitable for the problem of image fusion. Because joint sparsity exists among the source images, we can use JSR to extract the common and innovation features, and then combine these features separately for image fusion. The proposed method starts by training an overcomplete dictionary from the entire signal ensemble with K-SVD [14]. Then, the common and innovation coefficients are found and denoised simultaneously by JSR, and the fused coefficients are evaluated from these denoised coefficients using a suitable rule. Finally, the fused coefficients are inverse transformed to obtain the fused image.

A. Dictionary Training With K-SVD

There are two ways to choose an overcomplete dictionary. The first is to use a prespecified transform matrix, such as overcomplete wavelets, curvelets, contourlets, or short-time Fourier transforms, which leads to simple and fast algorithms for evaluating the sparse representation. The success of such dictionaries depends on how well they describe the signals sparsely. The second is to design a dictionary by training, as in PCA [15], MOD [16], and K-SVD [14]; trained dictionaries are considered to have the potential to outperform fixed dictionaries. The K-SVD algorithm has been widely recognized [17]–[19], so in this paper we propose to learn the dictionary with K-SVD. Since the K-SVD dictionary learning process has a built-in noise rejection capability (see the experiments reported in [14]), one can obtain a clean dictionary from the noisy images.

Let x̃_i represent a corrupted image. Using the sliding window technique, each image is divided into patches, and each patch is ordered lexicographically as a vector. The vectors corresponding to all the patches in image x̃_i are gathered into one matrix V_i, and V = [V_1, V_2, ..., V_K]. The overcomplete dictionary D is trained from the joint matrix V (shown in Fig. 1) by approximating the solution of (6) with the K-SVD algorithm:

    min_{D, Λ} ||V − DΛ||_F²   subject to   ||λ_k||_0 ≤ T_0 for every column λ_k of Λ.    (6)

The estimator for denoising the sparse coefficients is built by solving (9); then, (9) can be simplified as in (10).

C. Reconstruction

Image fusion is the process of detecting salient features in the source images and fusing these details into a synthetic image. In order to account for the contributions from all possible images to
Fig. 2. Separation of the common and innovation features using multisensor images 051 and UnCamp corrupted with σ = [20, 15]. (a1), (b1), (a2), (b2) The source images. (c1) and (c2) The images with common features. (d1) and (d2) The images with innovation features of (a1) and (a2). (e1) and (e2) The images with innovation features of (b1) and (b2).
be fused, the final denoised estimate of the sparse coefficient vector of the fused image patch is written as

    α_F = w_C α_C + Σ_{i=1}^{K} w_i α_i    (12)

where w_1, ..., w_K and w_C are the weight factors. Assume w_p = max_i w_i; then the pth innovation is the primary feature and the others are the subsidiary features. The ratio of the weights between the primary and common features should be 1, so w_C is set as w_p. It seems that the projection that forms the result of fusion will have some scale deformation. However, because of the sparsity of the coefficient vector, we can prove that our method will not result in scale deformation. A detailed proof is given in the Appendix.

In [7] and [8], the mean absolute value (ℓ1-norm) of each patch (arranged in a vector) in the transform domain has been calculated as an activity indicator. Following the method in [7] and [8], we use the ℓ1-norm of the innovation coefficient vector as the activity indicator of each patch:

    A_i = (1/T) ||α_i||_1    (13)

where T is the length of the coefficient vector. The weights should emphasize sources that exhibit more intense activity, as represented by A_i:

    w_i = A_i / Σ_j A_j.    (14)

Then, (12) can be simplified as follows:

    α_F = w_p α_C + Σ_{i=1}^{K} w_i α_i.    (15)

The fused image matrix can be reconstructed by

    V_F = D A_F    (16)

where A_F collects the fused sparse coefficient vectors of all patches. Finally, we can transform the matrix V_F into image patches and synthesize the fused image by averaging the image patches in the same order they were selected during the analysis step.

IV. EXPERIMENTS

In this section, the proposed fusion method is compared with three state-of-the-art methods: the discrete wavelet transform (DWT) [4], ICA [8], and sparse representation (SR)-based [7] algorithms. In the DWT method, a five-level decomposition is used and the fusion is performed by selecting the coefficients with maximum absolute values. The ICA method is trained using 10,000 training patches selected randomly from a set of images with similar content, and the 40 most significant bases obtained by training are selected using the PCA algorithm. The SR method applies the fixed dictionary, an overcomplete
Fig. 3. Visual comparison of the performance of the fusion methods using multisensor images 051 corrupted with σ = [20, 15]. (a) and (b) The source images. (c) Fused image, DWT fusion method. (d) Fused image, ICA fusion method. (e) Fused image, SR fusion method. (f) Fused image, the proposed method.

Fig. 4. Visual comparison of the performance of the fusion methods using IR and visible images UnCamp corrupted with σ = [20, 15]. (a) Input IR image. (b) Input visible image. (c) Fused image, DWT fusion method. (d) Fused image, ICA fusion method. (e) Fused image, SR fusion method. (f) Fused image, the proposed method.
separable version of the DCT dictionary, and the sparse coefficients are estimated by OMP. Our method trains an overcomplete dictionary of size 64×500 by K-SVD, and the maximum number of iterations is set to 100. For a fair comparison, the source images are divided into small patches of size 8×8 using the sliding window technique in the ICA, SR, and JSR methods. The Piella metric [22] and the Petrovic metric [23] are used to evaluate the performance of these fusion methods. These metrics are calculated using the fused images and the corresponding noise-free source images. The closer the Piella and Petrovic metrics are to 1, the better the fusion result. All the experiments are implemented in Matlab 7.9.0 on a Pentium(R) 2.5-GHz PC with 2 GB of RAM.

First, we give results concerning the experiments conducted using three sets of representative images from the Image Fusion Server [24]. The noisy versions of the images of all the test sets are obtained by adding zero-mean Gaussian noise sequences to the gray images. Fig. 3 shows the multisensor images 051 of size 256×256 and their fused outputs using the DWT, ICA, SR, and the proposed method. As to the performance of image fusion, visual (subjective) comparison between the methods indicates that our method is superior to the DWT, ICA, and SR fusion methods and, at the same time, the noise has been removed significantly. In Fig. 3, it is clear that fine details of the source images (such as the lawn borders and the flowers) are far better transferred into the fused image by the proposed method than by the other methods. In addition, the trees in the top left corner of the image are visually more pleasing in the fused image obtained by the proposed method, whereas all the other algorithms have transferred the blurring caused by noise.

Fig. 4 depicts the two input images from the TNO UN Camp image sequence, and their fused outputs using the DWT, ICA,
Fig. 5. Visual comparison of the performance of the fusion methods using medical images corrupted with σ = [20, 15]. (a) Input computed tomography (CT) image. (b) Input magnetic resonance (MR) image. (c) Fused image, DWT fusion method. (d) Fused image, ICA fusion method. (e) Fused image, SR fusion method. (f) Fused image, the proposed method.
TABLE II
PERFORMANCE OF IMAGE FUSION METHODS BY THE STANDARD FUSION METRICS
SR, and the proposed method. The size of the images is 320×240. It is evident that the proposed method maintains the image features while reducing the noise significantly when compared with the other methods. For example, the walking person in the IR image is better transferred into the fused image, and the roof of the house is clearer, in the proposed method compared to the other state-of-the-art methods. Fig. 5 shows two medical images, CT and MR, and their fused images obtained using the DWT, ICA, and SR methods, along with the proposed method. The size of the images is 256×256. It can be seen that the proposed method is less influenced by noise compared to the DWT, ICA, and SR methods. The structure of the bones in the fused images by ICA and DWT is clear, but the contrast is reduced to some extent and the edges are not easily distinguished. For DWT, the edges are less smooth. The SR method gives a better result than ICA and DWT, but the structure of the bones in its fused image is less clear than that of our method. The fused images by our method can distinguish the soft and hard tissues easily, and preserve the details and edges completely. The proposed method provides the result with the best visual appearance.

The corresponding objective measures, the Piella and Petrovic metrics between the source and fused images of the four methods, are listed in Table II. The highest quality measures obtained over all methods are indicated by the values in bold. It can be seen from this table that the fusion performance degrades for all the methods with increasing strength of the noise. From Table II, we also see that our method provides higher values of the Piella and Petrovic metrics compared with the other methods. The Petrovic metric evaluates the amount of edge information that is transferred from the input images to the fused image, and the Piella metric evaluates the correlation, luminance distortion, and contrast distortion simultaneously.
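The activity-based weighting of Section III amounts to normalizing the mean absolute values of the innovation coefficient vectors, as in (13) and (14). A minimal sketch with invented coefficient vectors:

```python
import numpy as np

def fusion_weights(innovation_coeffs):
    """Activity indicators (mean absolute value of each innovation
    coefficient vector) normalized into fusion weights."""
    acts = np.array([np.abs(a).sum() / a.size for a in innovation_coeffs])  # (13)
    return acts / acts.sum()                                               # (14)

a1 = np.array([0.0, 3.0, 0.0, -1.0])   # more active innovation
a2 = np.array([0.0, 0.0, 1.0, 0.0])    # less active innovation
w = fusion_weights([a1, a2])
print(w)  # [0.8 0.2]
```

The source with the more intense innovation activity receives the larger weight, which is exactly the behavior the weighting rule is designed to produce.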
1080 IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 5, NO. 5, SEPTEMBER 2011
TABLE III
MEAN VALUE OF THE METRIC OVER 20 PAIRS OF IMAGES
TABLE IV
MEAN VALUE OF CPU TIME OF THREE METHODS
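The sliding-window analysis and the averaging-based synthesis described in Section III can be sketched as follows; the 8×8 patch size matches the experiments, while the step size and image size are assumptions for illustration:

```python
import numpy as np

def extract_patches(img, win=8, step=4):
    """Slide a win x win window over the image; order each patch lexicographically."""
    coords = [(r, c) for r in range(0, img.shape[0] - win + 1, step)
                     for c in range(0, img.shape[1] - win + 1, step)]
    patches = [img[r:r + win, c:c + win].ravel() for r, c in coords]
    return np.array(patches), coords

def synthesize(patches, coords, shape, win=8):
    """Rebuild the image by averaging overlapping patches at their original positions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for vec, (r, c) in zip(patches, coords):
        acc[r:r + win, c:c + win] += vec.reshape(win, win)
        cnt[r:r + win, c:c + win] += 1
    return acc / np.maximum(cnt, 1)

img = np.arange(256, dtype=float).reshape(16, 16)
P, coords = extract_patches(img)
rec = synthesize(P, coords, img.shape)
print(np.allclose(rec, img))  # True: unmodified patches average back exactly
```

In the actual method the patch vectors would be replaced by their fused reconstructions D·α_F before synthesis; averaging the overlaps also contributes to noise suppression.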
Thus our method not only acquires a fused image of high quality but also guarantees the correlation between the source and fused images.

To further confirm the effectiveness of the proposed method, 20 pairs of multisensor images 001–020 from the Image Fusion Server are fused by ICA, SR, and the proposed method, as shown in Fig. 6. In addition, the images are corrupted by additive Gaussian noise. The fused images are evaluated by the Piella metric and the Petrovic metric, listed in Table III. The values demonstrate that the proposed method is effective and superior to the other methods.

Table IV reports the average computation (CPU) time over the 20 pairs of images required by the three methods. The average CPU time of JSR is longer than that of ICA and SR. Although sparse coding needs less time in the JSR method than in the SR method, the trained dictionary takes more time than the fixed dictionary, so JSR is slightly slower than SR for image fusion. With ever-growing computational capabilities, computational cost may become secondary in importance to the improved performance.

V. CONCLUSION

In this paper, we have presented a novel image fusion method based on joint sparse representation. The proposed method can overcome the drawback of the "weighted average" fusion rule. We extract the common and innovation features of the source images simultaneously by joint sparse representation, making use of the joint sparsity of all source images. The common and innovation features are fused separately, and the sparse coefficients are consequently weighted by the mean absolute values of the innovation coefficients. The experimental results show that the proposed scheme has better fusion performance than the state-of-the-art algorithms.

APPENDIX

In this Appendix, we prove that our algorithm will not result in scale deformation. As discussed in Section III, the sparse coefficients of the fused image are obtained by
TABLE V
VALUES RANGE OF (t) WITH DIFFERENT VALUES OF (t) AND (t)
where the weights can be calculated by (9). For purposes of analysis, we consider the situation of the fusion of two images and replace the ℓ0-norm with the ℓ1-norm. Then (9) is approximated by the constrained problem (17). Considering the tth elements of the coefficient vectors in (12), (15), and (17), the tth element of the fused coefficient vector can be calculated by (18) or (19), where the weights are subject to the constraint in (20).

REFERENCES

[1] N. Cvejic, T. Seppänen, and S. J. Godsill, "A nonreference image fusion metric based on the regional importance measure," IEEE J. Sel. Topics Signal Process., vol. 3, no. 2, pp. 212–220, Apr. 2009.
[2] S. M. Mahbubur Rahman, M. Omair Ahmad, and M. N. S. Swamy, "Contrast-based fusion of noisy images using discrete wavelet transform," IET Image Process., vol. 4, no. 5, pp. 374–384, 2010.
[3] S. Nikolov, D. Bull, and N. Canagarajah, "Wavelets for image fusion," in Wavelets in Signal and Image Analysis. Norwell, MA: Kluwer, 2001.
[4] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graph. Models Image Process., vol. 57, no. 3, pp. 235–245, 1995.
[5] N. Mitianoudis and T. Stathaki, "Optimal contrast correction for ICA-based fusion of multimodal images," IEEE Sensors J., vol. 8, no. 12, pp. 2016–2026, Dec. 2008.
[6] N. Cvejic, D. Bull, and N. Canagarajah, "Region-based multimodal image fusion using ICA bases," IEEE Sensors J., vol. 7, no. 5, pp. 743–751, 2007.
[7] B. Yang and S. Li, "Multifocus image fusion and restoration with sparse representation," IEEE Trans. Instrum. Meas., vol. 59, no. 4, pp. 884–891, Apr. 2010.
[8] N. Mitianoudis and T. Stathaki, "Pixel-based and region-based image fusion schemes using ICA bases," Inf. Fusion, vol. 8, no. 2, pp. 131–142, 2007.
[9] E. Candès, X. Li, Y. Ma, and J. Wright, "Robust principal component analysis?," preprint, 2009. [Online]. Available: http://watt.csl.illinois.edu/~perceive/matrix-rank/references.html
[10] D. Baron, M. Duarte, M. Wakin, S. Sarvotham, and R. Baraniuk, "Distributed compressive sensing," in Proc. Sens., Signal, Inf. Process. Workshop, 2008. [Online]. Available: http://arxiv.org/abs/0901.3403
[11] M. Wakin, M. Duarte, S. Sarvotham, D. Baron, and R. Baraniuk, "Recovery of jointly sparse signals from few random projections," in Proc. Workshop Neural Inf. Process. Syst. (NIPS), Vancouver, BC, Canada, Dec. 2005, pp. 1435–1442.
[12] Y. Deng, D. Li, X. Xie, and K. Lam, "Partially occluded face completion and recognition," in Proc. Int. Conf. Image Process. (ICIP), Cairo, Egypt, Nov. 2009, pp. 4145–4148.
[13] M. Duarte, S. Sarvotham, D. Baron, M. Wakin, and R. Baraniuk, "Distributed compressed sensing of jointly sparse signals," in Proc. Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 2005, pp. 1537–1541.
[14] M. Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322, Nov. 2006.
[15] I. T. Jolliffe, Principal Component Analysis. New York: Springer, 2002.
[16] K. Engan, S. O. Aase, and J. H. Husøy, "Multi-frame compression: Theory and design," Signal Process., vol. 80, no. 10, pp. 2121–2140, 2000.
[17] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, Dec. 2006.
[18] M. Protter and M. Elad, "Image sequence denoising via sparse and redundant representations," IEEE Trans. Image Process., vol. 18, no. 1, pp. 27–35, Jan. 2009.
[19] Q. Zhang and B. Li, "Discriminative K-SVD for dictionary learning in face recognition," in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognition, San Francisco, CA, 2010, pp. 2691–2698.
[20] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Rev., vol. 43, no. 1, pp. 129–159, 2001.
[21] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Conf. Rec. 27th Asilomar Conf. Signals, Syst., Comput., 1993, vol. 1, pp. 40–44.
[22] G. Piella and H. Heijmans, "A new quality metric for image fusion," in Proc. Int. Conf. Image Process. (ICIP), Barcelona, Spain, 2003, pp. 173–176.
[23] C. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electron. Lett., vol. 36, no. 4, pp. 308–309, Feb. 2000.
[24] The Image Fusion Server. [Online]. Available: http://www.imagefusion.org/

Nannan Yu received the B.S. and M.S. degrees from the Harbin Institute of Technology, Harbin, China, in 2003 and 2006, respectively. She is currently working towards the Ph.D. degree in signal and information processing at Dalian University of Technology, Dalian, China.
She is currently an Assistant Professor at Northeast Petroleum University, Daqing, China. Her research interests are in image fusion, biomedical signal processing, and sparse representation.

Tianshuang Qiu received the B.S. degree from Tianjin University, Tianjin, China, in 1983, the M.S. degree from Dalian University of Technology, Dalian, China, in 1993, and the Ph.D. degree from Southeastern University, Nanjing, China, in 1996, all in electrical engineering.
He was a Research Scientist at the Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, China, from 1983 to 1996. He served on the Faculty of Electrical Engineering, Dalian Railway Institute, in 1996. After that, he conducted his post-doctoral research in the Department of Electrical Engineering, Northern Illinois University, DeKalb. He is currently a Professor in the Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology. His research interests include non-Gaussian and non-stationary signal processing, radio frequency signal processing, and biomedical signal processing.

Feng Bi received the B.S. degree from Liaoning University, Shenyang, China, in 1995 and the M.S. degree from Liaoning Normal University, Dalian, China, in 2005. He is currently working towards the Ph.D. degree in signal and information processing at Dalian University of Technology, Dalian, China.
He is currently an Assistant Professor at Eastern Liaoning University, Shenyang, China. His research interests are in non-Gaussian and non-stationary signal processing, biomedical signal processing, and blind signal processing.

Aiqi Wang received the B.S. degree from the Department of Mathematics, Liaoning Normal University, Dalian, China, in 2002 and the M.S. degree from the Department of Mathematics, Dalian University of Technology, in 2005. He is currently pursuing the Ph.D. degree at the School of Electronic and Information Engineering, Dalian University of Technology.
His research interests include pattern recognition, medical image processing, and computer vision.