Multi-Focus Image Fusion For Extended Depth of Field: Wisarut Chantara and Yo-Sung Ho
Table 1: Objective evaluation of the extended depth of field image (non-reference fusion metrics)
than those of the spatial frequency method and the proposed method. However, it is hard to tell the difference between the results of the spatial frequency method and the proposed method by subjective evaluation. Hence, this paper employs non-reference fusion metrics, namely Feature Mutual Information (FMI) [3] and Petrovic's metric (Q^{AB/F}) [12]. These evaluation metrics are computed without any reference image. FMI measures the amount of information that the fused image contains about the source images, while Q^{AB/F} measures the relative amount of edge information transferred from the source images into the fused image. The higher the FMI or Q^{AB/F} value, the better the fusion performance. The comparison results are summarized in Table 1.

The two evaluation criteria are applied to the fusion results shown in Fig. 3 and Fig. 4; the detailed quantitative results are given in Table 1. From Table 1, we observe that all quality indices of the proposed method are larger than those of the pixel averaging, DWT-averaging, DWT-maximum, and conventional spatial frequency methods. This indicates that the proposed algorithm effectively combines the sharp parts of the source images into the fused image and yields superior quality compared with the conventional methods.

4 CONCLUSIONS

ACKNOWLEDGMENTS
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (MSIP) (No. 2011-0030079).

REFERENCES
[1] Xiangzhi Bai, Yu Zhang, Fugen Zhou, and Bindang Xue. 2015. Quadtree-based multi-focus image fusion using a weighted focus-measure. Inf. Fusion 22 (March 2015), 105–118.
[2] Rajenda Pandit Desale and Sarita V. Verma. 2013. Study and analysis of PCA, DCT & DWT based Image Fusion Techniques. In Int. Conf. on Signal Processing, Image Processing and Pattern Recognition. IEEE, Coimbatore, India, 66–69.
[3] Mohammad Haghighat and Masoud Amirkabiri Razian. 2014. Fast-FMI: non-reference image fusion metric. In Proceedings of 8th Int. Conf. on Application of Information and Communication Technologies. IEEE, Astana, Kazakhstan, 1–3.
[4] Wei Huang and Zhongliang Jing. 2007. Evaluation of focus measures in multi-focus image fusion. Pattern Recognition Letters 28, 4 (March 2007), 493–500.
[5] Chu-Hui Lee and Zheng-Wei Zhou. 2012. Comparison of image fusion based on DCT-STD and DWT-STD. In Proceedings of the International MultiConference of Engineers and Computer Scientists (IMECS 2012). Hong Kong, 720–725.
[6] Shutao Li, James T. Kwok, and Yaonan Wang. 2001. Combination of images with diverse focuses using the spatial frequency. Inf. Fusion 2, 3 (Sept. 2001), 169–176.
[7] Shutao Li and Bin Yang. 2008. Multifocus image fusion using region segmentation and spatial frequency. Image and Vision Computing 26, 7 (July 2008), 971–979.
[8] Lytro. [n. d.]. The Lytro camera. http://www.lytro.com
[9] Nirav Patel. 2013. lfptools. http://github.com/nrpatel/lfptools/
[10] Raytrix. [n. d.]. The Raytrix camera. http://www.raytrix.de
[11] Nianyi Wang, Yide Ma, and Weilan Wang. 2014. DWT-based multisource image fusion using spatial frequency and simplified pulse coupled neural network. Journal of Multimedia 9, 1 (Jan. 2014), 159–165.
[12] C.S. Xydeas and Vladimir Petrovic. 2000. Objective image fusion performance measure. Electronics Letters 36, 4 (Feb. 2000), 308–309.
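For reference, the two ideas underlying the evaluation can be sketched in a few lines of NumPy. The `spatial_frequency` function follows the standard definition SF = sqrt(RF^2 + CF^2) from [6], while `mutual_information` is a simplified histogram-based stand-in for the feature mutual information of [3] (the actual Fast-FMI metric operates on extracted image features over local windows, so this sketch only illustrates the underlying quantity):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a grayscale image: SF = sqrt(RF^2 + CF^2),
    where RF/CF are the RMS of horizontal/vertical first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two images (bits).
    A simplified illustration of the quantity behind FMI [3]."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                      # joint distribution
    px = p.sum(axis=1, keepdims=True)          # marginal of a
    py = p.sum(axis=0, keepdims=True)          # marginal of b
    nz = p > 0                                 # avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))
```

A fused image that preserves more source content scores a higher mutual information with the sources; a sharper fused image scores a higher spatial frequency.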
ICIMCS'18, August 17-19, 2018, Nanjing, China. W. Chantara et al.
Figure 3: Comparison of "Cup": (a) Pixel Avg., (b) DWT-Avg., (c) DWT-Max, (d) SF, (e) Proposed method.
Figure 4: Comparison of "Stuff": (a) Pixel Avg., (b) DWT-Avg., (c) DWT-Max, (d) SF, (e) Proposed method.
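The DWT-based baselines compared above can be illustrated with a minimal single-level Haar sketch (an assumption for illustration only; the paper's baselines may use a different wavelet and decomposition depth). The approximation bands are averaged and, for the DWT-Max rule, the larger-magnitude detail coefficient is selected from either source:

```python
import numpy as np

def haar2(x):
    """One-level 2D orthonormal Haar transform (even-sized input)."""
    s2 = np.sqrt(2.0)
    lo = (x[:, ::2] + x[:, 1::2]) / s2   # row low-pass
    hi = (x[:, ::2] - x[:, 1::2]) / s2   # row high-pass
    ll = (lo[::2] + lo[1::2]) / s2       # approximation
    lh = (lo[::2] - lo[1::2]) / s2       # horizontal detail
    hl = (hi[::2] + hi[1::2]) / s2       # vertical detail
    hh = (hi[::2] - hi[1::2]) / s2       # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    s2 = np.sqrt(2.0)
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[::2], lo[1::2] = (ll + lh) / s2, (ll - lh) / s2
    hi[::2], hi[1::2] = (hl + hh) / s2, (hl - hh) / s2
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, ::2], x[:, 1::2] = (lo + hi) / s2, (lo - hi) / s2
    return x

def dwt_max_fuse(img1, img2):
    """DWT-Max rule: average the approximation bands, keep the
    larger-magnitude detail coefficient from either source."""
    c1 = haar2(img1.astype(np.float64))
    c2 = haar2(img2.astype(np.float64))
    fused = [(c1[0] + c2[0]) / 2]        # average approximations
    for d1, d2 in zip(c1[1:], c2[1:]):   # max-abs selection on details
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2(*fused)
```

Because the Haar transform is orthonormal, `ihaar2(*haar2(x))` reconstructs `x` exactly, so fusing an image with itself returns the image unchanged.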