Novel Algorithm For Multifocus Image Fusion: Integration of Convolutional Neural Network and Partial Differential Equation
Abstract. This paper presents a novel method for multifocus image fusion that combines
anisotropic diffusion PDE filtering with convolutional neural network (CNN) feature extraction.
The anisotropic diffusion filter preserves image edges and details while reducing noise, and a
CNN with ReLU activations extracts features from the filtered images. The method is evaluated
on a dataset of multifocus images and compared with traditional and CNN-based approaches,
demonstrating superior performance in both visual quality and quantitative metrics such as
normalized mutual information, a phase congruency-based metric, and a structural similarity-based
metric. We further enhance the approach by incorporating machine learning techniques that
automatically optimize the parameters of the fusion algorithm, yielding more reliable and
accurate results.
1 Introduction
Image fusion is the process of combining multiple images of the same scene to produce
a single image that contains more complete and detailed information than any of
the individual images [21, 14]. It is useful in many applications, such as medical
imaging [20, 30, 22, 32], satellite imaging [9, 29], and surveillance [8, 23]. Multifocus
image fusion is a specific type of image fusion that combines multiple images
of the same scene captured at different focus distances, resulting in a fused image
with sharp and clear details [6, 1]. One of the key challenges in multifocus image
fusion is preserving the sharpness of the features while reducing blur and noise [31].
Partial differential equation (PDE) based methods have been widely used to achieve
this goal. PDE filters, such as anisotropic diffusion [4], can effectively preserve the
edges and details of the images while smoothing out
the noise [26]. Machine learning (ML) has been applied to image fusion with promising
results, typically through deep learning (DL) techniques such as convolutional neural
networks (CNNs). These techniques have proven effective across a wide range of fusion
tasks, including multifocus, multi-modal, and multi-spectral image fusion [24]. A main
advantage of ML in image fusion is the ability to learn, directly from the input images,
the features that matter for the fusion task. This can lead to more accurate and robust
fusion results than traditional methods that rely on hand-crafted features. ML-based
methods also scale to large amounts of data, which is important for many fusion
applications. CNNs in particular can learn the features of the input images and produce
a fused image that preserves the important information from both inputs [3, 10]. In this
paper, we propose a novel PDE-CNN image fusion method that combines the advantages
of PDE filters and CNNs: an anisotropic diffusion PDE filter is first applied to the
input images to preserve edges and details, and a CNN then extracts features from the
filtered images and fuses them to produce the final fused image.
2 Overview of PDE
In image fusion, partial differential equations (PDEs) can be used to enhance the
quality of the input images before they are fused. PDEs describe the evolution of a
function over time or space and can model various image processing tasks such as
restoration, enhancement, and segmentation [25, 15]. One popular PDE-based method
for image enhancement is the Perona-Malik equation [16, 17], a non-linear diffusion
equation that enhances edges and details while preserving the overall structure.
Closely related is anisotropic diffusion, in which the diffusion strength is controlled
by the gradient of the image intensity [19]. In multifocus image fusion, PDEs can
enhance the edges and details in the input images, making it easier for the fusion
algorithm to identify the regions that should be merged into the final fused image.
PDEs can also reduce noise and blur in the inputs, improving the overall quality of
the result. PDE-based methods are likewise used to enhance the features of the input
images before they are passed through a CNN, improving the performance of the CNN.
PDEs are not the only way to enhance images before fusion; other techniques, such as
linear filtering and histogram equalization, can also be used.
The anisotropic diffusion equation [16, 17] governs the smoothing process,

$$\frac{\partial u}{\partial t} = \operatorname{div}\!\big(c\,\nabla u\big), \tag{2.1}$$

and the image gradient is approximated by central finite differences,

$$\frac{\partial u}{\partial x} = \frac{u(i+1, j) - u(i-1, j)}{2}, \qquad \frac{\partial u}{\partial y} = \frac{u(i, j+1) - u(i, j-1)}{2}, \tag{2.2}$$

where $\partial u/\partial x$ and $\partial u/\partial y$ are the partial derivatives of the image in the $x$ and $y$
directions, respectively. The diffusion coefficient $c$ is defined as

$$c = e^{-\frac{\|\nabla u\|^{2}}{k^{2} t}}. \tag{2.3}$$

The third stage, edge preservation, limits the intensity change of each pixel wherever
the gradient of the image intensity is large, corresponding to an edge. This preserves
the sharpness of the edges and is computed from the Laplacian of the image,
approximated by finite differences as

$$\nabla^{2} u \approx u(i+1, j) + u(i-1, j) + u(i, j+1) + u(i, j-1) - 4\,u(i, j). \tag{2.4}$$
These equations form the basis of the anisotropic diffusion filter, which iteratively
updates the image intensity values to achieve the desired level of diffusion. The
filter removes noise and small variations in intensity while preserving the edges and
details of the image.
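To make the scheme concrete, the following is a minimal Python/NumPy sketch of the explicit update implied by equations (2.1)-(2.3); the parameter names (`k`, `lam`, `n_iter`), the default values, and the periodic border handling are our illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def anisotropic_diffusion(u, n_iter=15, k=0.1, lam=0.2):
    """One possible explicit scheme for Perona-Malik style diffusion.

    u      : 2-D float image, values scaled to [0, 1]
    n_iter : number of diffusion iterations
    k      : conductance parameter, cf. Eq. (2.3)
    lam    : step size; <= 0.25 keeps the 4-neighbour scheme stable
    """
    u = u.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours. np.roll gives periodic
        # borders; a production version would replicate the edges.
        dn = np.roll(u, -1, axis=0) - u   # u(i+1, j) - u(i, j)
        ds = np.roll(u,  1, axis=0) - u   # u(i-1, j) - u(i, j)
        de = np.roll(u, -1, axis=1) - u   # u(i, j+1) - u(i, j)
        dw = np.roll(u,  1, axis=1) - u   # u(i, j-1) - u(i, j)

        # Edge-stopping coefficient c = exp(-|grad u|^2 / k^2) per
        # direction, in the spirit of Eq. (2.3) with the time factor
        # folded into k.
        cn, cs = np.exp(-(dn / k) ** 2), np.exp(-(ds / k) ** 2)
        ce, cw = np.exp(-(de / k) ** 2), np.exp(-(dw / k) ** 2)

        # Explicit update u <- u + lam * div(c grad u), cf. Eq. (2.1).
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```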
4 Proposed Method
In the proposed PDE-CNN image fusion method, the input to the CNN is the pair of
PDE-filtered multifocus images. The CNN processes these images through multiple layers,
including convolutional, activation, and pooling layers, to extract relevant features,
which are then used to fuse the images and produce the final fused image. The
convolutional layers consist of filters that slide over the input image and perform
element-wise multiplication and summation to produce feature maps. The activation
layers apply a non-linear activation function, such as the rectified linear unit (ReLU),
to the feature maps to introduce non-linearity into the network. The pooling layers
reduce the spatial dimensions of the feature maps while preserving the most important
information. A minimal sketch of such a feature extractor is given below, followed by
a step-by-step summary of the full procedure.
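The following PyTorch sketch illustrates this layer stack; the framework choice, channel counts, and kernel sizes are illustrative assumptions, since the paper does not fix a specific architecture.

```python
import torch
import torch.nn as nn

# Minimal feature extractor matching the layer types described above
# (conv -> ReLU -> max pool); channel counts and kernel sizes are
# illustrative, not the paper's exact architecture.
feature_extractor = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # y = f(W * x + b)
    nn.ReLU(),                                    # f(x) = max(0, x)
    nn.MaxPool2d(2),                              # halve spatial dims
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)

# Example: features of one PDE-filtered grayscale image (batch of 1).
x = torch.rand(1, 1, 256, 256)       # stand-in for a filtered input
features = feature_extractor(x)      # shape: (1, 32, 64, 64)
```

The complete fusion procedure is summarized in the following steps.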
(1) Pre-process input images using equations 2.1, 2.2, 2.3, and 2.4.
(2) For each CNN layer, the output is given by $y = f(W * x + b)$, where $y$ is the
output, $f$ is the activation function, $W$ are the learnable filters, $x$ is the
input, and $b$ is the bias term.
(3) The activation function can be ReLU, $f(x) = \max(0, x)$. Max pooling is used
to reduce spatial dimensions.
(4) Train a machine learning model, such as a neural network, to optimize the
weighting coefficients ($w_A$ and $w_B$) based on the input images and the desired
output.
(5) Use a dataset of input image pairs $(A, B)$ and their corresponding ground-truth
fused images $(C)$ for training.
(6) The trained model takes the pre-processed input images $(A, B)$ as inputs and
outputs the optimized weighting coefficients $(w_A, w_B)$ for each pixel.
(7) The fusion rule for pixel $(x, y)$ in the output fused image $C$ is

$$C(x, y) = w_A(x, y)\, A(x, y) + w_B(x, y)\, B(x, y),$$

where $w_A(x, y)$ and $w_B(x, y)$ are the weighting coefficients for images $A$ and
$B$ at pixel $(x, y)$, respectively.

(8) The coefficients are determined from the decision map:

$$w_A(x, y) = \frac{D(x, y)}{D(x, y) + (1 - D(x, y))}, \qquad w_B(x, y) = \frac{1 - D(x, y)}{D(x, y) + (1 - D(x, y))}.$$
(9) The decision map is defined as $D(i, j) = M(i, j)/\|\nabla u\|$.

(10) The final decision map is divided into clear, fuzzy, and transition regions
according to the fusion weight $M(i, j)$: clear regions have $M(i, j) = 1$, fuzzy
regions have $M(i, j) = 0$, and transition regions have $0 < M(i, j) < 1$.

(11) The fused image is obtained by thresholding $D(i, j)$; a sketch of the fusion
step follows the list.
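The following NumPy sketch illustrates steps (7)-(11), assuming the decision map $D$ has already been produced by the trained model; the function name, threshold handling, and synthetic inputs are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_pair(A, B, D, threshold=None):
    """Fuse two registered multifocus images with a per-pixel decision map.

    A, B      : 2-D float arrays (the pre-processed source images)
    D         : 2-D array in [0, 1], the decision map from the model
    threshold : if given, binarize D first (step 11); else soft weights
    """
    if threshold is not None:
        D = (D >= threshold).astype(A.dtype)   # keep clear/fuzzy regions only
    # Step (8): since D + (1 - D) = 1, the weights reduce to
    # w_A = D and w_B = 1 - D.
    wA, wB = D, 1.0 - D
    # Step (7): per-pixel weighted combination.
    return wA * A + wB * B

# Example with a synthetic decision map.
A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
D = np.random.rand(256, 256)
C_soft = fuse_pair(A, B, D)              # soft fusion in transition regions
C_hard = fuse_pair(A, B, D, threshold=0.5)   # hard selection after thresholding
```

Note that because the denominator $D(x, y) + (1 - D(x, y))$ equals one, the fusion is simply a per-pixel convex combination of the two sources, with the decision map acting as the mixing weight.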
Figure 3: Comparison of fusion algorithms with the proposed method on the 'Two clocks'
source images: (a) right-side blurred image, (b) left-side blurred image; fusion
results using (c) Gaussian filtering (GF), (d) the discrete wavelet transform (DWT),
and (f) the proposed method.
Figures 4(a) and (b) display the right- and left-focused images of the plant-and-clock
scene, respectively. These blurred images serve as the input data for the fusion
algorithms under evaluation.

Subfigures 4(c), (d), and (f) show the results obtained from three fusion algorithms:
Gaussian filtering (GF), the discrete wavelet transform (DWT), and the proposed method,
respectively. These panels visually compare the effectiveness of each algorithm in
preserving image details, enhancing contrast, and reducing artifacts.

Upon visual inspection, the proposed method (f) outperforms the other algorithms in
image clarity, sharpness, and fidelity to the original plant image. It demonstrates
superior fusion capability by effectively combining the information from both input
images while minimizing distortion and loss of information.
5 Results and Evaluation

The fused results are assessed with standard objective metrics. The normalized mutual
information metric $Q_{MI}$ is built from $MI(I_1, I_2)$, the mutual information
between the source images $I_1$ and $I_2$, and $H(I)$, the entropy of image $I$. The
phase congruency metric $Q_{PC}$ is built from $PC(F, F_i)$, the phase congruency
measure between the fused image $F$ and the $i$-th source image $F_i$, where $n$ is
the number of source images. The contrast-based metric $Q_{CB}$ is built from
$CB(F, F_i)$, the contrast-based measure between $F$ and $F_i$.
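To illustrate the information-theoretic quantities named above, here is a short NumPy sketch computing the entropy $H(I)$ and the mutual information $MI(I_1, I_2)$ from image histograms; the bin count and intensity range are assumptions, and the exact normalization used in $Q_{MI}$ is not reproduced here.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy H(I) of a grayscale image in [0, 1], in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """Mutual information MI(I1, I2) between two registered images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[0, 1], [0, 1]])
    pxy = joint / joint.sum()         # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```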
Table 1: Results from an objective evaluation of several approaches for 'plant clock'
image fusion.

Methods    Q_MI      Q_PC     Q_W      Q_CB
GF         8.04789   0.7077   0.6231   0.73914
DWT        8.6234    0.6745   0.5879   0.7436
CNN        9.1123    0.7025   0.6741   0.7493
CNN-PDE    9.1512    0.7146   0.6723   0.7423
From Table 1, it is observed that the CNN-based methods (CNN and CNN-PDE) generally
achieve higher scores across all evaluation metrics than traditional methods such as
GF and DWT. This suggests that CNN-based approaches are more effective at capturing
and preserving important features during fusion.

Additionally, the PDE-CNN method shows slightly improved performance over the standard
CNN approach, indicating the beneficial impact of the anisotropic diffusion
pre-processing.
Table 2: Results from an objective evaluation of several approaches for 'Two clocks'
image fusion.

Methods    Q_MI     Q_PC     Q_W       Q_CB
GF         8.0421   0.7101   0.61452   0.7458
DWT        8.1243   0.6789   0.5699    0.7396
CNN        9.0912   0.7002   0.6213    0.7489
CNN-PDE    9.1248   0.7796   0.6278    0.7768
Figure 6 depicts a bar chart comparing the methods based on their average metric values
computed with a sliding window of size 3 × 3. The chart provides a visual representation
of each method's performance across the quality metrics, enabling a quick assessment of
accuracy and consistency. The differences in bar heights show that the proposed fusion
methodology outperforms previous methods such as GF, DWT, and CNN on the evaluation
metrics $Q_{MI}$, $Q_{PC}$, $Q_W$, and $Q_{CB}$.
Figure 7 presents the corresponding comparison for a sliding window of size 5 × 5, so
that local spatial context is taken into account over a larger patch. The results
provide strong evidence that the CNN-based methods and the proposed approach achieve
promising evaluation scores, while DWT and GF report the lowest scores. A sketch of
the windowed averaging appears below.
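The following is a minimal sketch of this sliding-window averaging, assuming a per-pixel quality map is available; the map itself and the boundary mode are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_average(quality_map, window=3):
    """Average a per-pixel quality map over a sliding window.

    quality_map : 2-D array of local metric values
    window      : window size (3 for Figure 6, 5 for Figure 7)
    """
    return uniform_filter(quality_map, size=window, mode='nearest')

# Example: compare window sizes on a synthetic quality map.
q = np.random.rand(128, 128)
q3 = windowed_average(q, window=3)
q5 = windowed_average(q, window=5)
print(q3.mean(), q5.mean())   # summary scores as plotted in the bar charts
```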
Our experimental results demonstrate that this method can effectively enhance
the resolution and quality of fused images compared to traditional methods.
6 Conclusion
In conclusion, this study has presented a comprehensive analysis of various image
fusion methods and their performance evaluation using objective quality metrics.
Through experimentation and analysis, we have demonstrated the effectiveness of
different fusion algorithms in enhancing image quality and preserving important
visual information.
The results obtained from the objective evaluation provide valuable insights into
the strengths and weaknesses of each method, enabling researchers to make informed
decisions when selecting the most appropriate technique for specific applications.
Furthermore, the comparison of average values using sliding windows of different
sizes has shed light on the impact of window size on the accuracy and consistency
of the fusion process.
Overall, this research contributes to the advancement of image fusion techniques
by providing a systematic evaluation framework and benchmarking methodology. It
lays the groundwork for future research endeavors aimed at developing more robust
and efficient fusion algorithms for various imaging applications.
As the field of image fusion continues to evolve, there remains ample opportunity
for further exploration and innovation. Future work may focus on exploring novel
fusion strategies, optimizing parameter settings, and integrating advanced machine-
learning techniques to further enhance fusion performance and address emerging
challenges in image processing and computer vision.
References
[1] U. Ali, I.H. Lee, M.T. Mahmood, Incorporating structural prior for depth reg-
ularization in shape from focus, Computer Vision and Image Understanding,
227 (2023), article 103619. https://doi.org/10.1016/j.cviu.2022.103619.
[2] Y. Chen, R.S. Blum, A new automated quality assessment algorithm for image fusion,
Image and Vision Computing, 27, no. 10 (2009), 1421–1432.
https://doi.org/10.1016/j.imavis.2007.12.002.
[3] C. Cheng, T. Xu, X.-J. Wu, MUFusion: A general unsupervised image fusion
network based on memory unit, Information Fusion, 92 (2023), 80–92.
https://doi.org/10.1016/j.inffus.2022.11.010.
[5] R.C. Gonzalez, R.E. Woods, S.L. Eddins, Digital image processing using MAT-
LAB, Gatesmark Publishing, 2020.
[8] C.-G. Im, D.-M. Son, H.-J. Kwon, S.-H. Lee, Tone Image Classification and
Weighted Learning for Visible and NIR Image Fusion, Entropy, 24, no. 10 (2022),
article 1435. https://doi.org/10.3390/e24101435.
[10] L. Li, C. Li, X. Lu, H. Wang, D. Zhou, Multi-focus image fusion with
convolutional neural network based on Dempster-Shafer theory, Optik, 272 (2023),
article 170223. https://doi.org/10.1016/j.ijleo.2022.170223.
[11] S. Liu, W. Peng, W. Jiang, Y. Yang, J. Zhao, Y. Su, Multi-focus image fusion
dataset and algorithm test in real environment, Frontiers in Neurorobotics, 16
(2022). https://doi.org/10.3389/fnbot.2022.1024742.
[12] Y. Liu, L. Wang, J. Cheng, C. Li, X. Chen, Multi-focus image fusion:
A survey of the state of the art, Information Fusion, 64 (2020), 71–91.
[15] X. Pan, Q. Zhao, J. Liu, Edge extraction and reconstruction of terahertz
image using simulation evolutionary with the symmetric fourth order partial
differential equation, Optoelectronics Letters, 17, no. 3 (2023), 187–192.
[16] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffu-
sion, Proceedings of IEEE Computer Society Workshop on Computer Vision,
November 1987, 16–22.
[17] P. Perona and J. Malik, Scale-space and edge detection using anisotropic
diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12,
no. 7 (1990), 629–639. https://doi.org/10.1109/34.56205.
[19] J. Sliz and J. Mikulka, Advanced image segmentation methods using partial
differential equations: A concise comparison, 2016 Progress in Electromagnetic
Research Symposium (PIERS), IEEE, 2016.
[20] G.J. Trivedi, R. Sanghvi, Medical image fusion using CNN with automated
pooling, Indian Journal of Science and Technology, 15, no. 42 (2022), 2267–2274.
https://doi.org/10.17485/ijst/v15i42.1812.
[21] G. Trivedi and R. Sanghvi, Fusesharp: A multi-image focus fusion method using
discrete wavelet transform and unsharp masking, Journal of Applied Mathematics &
Informatics, 41, no. 5 (2023), 1115–1128. https://doi.org/10.14317/jami.2023.1115.
Zbl 07793714.
[22] G. Trivedi and R. Sanghvi, Optimizing image fusion using modified principal
component analysis algorithm and adaptive weighting scheme, International Journal
of Advanced Networking and Applications, 15, no. 1 (2023), 5769–5774.
https://doi.org/10.35444/ijana.2023.15103.
[23] G. Trivedi and R. Sanghvi, Hybrid Model for Infrared and Visible Im-
age Fusion, Annals of the Faculty of Engineering Hunedoara, 21, no. 3
(2023), 167–173. https://www.proquest.com/scholarly-journals/hybrid-model-
infrared-visible-image-fusion/docview/2867370935/se-2.
[24] G. Trivedi and R. Sanghvi, Novel approach to multi-modal image fusion using
modified convolutional layers, Journal of Innovative Image Processing, 5, no. 3
(2023), 229. https://doi.org/10.36548/jiip.2023.3.002.
[26] G. Trivedi and R. Sanghvi, Automated multimodal fusion with PDE preprocess-
ing and learnable convolutional pools, ADBU-Journal of Engineering Technol-
ogy, 13, no. 1 (2024), p. 0130104066.
[29] G. Xiao, D.P. Bavirisetti, G. Liu, X. Zhang, Image Fusion, Springer, 2020.
https://doi.org/10.1007/978-981-15-4867-3.
[30] G. Zhang, R. Nie, J. Cao, L. Chen, Y. Zhu, FDGNet: A pair feature difference
guided network for multimodal medical image fusion, Biomedical Signal Processing
and Control, 81 (2023), article 104545. https://doi.org/10.1016/j.bspc.2022.104545.
[32] T. Zhou, Q. Li, H. Lu, Q. Cheng, X. Zhang, GAN review: Models and medical
image fusion applications, Information Fusion, 91 (2023), 134–148.
https://doi.org/10.1016/j.inffus.2022.10.017.
Gargi J Trivedi
The Charutar Vidyamandal University,
Department of Applied Science & Humanities,
G H Patel College of Engineering & Technology,
Vallabh Vidhyanagar-388120, India.
e-mail: [email protected]
Rajesh Sanghvi
The Charutar Vidyamandal University,
Department of Applied Science & Humanities,
G H Patel College of Engineering & Technology,
Vallabh Vidhyanagar-388120, India.
e-mail: [email protected]
License
This work is licensed under a Creative Commons Attribution 4.0 International Li-
cense.
Received: July 2, 2023; Accepted: April 2, 2024; Published: April 19, 2024.