
Surveys in Mathematics and its Applications

ISSN 1842-6298 (electronic), 1843-7265 (print)


Volume 19 (2024), 179 – 195

NOVEL ALGORITHM FOR MULTIFOCUS IMAGE FUSION: INTEGRATION OF CONVOLUTIONAL NEURAL NETWORK AND PARTIAL DIFFERENTIAL EQUATION
Gargi J Trivedi and Rajesh Sanghvi

Abstract. This paper presents a novel method for multifocus image fusion that combines anisotropic diffusion PDE filtering with convolutional neural network (CNN) feature extraction. The proposed method aims to preserve image edges and details while reducing noise through anisotropic diffusion PDE filtering; a CNN architecture with ReLU activation is then employed for feature extraction. The method is evaluated on a dataset of multifocus images and compared with traditional and CNN-based approaches, demonstrating superior performance in visual quality and in quantitative metrics such as Normalized Mutual Information, a Phase Congruency-based metric, and a Structural Similarity-based metric. Furthermore, we enhance the approach by incorporating machine learning techniques to optimize the parameters of the image fusion algorithm; by adjusting these parameters automatically, we aim to achieve the most reliable and accurate outcomes.

2020 Mathematics Subject Classification: 34A08; 54H30; 47H10; 47H20; 68U10; 94A08.
Keywords: Partial Differential Equation (PDE); Machine Learning (ML); Convolutional Neural Network (CNN); Image Fusion (IF); Multifocus Images (MF).

1 Introduction
Image fusion is the process of combining multiple images of the same scene to produce
a single image that contains more complete and detailed information than any of
the individual images [21, 14]. This is useful in many applications, such as medical
imaging [20, 30, 22, 32], satellite imaging [9, 29], and surveillance [8, 23]. Multifocus
image fusion is a specific type of image fusion that aims to combine multiple images
of the same scene that have been captured at different focus distances, resulting
in a fused image with sharp and clear details [6, 1]. One of the key challenges in multifocus image fusion is preserving the sharpness of features while reducing blur and noise [31]. Partial differential equation (PDE) based methods have been widely used to achieve this goal. PDE filters, such as anisotropic diffusion [4], can effectively preserve the edges and details of the images while smoothing out the noise [26]. Machine learning (ML) has been applied to the field of image fusion
with promising results. The use of ML in image fusion is typically done using deep
learning (DL) techniques, such as convolutional neural networks (CNNs). These
techniques have been shown to be effective in a wide range of image fusion tasks,
such as multi-focus image fusion, multi-modal image fusion, and multi-spectral image
fusion [24]. One of the main advantages of using ML in image fusion is the ability
to learn features from the input images that are important for the fusion task.
This can lead to more accurate and robust fusion results compared to traditional
methods that rely on hand-crafted features. Additionally, ML-based methods are
capable of handling large amounts of data, which is important for many image fusion
applications. Convolutional neural networks (CNNs) have also been used for image
fusion. CNNs can learn the features of the input images and produce a fused image
that preserves the important information from both images [3, 10]. In this paper, we
propose a novel PDE-CNN based image fusion method that combines the advantages
of both PDE filters and CNNs. The proposed method applies an anisotropic diffusion
PDE filter to the input images to preserve the edges and details, followed by a CNN
to extract features from the PDE filtered images and fuse them to produce the final
fused image.

2 Overview of PDE

In image fusion, partial differential equations (PDEs) can be used to enhance the
quality of the input images before they are fused. PDEs are mathematical equations
that describe the evolution of a function over time or space. They can be used to
model various image processing tasks such as image restoration, enhancement, and
segmentation [25, 15]. One popular PDE-based method for image enhancement
is the Perona-Malik equation [16, 17], which is a non-linear diffusion equation that
enhances edges and details in images while preserving the overall structure. A closely related formulation is anisotropic diffusion, which, like the Perona-Malik equation, steers the diffusivity by the gradient of the image intensity [19]. In the case of multifocus
image fusion, PDEs can be used to enhance the edges and details in the input
images, which can improve the performance of the fusion algorithm. By enhancing
the input images, PDEs can make it easier for the fusion algorithm to identify the
regions of the images that should be merged to create the final fused image. PDEs
can also help to reduce the noise and blur in the input images, which can improve
the overall quality of the fused image. Additionally, PDE-based methods can be used to enhance the features of the input images before they are passed through a CNN, improving the CNN's performance. PDEs are, however, not the only way to enhance images before fusion; alternatives include linear filtering, histogram equalization, and related techniques.


2.1 Anisotropic diffusion


Anisotropic diffusion is a technique used in image processing to preserve the edges and details of an image while smoothing out noise and small variations in intensity. This is achieved by applying a partial differential equation (PDE) filter to the image [28]. The PDE filter models the diffusion of heat over time, where the heat flow is guided by the gradient of the image intensity: the filter smooths the intensity in regions with low gradient but stops diffusion in regions with high gradient, which correspond to edges. The following steps form the basis of the anisotropic diffusion filter algorithm, which iteratively updates the image intensity values to achieve the desired level of diffusion.
(1) Initialization: define the parameters of the anisotropic diffusion filter, such as the time step, the number of iterations, and the coefficients that control the flow of heat in different directions.

(2) Diffusion: compute the intensity change of each pixel from the intensities of its neighbouring pixels, guided by the gradient of the image intensity, and repeat for multiple iterations until the desired level of smoothing is achieved. The update is governed by
$$\frac{\partial u}{\partial t} = \nabla \cdot \left( c\, \nabla u \right) \tag{2.1}$$
where u is the image intensity, t is the time, c is the diffusion coefficient, ∇· is the divergence operator, and ∇u is the gradient of the image, approximated using finite differences:

$$\frac{\partial u}{\partial x} \approx \frac{u(i+1, j) - u(i-1, j)}{2}, \qquad \frac{\partial u}{\partial y} \approx \frac{u(i, j+1) - u(i, j-1)}{2} \tag{2.2}$$

where ∂u/∂x and ∂u/∂y are the partial derivatives of the image in the x and y directions, respectively. The diffusion coefficient c is defined as
$$c = e^{-\|\nabla u\|^2 / k^2} \tag{2.3}$$
where k controls the sensitivity of the diffusion to edges.

(3) Edge preservation: limit the intensity change of a pixel when the gradient of the image intensity is high, which corresponds to an edge; this preserves the sharpness of the edges in the image. The required second-order term is the Laplacian of the image, approximated by finite differences as
$$\nabla^2 u \approx u(i+1, j) + u(i-1, j) + u(i, j+1) + u(i, j-1) - 4\,u(i, j) \tag{2.4}$$


These equations form the basis of the anisotropic diffusion filter algorithm, which iteratively updates the image intensity values until the desired level of diffusion is reached, removing noise and small variations in intensity while preserving the edges and details of the image.
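To make the iteration concrete, the following NumPy sketch implements the update of equations (2.1)–(2.4) with the classical four-neighbour discretisation. The function name, the periodic border handling, and the default parameter values are illustrative choices of ours, not details prescribed by the paper.

```python
import numpy as np

def anisotropic_diffusion(u, n_iter=20, k=3.0, dt=0.125):
    """Perona-Malik-style anisotropic diffusion, Eqs. (2.1)-(2.4).

    u      : 2-D float array of image intensities
    n_iter : number of diffusion iterations
    k      : edge-sensitivity parameter of c = exp(-|grad u|^2 / k^2)
    dt     : time step of the explicit update (stable for dt <= 0.25)
    """
    u = u.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic border for brevity).
        dn = np.roll(u, 1, axis=0) - u    # north
        ds = np.roll(u, -1, axis=0) - u   # south
        de = np.roll(u, -1, axis=1) - u   # east
        dw = np.roll(u, 1, axis=1) - u    # west
        # Diffusivity per direction, Eq. (2.3): ~1 in flat areas, ~0 at
        # edges, which limits the intensity change near edges (step 3).
        cn, cs = np.exp(-(dn / k) ** 2), np.exp(-(ds / k) ** 2)
        ce, cw = np.exp(-(de / k) ** 2), np.exp(-(dw / k) ** 2)
        # Explicit Euler step for du/dt = div(c grad u), Eq. (2.1); with
        # c = 1 the bracket reduces to the 5-point Laplacian of Eq. (2.4).
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```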

3 Overview of Convolutional Neural Networks (CNNs)


Convolutional neural network (CNN) techniques are effective in a wide range of
image fusion tasks, such as multi-focus image fusion, multi-modal image fusion, and
multi-spectral image fusion. One of the main advantages of using CNNs in image
fusion is the ability to learn features from the input images that are important for
the fusion task. This can lead to more accurate and robust fusion results compared
to traditional methods that rely on hand-crafted features [27]. Additionally, CNN-
based methods are capable of handling large amounts of data, which is important
for many image fusion applications. For feature extraction, the architecture of the CNN typically consists of the following layers:

- Input layer: takes in the input images, which can be multiple images with different resolutions.
- Convolutional layers: extract features from the input images; they consist of a set of filters that are convolved with the input images to produce feature maps.
- Pooling layers: reduce the spatial dimensions of the feature maps, which lowers the computational cost and makes the model more robust to small changes in the image.
- Fully connected layer: combines the features from the convolutional and pooling layers into a single fused representation.
- Output layer: produces the final fused image as output.

Feature extraction using a convolutional neural network (CNN) is thus a process in which the input image is passed through multiple layers of the network to extract important features or representations of the image. The extracted features are then used for tasks such as image classification, object detection, and image fusion; a minimal sketch of such an extractor follows Figure 1.

Figure 1: Process Flow of CNN
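As an illustration of the convolution–ReLU–pooling stack described above, the following PyTorch sketch builds a small feature extractor. The channel counts, kernel sizes, and input shape are our own assumptions, since the paper does not specify an exact architecture.

```python
import torch
import torch.nn as nn

# Minimal CNN feature extractor along the lines described above
# (conv -> ReLU -> pooling); layer sizes are illustrative choices.
class FeatureExtractor(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),                 # f(x) = max(0, x)
            nn.MaxPool2d(2),           # halve the spatial dimensions
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):              # x: (batch, channels, H, W)
        return self.features(x)

# Example: extract feature maps from a stand-in PDE-filtered image.
img = torch.rand(1, 1, 64, 64)
feats = FeatureExtractor()(img)        # shape: (1, 32, 16, 16)
```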


4 Proposed Method
In the case of the proposed PDE-CNN based image fusion method, the input to the CNN is the pair of PDE-filtered multifocus images. The CNN processes these images through
multiple layers, including convolutional layers, activation layers, and pooling layers,
to extract relevant features from the images. These features are then used to fuse the
images and produce the final fused image. The convolutional layers in a CNN consist
of filters that slide over the input image and perform element-wise multiplication
and summation to produce feature maps. The activation layers apply a non-linear
activation function, such as the rectified linear unit (ReLU), to the feature maps
to introduce non-linearity into the network. The pooling layers reduce the spatial
dimensions of the feature maps while preserving the most important information.

Figure 2: Block diagram of PDE-CNN

(1) Pre-process the input images using equations (2.1)–(2.4).

(2) For each CNN layer, the output is given by y = f(W * x + b), where y is the output, f is the activation function, W are the learnable filters, x is the input, and b is the bias term.

(3) The activation function can be the ReLU, f(x) = max(0, x); max pooling is used to reduce the spatial dimensions.

(4) Train a machine learning model, such as a neural network, to optimize the weighting coefficients (w_A and w_B) based on the input images and the desired output.

(5) Use a dataset of input image pairs (A, B) and their corresponding ground-truth fused images (C) for training.


(6) The machine learning model takes the pre-processed input images (A, B) as inputs and outputs the optimized weighting coefficients (w_A, w_B) for each pixel.

(7) The fusion rule for pixel (x, y) in the output fused image C is
$$C(x, y) = w_A(x, y)\, A(x, y) + w_B(x, y)\, B(x, y),$$
where w_A(x, y) and w_B(x, y) are the weighting coefficients for images A and B at pixel (x, y), respectively.

(8) The coefficients are determined from the decision map:
$$w_A(x, y) = \frac{D(x, y)}{D(x, y) + (1 - D(x, y))}, \qquad w_B(x, y) = \frac{1 - D(x, y)}{D(x, y) + (1 - D(x, y))};$$
since the denominator equals one, this reduces to w_A = D and w_B = 1 - D.

(9) The decision map is defined as D(i, j) = M(i, j)/∇u(i, j).

(10) The final decision map is divided into clear, fuzzy, and transition regions based on the weight of image fusion M(i, j): clear regions have M(i, j) = 1, fuzzy regions have M(i, j) = 0, and transition regions have 0 < M(i, j) < 1.

(11) The fused image is obtained from D(i, j) by thresholding, as in the sketch below.
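A minimal sketch of the weighting and thresholding steps (7), (8), and (11) follows, assuming grayscale images normalised to [0, 1]. The function names and threshold cut-offs are hypothetical, as the paper does not state explicit values.

```python
import numpy as np

def threshold_decision(m, grad_u, low=0.33, high=0.66):
    """Build and threshold the decision map of steps (9)-(11).

    m      : fusion-weight map M(i, j) in [0, 1]
    grad_u : gradient magnitude of the image (nonzero), as in D = M / grad(u)
    The cut-offs 'low' and 'high' are our own assumption.
    """
    d = np.clip(m / grad_u, 0.0, 1.0)
    d[d <= low] = 0.0      # fuzzy regions
    d[d >= high] = 1.0     # clear regions
    return d               # intermediate values mark transition regions

def fuse_pair(a, b, d):
    """Pixel-wise fusion rule of steps (7)-(8): C = w_A A + w_B B,
    where w_A = D and w_B = 1 - D after the denominator cancels."""
    return d * a + (1.0 - d) * b

# Example usage on stand-in data.
a, b = np.random.rand(64, 64), np.random.rand(64, 64)
m = np.random.rand(64, 64)                 # stand-in decision weights
grad = np.full_like(a, 1.0)                # stand-in gradient magnitude
fused = fuse_pair(a, b, threshold_decision(m, grad))
```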

5 Results and Discussion


We evaluated our method on a dataset of multifocus images [12, 13] and compared its performance with several state-of-the-art methods [18, 28, 5]. Each quality metric captures a different aspect of fused image quality: the amount of information inherited from the source images (Q_MI), the preservation of salient features (Q_PC), the level of structural information preserved (Q_W), and the level of agreement with human perception (Q_CB). The results show that our method produces fused images with higher resolution, better edge preservation, and fewer artifacts than the other methods.

5.1 Experimental Setup


The experiments were conducted on a system equipped with an Intel Core i7 CPU and 16 GB RAM; the software environment was MATLAB 2021a. The dataset comprises publicly available images captured at various focuses, such as foreground and background or left and right, accessed in June 2023. Before analysis, the images were pre-processed with an anisotropic diffusion PDE filter to eliminate noise and other distortions. The parameters were set to ΔT (time step) = 0.125, c (conductance) = 3, k (iterations) = 20, and λ = 0.25, controlling the level of smoothing. Features were extracted using a local patch size of 3 × 3.
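For illustration, the stated parameters map onto the anisotropic diffusion sketch of Section 2.1 roughly as below; reading the conductance value as the k of equation (2.3) is our assumption, and λ = 0.25 is left unmapped.

```python
import numpy as np

# Stand-in for one source image, normalised to [0, 1].
source_a = np.random.rand(256, 256)

# Pre-processing call with the parameters stated above, reusing the
# anisotropic_diffusion() sketch from Section 2.1 (the mapping of
# "conductance" to k is our assumption, not a statement in the paper).
filtered_a = anisotropic_diffusion(source_a, n_iter=20, k=3.0, dt=0.125)
```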


5.2 Objective Analysis

Figure 3: Comparison of fusion algorithms with the proposed method on the two-clock source images: (a) right-side blurred image, (b) left-side blurred image; fusion results using (c) Gaussian Filtering (GF), (d) Discrete Wavelet Transform (DWT), and (f) the proposed method.

Figure 3 presents a comparative analysis of different fusion algorithms alongside the proposed method for the clock images. Subfigures 3(a) and (b) display the right- and left-side focused images that serve as the input data for the fusion algorithms under evaluation. Subfigures (c), (d), and (f) showcase the results obtained from three fusion algorithms: Gaussian Filtering (GF) [28], Discrete Wavelet Transform (DWT) [18], and the proposed method, respectively. The fusion results are compared visually to assess the effectiveness of each algorithm in preserving image details, enhancing contrast, and reducing artifacts.
Upon visual inspection, it is evident that the proposed method (subfigure (f)) outperforms the other algorithms in terms of image clarity, sharpness, and fidelity to the original clock image, demonstrating superior fusion capability by effectively combining the information from both input images while minimizing distortion and loss of information.

Figure 4: Comparative illustration of the performance of various fusion algorithms alongside our proposed method on the clock-and-plant images: (a) right-side blurred image, (b) left-side blurred image; fusion results using (c) Gaussian Filtering (GF), (d) Discrete Wavelet Transform (DWT), and (f) the proposed method.

Figures 4(a) and (b) display the right- and left-focused images of the plant-and-clock scene; these blurred images serve as the input data for the fusion algorithms under evaluation. Subfigures 4(c), (d), and (f) showcase the results obtained from Gaussian Filtering (GF), Discrete Wavelet Transform (DWT), and our proposed method, respectively, and visually compare the effectiveness of each algorithm in preserving image details, enhancing contrast, and reducing artifacts.
Upon visual inspection, it is apparent that our proposed method (subfigure (f)) outperforms the other algorithms in terms of image clarity, sharpness, and fidelity to the original plant image, demonstrating superior fusion capability by effectively combining the information from both input images while minimizing distortion and loss of information.


Overall, the comparative analysis presented in Figure 4 highlights the advantages of our proposed fusion method over existing techniques, thereby validating its potential for applications requiring high-quality image fusion.
Figure 5 depicts the experimental results obtained from investigating the use of
Optimal Weights (OW) in a sliding 3 × 3 and 5 × 5 window. The figure illustrates
the performance of the proposed method across different window sizes, providing
insights into its effectiveness in image-processing tasks.

Figure 5: Experiment with Optimal Weights (OW) in sliding 3 × 3 and 5 × 5 windows

By analyzing the results presented in Figure 5, it is possible to assess the impact of varying window sizes on the performance of the proposed method. The use of a sliding window allows for localized analysis and adaptation of weights, enabling finer adjustments based on the characteristics of the underlying image content. Through visual examination of the experimental outcomes, it becomes apparent how the choice of window size influences the performance of the OW method. Larger window sizes, such as 5 × 5, provide broader context for weight optimization, potentially leading to better fusion quality.

5.3 Fusion Evaluation


Image fusion evaluation is challenging due to the absence of a reference image, so various metrics have been proposed. We selected four metrics from four categories to evaluate our proposed method comprehensively. These metrics cover information inheritance, feature preservation, structural information preservation, and human perception, and they allowed us to evaluate our method objectively and compare it with other state-of-the-art methods; a sketch computing two of them follows the list.
(1) Normalized Mutual Information (Q_MI) is an information theory-based metric that measures the amount of information in the fused image that is inherited from the source images, i.e. the similarity between the fused image and the source images [2]. The formula for Q_MI is
$$Q_{MI} = \frac{2\, MI(I_1, I_2)}{H(I_1) + H(I_2)}$$
where MI(I_1, I_2) is the mutual information between the source images I_1 and I_2, and H(I) is the entropy of image I.

(2) The Phase Congruency-based metric (Q_PC) is a feature-based metric that measures the quality of the fused image in terms of the image's salient features, such as edges and corners. It is based on the phase congruency measure, which captures the phase information of an image [11]. The formula for Q_PC is
$$Q_{PC} = \frac{1}{n} \sum_{i=1}^{n} PC(F, F_i)$$
where F is the fused image, F_i is the i-th source image, n is the number of source images, and PC(F, F_i) is the phase congruency measure between F and F_i.

(3) The Structural Similarity-based metric (Q_W) measures the level of structural information of the source images that is preserved in the fused image, i.e. the similarity between the fused image and the source images in terms of image structure [7]. The formula for Q_W is
$$Q_W = SSIM(F, I_{avg})$$
where F is the fused image, I_avg is the average of the source images, and SSIM(F, I_avg) is the structural similarity index measure between F and I_avg.

(4) The Contrast-based metric (Q_CB) is a human perception-based metric that utilizes the major features of the human visual system model. It measures the contrast between the fused image and the source images [7]. The formula for Q_CB is
$$Q_{CB} = \frac{1}{n} \sum_{i=1}^{n} CB(F, F_i)$$
where F is the fused image, F_i is the i-th source image, n is the number of source images, and CB(F, F_i) is the contrast-based measure between F and F_i.
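The following sketch computes Q_MI and Q_W as defined above, assuming source and fused images normalised to [0, 1]; scikit-image's SSIM implementation stands in for the SSIM term, and the histogram bin count is an illustrative choice.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def entropy(img, bins=256):
    """Shannon entropy H(I) of the intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=256):
    """MI(I1, I2) estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins,
                                 range=[[0.0, 1.0], [0.0, 1.0]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0                 # avoid log(0); marginals are > 0 there
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

def q_mi(i1, i2):
    """Q_MI = 2 MI(I1, I2) / (H(I1) + H(I2)), following the formula above."""
    return 2.0 * mutual_information(i1, i2) / (entropy(i1) + entropy(i2))

def q_w(fused, sources):
    """Q_W = SSIM(F, I_avg), with I_avg the pixel-wise mean of the sources."""
    i_avg = np.mean(np.stack(sources), axis=0)
    return ssim(fused, i_avg, data_range=1.0)
```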

Experiments were performed on more than 10 image pairs.


Table 1 presents the results of an objective evaluation of various methods for fusing the 'plant clock' image pair. The evaluation metrics are Q_MI (Normalized Mutual Information), Q_PC (the Phase Congruency-based metric), Q_W (the Structural Similarity-based metric), and Q_CB (the Contrast-based metric), as defined in Section 5.3. The results demonstrate the performance of each fusion method in preserving image quality, maintaining structural integrity, and retaining relevant information from the original images.


Table 1: Results of an objective evaluation of several approaches for 'plant clock' image fusion

Methods    Q_MI      Q_PC     Q_W      Q_CB
GF         8.04789   0.7077   0.6231   0.73914
DWT        8.6234    0.6745   0.5879   0.7436
CNN        9.1123    0.7025   0.6741   0.7493
CNN-PDE    9.1512    0.7146   0.6723   0.7423

From Table 1, it is observed that the CNN-based methods (CNN and CNN-PDE) generally achieve higher scores across all evaluation metrics than traditional methods such as GF and DWT. This suggests that CNN-based approaches are more effective at capturing and preserving important features during the fusion process.
Additionally, the CNN-PDE method shows slightly improved performance over the standard CNN approach, indicating the beneficial impact of pre-processing with the anisotropic diffusion PDE.

Table 2: Results of an objective evaluation of several approaches for 'two clocks' image fusion

Methods    Q_MI     Q_PC     Q_W       Q_CB
GF         8.0421   0.7101   0.61452   0.7458
DWT        8.1243   0.6789   0.5699    0.7396
CNN        9.0912   0.7002   0.6213    0.7489
CNN-PDE    9.1248   0.7796   0.6278    0.7768

Table 2 presents the results of an objective evaluation of various methods for fusing the 'two clocks' image pair, using the same metrics Q_MI, Q_PC, Q_W, and Q_CB defined in Section 5.3. The methods compared are Gaussian Filtering (GF), Discrete Wavelet Transform (DWT), a Convolutional Neural Network (CNN), and the CNN with PDE pre-processing (CNN-PDE).
Across the evaluation metrics, the table demonstrates the performance of each fusion method in maintaining image quality, preserving structural details, and capturing relevant information from the original images. Both the CNN and CNN-PDE methods outperform the traditional methods GF and DWT on all evaluation metrics, indicating the effectiveness of CNN-based approaches in capturing and preserving important features during the fusion process.


Furthermore, the CNN-PDE method shows a marked improvement over the standard CNN approach, particularly on the Q_PC and Q_CB metrics. This suggests that pre-processing with partial differential equations enhances the overall quality of the fused images.

Figure 6: Comparison of implemented approaches by average values using a sliding window of 3 × 3

Figure 6 depicts a bar chart comparing the methods by their average metric values computed using a sliding window of size 3 × 3. The chart provides a visual representation of the performance of each method across the quality metrics and enables a quick assessment of how each method fares in accuracy and consistency. The differences in bar heights show that the proposed fusion methodology outperforms previous methods such as GF, DWT, and CNN on the evaluation metrics Q_MI, Q_PC, Q_W, and Q_CB.
Figure 7 illustrates a comparison of the implemented approaches when the average values are computed over a sliding window of size 5 × 5, taking a larger spatial context into account; a local patch size of 5 × 5 is used in this experiment. The results provide strong evidence that fusion methodologies such as CNN and the proposed approach generate promising evaluation scores compared with DWT and GF, which reported the lowest scores.


Figure 7: Comparison of implemented approaches by average values using a sliding window of 5 × 5

Our experimental results demonstrate that this method can effectively enhance
the resolution and quality of fused images compared to traditional methods.

6 Conclusion
In conclusion, this study has presented a comprehensive analysis of various image
fusion methods and their performance evaluation using objective quality metrics.
Through experimentation and analysis, we have demonstrated the effectiveness of
different fusion algorithms in enhancing image quality and preserving important
visual information.
The results obtained from the objective evaluation provide valuable insights into
the strengths and weaknesses of each method, enabling researchers to make informed
decisions when selecting the most appropriate technique for specific applications.
Furthermore, the comparison of average values using sliding windows of different
sizes has shed light on the impact of window size on the accuracy and consistency
of the fusion process.
Overall, this research contributes to the advancement of image fusion techniques
by providing a systematic evaluation framework and benchmarking methodology. It
lays the groundwork for future research endeavors aimed at developing more robust
and efficient fusion algorithms for various imaging applications.


As the field of image fusion continues to evolve, there remains ample opportunity
for further exploration and innovation. Future work may focus on exploring novel
fusion strategies, optimizing parameter settings, and integrating advanced machine-
learning techniques to further enhance fusion performance and address emerging
challenges in image processing and computer vision.

Authors' contributions. All authors contributed equally to this research.

References
[1] U. Ali, I.H. Lee, M.T. Mahmood, Incorporating structural prior for depth reg-
ularization in shape from focus, Computer Vision and Image Understanding,
227 (2023), article 103619. https://doi.org/10.1016/j.cviu.2022.103619.

[2] Y. Chen, R.S. Blum, A new automated quality assessment algorithm for image fusion, Image and Vision Computing, 27, no. 10 (2009), 1421–1432. DOI: 10.1016/j.imavis.2007.12.002.

[3] C. Cheng, T. Xu, X.-J. Wu, MUFusion: A general unsupervised image fu-
sion network based on memory unit, Information Fusion, 92 (2023), 80–92.
10.1016/j.inffus.2022.11.010.

[4] D. Cui, Image segmentation algorithm based on partial differential equation, Journal of Intelligent and Fuzzy Systems, 40, no. 4 (2021), 5945–5952.

[5] R.C. Gonzalez, R.E. Woods, S.L. Eddins, Digital image processing using MAT-
LAB, Gatesmark Publishing, 2020.

[6] L. Guo, L. Liu, A Perceptual-Based Robust Measure of Image Focus, IEEE Signal Processing Letters, 29 (2022), 2717–2721. 10.1109/lsp.2023.3235647.

[7] M. Hossny, S. Nahavandi, D. Creighton, A. Bhatti, Image fusion performance metric based on mutual information and entropy driven quadtree decomposition, Electronics Letters, 46, no. 18 (2010), 1266–1268. DOI: 10.1049/el.2010.1778.

[8] C.-G. Im, D.-M. Son, H.-J. Kwon, S.-H. Lee, Tone Image Classification and
Weighted Learning for Visible and NIR Image Fusion, Entropy, 24, no. 10
(2022), article 1435. 10.3390/e24101435.

[9] H. Kaur, D. Koundal, V. Kadyan, Image Fusion Techniques: A Survey, Archives of Computational Methods in Engineering, 28, no. 7 (2021), 4425–4447. 10.1007/s11831-021-09540-7.


[10] L. Li, C. Li, X. Lu, H. Wang, D. Zhou, Multi-focus image fusion with convo-
lutional neural network based on Dempster-Shafer theory, Optik, 272 (2023)
article 170223. 10.1016/j.ijleo.2022.170223.

[11] S. Liu, W. Peng, W. Jiang, Y. Yang, J. Zhao, Y. Su, Multi-focus image fusion dataset and algorithm test in real environment, Frontiers in Neurorobotics, 16 (2022). 10.3389/fnbot.2022.1024742.

[12] Y. Liu, L. Wang, J. Cheng, C. Li, X. Chen, Multi-focus image fusion: A survey of the state of the art, Information Fusion, 24 (2020), 71–91.

[13] M. Nejati, Lytro Multi-focus Image Dataset, ResearchGate, January 2016.

[14] Y. Niu, L. Shen, X. Huo, G. Liang, Multi-Objective Wavelet-Based Pixel-Level Image Fusion Using Multi-Objective Constriction Particle Swarm Optimization, Studies in Computational Intelligence (2010), 151–178. 10.1007/978-3-642-05165-4_7.

[15] X. Pan, Q. Zhao, J. Liu, Edge extraction and reconstruction of terahertz image using simulation evolutionary with the symmetric fourth order partial differential equation, Optoelectronics Letters, 17, no. 3 (2023), 187–192.

[16] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffu-
sion, Proceedings of IEEE Computer Society Workshop on Computer Vision,
November 1987, 16–22.

[17] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffu-
sion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, no.
7 (1990), 629–639, 10.1109/34.56205.

[18] V. Rajinikanth, S.C. Satapathy, N. Dey, R. Vijayarajan, DWT-PCA Image Fusion Technique to Improve Segmentation Accuracy in Brain Tumor Analysis, in Lecture Notes in Electrical Engineering, 2018, 453–462. 10.1007/978-981-10-7329-8_46.

[19] J. Sliz and J. Mikulka, Advanced image segmentation methods using partial
differential equations: A concise comparison, 2016 Progress in Electromagnetic
Research Symposium (PIERS), IEEE, 2016.

[20] G.J. Trivedi, R. Sanghvi, Medical Image Fusion Using CNN with Automated
Pooling, Indian Journal Of Science And Technology, 15, no. 42 (2022), 2267–74.
10.17485/ijst/v15i42.1812.


[21] G. Trivedi and R. Sanghvi, Fusesharp: A multi-image focus fusion method using
discrete wavelet transform and unsharp masking, Journal of Applied Mathemat-
ics & Informatics, 41 (5) (2023), 1115–1128. DOI: 10.14317/jami.2023.1115. Zbl
07793714.

[22] G. Trivedi and R. Sanghvi, Optimizing image fusion using modified princi-
pal component analysis algorithm and adaptive weighting scheme, International
Journal of Advanced Networking and Applications, 15, no. 01 (2023), 5769–
5774. 10.35444/ijana.2023.15103.

[23] G. Trivedi and R. Sanghvi, Hybrid Model for Infrared and Visible Im-
age Fusion, Annals of the Faculty of Engineering Hunedoara, 21, no. 3
(2023), 167–173. https://www.proquest.com/scholarly-journals/hybrid-model-
infrared-visible-image-fusion/docview/2867370935/se-2.

[24] G. Trivedi and R. Sanghvi, Novel approach to multi-modal image fusion using
modified convolutional layers, Journal of Innovative Image Processing, 5 (3)
(2023), p. 229. DOI: 10.36548/jiip.2023.3.002.

[25] G. Trivedi and R. Sanghvi, MOSAICFUSION: Merging modalities with Partial differential equation and Discrete cosine transformation, Journal of Applied & Pure Mathematics, 5, no. 5–6 (2023), 389–406. DOI: 10.23091/japm.2023.3892.

[26] G. Trivedi and R. Sanghvi, Automated multimodal fusion with PDE preprocess-
ing and learnable convolutional pools, ADBU-Journal of Engineering Technol-
ogy, 13, no. 1 (2024), p. 0130104066.

[27] G.J. Trivedi and R. Sanghavi, MSCNN: Multi-Sensor Image Fusion Using Dual channel CNN, Mathematica Applicanda (Matematyka Stosowana), 51, no. 2 (2023), 165–182. 10.14708/ma.v51i2.7204. MR4713481.

[28] G.T. Vasu, P. Palanisamy, Gradient-based multi-focus image fusion using foreground and background pattern recognition with weighted anisotropic diffusion filter, Signal, Image and Video Processing, 2023. 10.1007/s11760-022-02470-2.

[29] G. Xiao, D.P. Bavirisetti, G. Liu, X. Zhang, Image Fusion, Springer, 2020.
10.1007/978-981-15-4867-3.

[30] G. Zhang, R. Nie, J. Cao, L. Chen, Y. Zhu, FDGNet: A pair feature differ-
ence guided network for multimodal medical image fusion, Biomedical Signal
Processing and Control, 81 (2023), article 104545. 10.1016/j.bspc.2022.104545.

[31] L. Zhou, A Gradient-based Multi-focus Image Fusion Method Using Multi-wavelets Transform, in 2012 International Conference on Industrial Control and Electronics Engineering, 2012. 10.1109/icicee.2012.110.


[32] T. Zhou, Q. Li, H. Lu, Q. Cheng, X. Zhang, GAN review: Models and
medical image fusion applications, Information Fusion, 91 (2023), 134–148.
10.1016/j.inffus.2022.10.017.

Gargi J Trivedi
The Charutar Vidyamandal University,
Department of Applied Science & Humanities,
G H Patel College of Engineering & Technology,
Vallabh Vidhyanagar-388120, India.
e-mail: [email protected]

Rajesh Sanghvi
The Charutar Vidyamandal University,
Department of Applied Science & Humanities,
G H Patel College of Engineering & Technology,
Vallabh Vidhyanagar-388120, India.
e-mail: [email protected]

License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Received: July 2, 2023; Accepted: April 2, 2024; Published: April 19, 2024.
