Neighbour Local Variability For Multi-Focus Image Fusion

Signal & Image Processing: An International Journal (SIPIJ) Vol.11, No.6, December 2020
DOI: 10.5121/sipij.2020.11603
ABSTRACT
The goal of multi-focus image fusion is to integrate images with different objects in focus so as to obtain a single image with all objects in focus. In this paper, we propose a new method based on neighbour local variability (NLV) to fuse multi-focus images. At each pixel, the method uses the local variability calculated from the quadratic difference between the value of the pixel and the values of all pixels in its neighbourhood. It expresses the behaviour of the pixel with respect to its neighbours. The variability preserves edges because it detects sharp intensity variations in the image. The proposed fusion weights each pixel by the exponential of its local variability. The quality of this fusion depends on the size of the neighbourhood region considered, which itself depends on the variance and the size of the blur filter. We therefore begin by modelling the neighbourhood size as a function of the variance and the size of the blur filter. We compare our method to other methods given in the literature and show that it gives better results.
KEYWORDS
Neighbour Local Variability; Multi-focus image fusion; Root Mean Square Error (RMSE)
1. INTRODUCTION
The limitation of the depth of field of optical lenses often makes it difficult to capture an image with all objects in the scene in focus. Only objects within the depth of field are sharp, while other objects are out of focus. Merging multi-focus images is the solution to this problem. Various approaches have been proposed in the literature, and they can be separated into two kinds: spatial domain methods and multi-scale fusion methods. Spatial domain fusion is performed directly on the source images: its techniques operate directly on the pixels of the image. Examples of spatial domain methods include the mean, principal component analysis (PCA) [1], the maximum selection rule, bilateral gradient-based methods [2], and the guided image filter (GIF) method [3]. A drawback of spatial domain approaches is that they can produce spatial distortion in the fused image. Spatial distortion is handled very well by multi-scale approaches to image fusion, in which the merging is done on the source images after their decomposition into several scales. Examples of such transform domain techniques include the discrete wavelet transform (DWT) [4]-[9], Laplacian pyramid image fusion [10]-[17], the discrete cosine transform with variance calculation (DCT+var) [18], and a method based on saliency detection (SD) [19].

The objective of this work is to propose a multi-focus image fusion at the pixel level based on neighbour local variability (NLV). It consists in calculating, for each pixel, the quadratic distance that separates it from the other pixels belonging to its neighbourhood. This expresses the behaviour of the pixel with respect to all the pixels in its neighbourhood. The variability preserves edges because it detects sharp intensity variations in the image. The fusion of each pixel is given by a linear model of the values of the pixel in each image, weighted by the exponential of its local variability.
The precision of this fusion depends on the width of the neighbourhood region considered around each pixel. To determine the optimal width, we minimize the RMSE, and we propose a model giving this width as a function of the variance and the size of the blur filter. A comparative study of our method against other existing methods in the literature (such as DWT and LP-DWT) was carried out and showed that our method gives the best result in terms of the root mean square error (RMSE).
This paper is organized as follows. Section 2 presents the stages of the proposed fusion process and a model giving the size of the neighbourhood. In Section 3, we study the experimental results and compare our method to some recent methods. Section 4 gives the conclusion of this work. In Section 5 (Annex 1), we give the mathematical details showing the property of the local variability.
2. THE PROPOSED METHOD

In this work, we give a novel fusion method where each pixel of each image is weighted by the exponential of its neighbour local variability (NLV). We calculate the NLV from the quadratic difference between the value of a pixel and all the pixel values in its neighbourhood. The idea comes from the fact that the variation of pixel values in a blurred region is smaller than in a focused region. We use the neighbourhood of size "a" of a pixel (x, y), i.e. the pixels (x+m, y+n) with (m, n) ∈ [−a, a]² \ {(0,0)}. For example, the neighbourhood of size a = 1 contains: (x−1, y−1), (x−1, y), (x−1, y+1), (x, y−1), (x, y+1), (x+1, y−1), (x+1, y), (x+1, y+1).

The NLV of the pixel (x, y) in the k-th source image is defined by

$$v_{a,k}(x,y) = \sqrt{\frac{1}{R}\sum_{m=-a}^{a}\sum_{n=-a}^{a}\left|I_k(x,y) - I'_k(x+m, y+n)\right|^2} \qquad (1)$$

where k is the index of the k-th source image (k = 1, 2 in the two-image case), a is the size of the neighbourhood, N1 × N2 is the image size,

$$I'_k(x+m, y+n) = \begin{cases} I_k(x+m, y+n), & \text{if } 1 \le x+m \le N_1 \text{ and } 1 \le y+n \le N_2 \\ I_k(x,y), & \text{otherwise,} \end{cases}$$

$$R = (2a+1)^2 - \mathrm{card}(S), \qquad S = \left\{(m,n) \in [-a,a]^2 \setminus \{(0,0)\} : I'_k(x+m, y+n) = I_k(x,y)\right\}.$$

In Annex 1, we show that this local variability is small when the location lies in a blurred area (B1 or B2).
Consider M original source images, I1, ..., IM, with different focus and the same size (N1 × N2). The fusion proceeds in two steps.

Step 1: For each pixel of each source image, we calculate the neighbour local variability va,k(x, y) defined in (1).
Step 2: The proposed fusion for the pixel (x, y) is F(x, y), calculated by the following model:

$$F(x,y) = \frac{\sum_{k=1}^{M} \exp\left(v_{a,k}(x,y)\right) I_k(x,y)}{\sum_{k=1}^{M} \exp\left(v_{a,k}(x,y)\right)} \qquad (2)$$
We remark that the method depends on the size "a". To determine the value of "a", we find the value minimizing the Root Mean Square Error (RMSE), defined in Section 3. We show that the optimal value of "a" depends on the parameters of the blurring filter.
The choice of the neighbourhood size "a" used in the NLV method depends on the variance (v) and the size (s) of the blurring filter. Our problem is to build a model that gives the value of "a" according to "v" and "s". For this, we take a sample of 1000 images that we blur using a Gaussian filter with different values of v and s (v = 1, 2, 3, ..., 35 and s = 1, 2, 3, ..., 20). After that, for each image blurred with parameters "v" and "s", we apply our fusion method with different values of "a" (a = 1, 2, ..., 17) and determine the value of "a" that gives the minimum RMSE, denoted by aI(v, s). We then take the mean of aI(v, s) over the 1000 images, denoted ā(v, s); this is justified because the coefficient of variation is smaller than 0.1.
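As an illustration, the per-image search for the optimal size can be sketched as follows, where nlv_fusion is the sketch from Section 2 and img1, img2, ref are hypothetical arrays holding the two blurred versions and the reference image.

    import numpy as np

    def best_neighbourhood_size(img1, img2, ref, a_max=17):
        # Scan a = 1..a_max and return the size minimising the RMSE of the
        # fused result against the reference, i.e. a_I(v, s) for this image.
        def rmse(x, y):
            return np.sqrt(np.mean((x - y) ** 2))
        scores = {a: rmse(ref, nlv_fusion([img1, img2], a))
                  for a in range(1, a_max + 1)}
        return min(scores, key=scores.get)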
Firstly, we studied the variation of "a" according to the variance "v" for each fixed size "s" of the blurring filter. We observed that this variation is logarithmic (see, for example, s = 8 in Fig. 4). Using nonlinear regression, we obtained the model:

$$a = 2.1096 \ln(v) + 2.8689 \qquad (3)$$
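The logarithmic fit of (3) can be reproduced with SciPy's curve_fit. Since the measured averages ā(v, 8) are not reproduced in this extract, the sketch below generates synthetic stand-in data around the reported model, for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    v = np.arange(1, 36, dtype=float)  # variances v = 1, ..., 35

    # Synthetic stand-in for the averaged optimal sizes a_bar(v, 8):
    a_bar = 2.1096 * np.log(v) + 2.8689 + rng.normal(0.0, 0.1, v.size)

    def log_model(v, c1, c2):
        return c1 * np.log(v) + c2

    (c1, c2), _ = curve_fit(log_model, v, a_bar)
    print(c1, c2)  # close to the reported 2.1096 and 2.8689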
Fig. 4. Graph of "a" against the variance "v" of the blurring filter, for s = 8.
Fig. 5. Graph of c1(s). Fig. 6. Graph of c2(s).

By fitting a model for c1(s) and a model for c2(s) and introducing these models in (3), we obtain the following general model:

$$a(v, s) = c_1(s)\ln(v) + c_2(s) \qquad (4)$$

As "a" is an integer, we have two choices: either the floor of a(v, s), denoted ⌊a(v, s)⌋ = max{m ∈ ℤ | m ≤ a(v, s)}, or the ceiling of a(v, s), denoted ⌈a(v, s)⌉ = min{n ∈ ℤ | n ≥ a(v, s)}. Since the RMSE values for the two choices are only very slightly different, we can choose either of them; we use a = ⌈a(v, s)⌉ in the remaining part of this paper.
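Evaluating the model then reduces to a few lines. In the sketch below, c1_of_s and c2_of_s are placeholders for the coefficient models of Figs. 5 and 6, whose exact expressions are not reproduced in this extract, so the example falls back on the s = 8 coefficients of (3).

    import math

    def neighbourhood_size(v, s, c1_of_s, c2_of_s):
        # Ceiling of a(v, s) = c1(s) ln(v) + c2(s), as chosen in the text.
        return math.ceil(c1_of_s(s) * math.log(v) + c2_of_s(s))

    # Example with the s = 8 coefficients of Eq. (3):
    a = neighbourhood_size(10, 8, lambda s: 2.1096, lambda s: 2.8689)  # a = 8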
To validate this model, we applied it to 100 images (we generated 100 pairs of multi-focus images with different variance and blur filter size values) and the results were as good as expected. To use the NLV method, one must first estimate the variance and the size of the blur filter from the blurred image; several articles give methods for estimating the blur filter variance and for blur detection, as in [23]-[27]. Subsequently, we developed another method in which we combined the Laplacian pyramid with the NLV method, the Laplacian pyramid being used with NLV as the selection rule; we denote it LP-NLV.
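The sketch below is one possible reading of LP-NLV, using OpenCV pyramids and the nlv function from Section 2: at each pyramid level, each coefficient is taken from the source whose local variability is larger. It is an interpretation of the selection rule, not the authors' implementation.

    import cv2
    import numpy as np

    def lp_nlv_fuse(img1, img2, levels=4, a=2):
        def laplacian_pyramid(img):
            g = [img.astype(np.float32)]
            for _ in range(levels):
                g.append(cv2.pyrDown(g[-1]))
            return [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
                    for i in range(levels)] + [g[-1]]

        p1, p2 = laplacian_pyramid(img1), laplacian_pyramid(img2)
        # NLV as the selection rule: keep the coefficient of the source
        # showing the larger local variability at that position and level.
        fused = [np.where(nlv(l1, a) >= nlv(l2, a), l1, l2)
                 for l1, l2 in zip(p1, p2)]
        out = fused[-1]
        for lap in reversed(fused[:-1]):
            out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
        return out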
3. EXPERIMENTAL STUDY
We applied the NLV method to a dataset of images [26] using Matlab 2013a. These images were blurred using a Gaussian filter with many values of variance and size. To simplify the reading of the article, we present only two examples in 256×256 format (N1 = N2 = 256): the "bird" image (Fig. 1) and the "bottle" image (Fig. 2). Each example consists of two images with different focus and a reference image.
We then compare the NLV method to other methods existing in the literature: the PCA method [1], the discrete wavelet transform (DWT) method [6], the Laplacian pyramid methods LP-PCA [15] and LP-DWT [17], and the bilateral gradient (BG) method [2]. For that, we used the following frequently used evaluation criterion:
Root Mean Square Error (RMSE)
RMSE measures how much the pixel values of the fused image F deviate from those of the reference image R. It is defined as follows:

$$\mathrm{RMSE} = \sqrt{\frac{1}{rc}\sum_{x=1}^{r}\sum_{y=1}^{c}\left(R(x,y) - F(x,y)\right)^2} \qquad (5)$$

where R is the reference image, F is the fused image, r × c is the size of the input image, and (x, y) denotes the pixel location. A smaller value of RMSE indicates a better fusion result; if RMSE equals zero, the fused image is identical to the reference image.
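In code, (5) is direct; a minimal sketch assuming the reference and fused images are float arrays of the same size:

    import numpy as np

    def rmse(reference, fused):
        # Eq. (5): root of the mean squared pixel difference.
        return float(np.sqrt(np.mean((reference - fused) ** 2)))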
For the two images presented in this paper, blurred with variance v = 10 and blurring filter size s = 5, the model (4) gives the neighbourhood sizes "a" = 5 (floor) and "a" = 6 (ceiling). Here, we use "a" = 6 because it yields a smaller RMSE than "a" = 5, although the two RMSE values are very slightly different.
We found that the NLV method gives better fusion than the other methods; see Fig. 1. Table 1 gives the RMSE values calculated for the ten methods. As we can see in Table 1 for the "bird" image, the smallest RMSE is given by the NLV method, the second smallest by LP-NLV, and the third by LP-PCA. The NLV method is therefore the best among the above methods, and LP-NLV is better than LP-PCA and LP-DWT.
We found that the NLV method also performs better than the other methods on the second example; see Fig. 2. To confirm this visual result, we calculated the RMSE for the ten methods; see Table 2. Ranking the methods by increasing RMSE, the smallest value is obtained by NLV, the second smallest by LP-NLV, and the third smallest by LP-PCA.
According to the RMSE evaluation measure, Table 3 gives the mean and standard deviation of the RMSE for the considered methods applied to 150 images.
Table 3. Statistical parameters of the sample (150 images)
Table 3 shows that the proposed method (NLV) has the smallest mean RMSE. The RMSE histograms for the 150 images obtained by the different methods (Figures 3, 4, 5, 6, 7, 8 and 9) show, for almost all methods, that the RMSE values are approximately symmetrically centred around the mean value.

In order to compare the proposed method to the other methods analytically, an analysis of variance (ANOVA) with dependent samples (dependence by image) was performed. The software R gives the ANOVA table shown in Table 4. As Pr(>F) in Table 4 is smaller than 1%, the methods are significantly different. The Newman-Keuls test is then performed to compare the methods two-by-two, giving groups whose mean RMSE is not significantly different. The software R shows the results of the test as follows:
$groups
RMSE groups
SD 12.6072900 a
BG 11.0447767 b
PCA 8.7139600 c
DWT 4.1941660 d
DCT_var 2.8395233 e
GIF 2.5146353 e
LP_DWT 2.0496413 f
LP_PCA 1.9954953 f
LP_NLV 1.3446913 g
NLV 0.5921593 h
From Table 5, the methods are all significantly different, except that DCT_var and GIF form group "e" and LP_DWT and LP_PCA form group "f". The proposed NLV method, alone in group "h", has the smallest mean RMSE (0.5922) and is significantly different from all the other methods. We conclude that the proposed method gives better results than the other methods.
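The analysis above was done in R. For readers who prefer Python, an equivalent repeated-measures ANOVA could be sketched as follows; rmse_long is a hypothetical long-format table with one row per (image, method) pair, and the file name is a placeholder.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rmse_long = pd.read_csv("rmse_per_image.csv")  # columns: image, method, RMSE
    result = AnovaRM(rmse_long, depvar="RMSE", subject="image",
                     within=["method"]).fit()
    print(result)  # F statistic and Pr(>F) for the method effect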
4. CONCLUSION
In this paper, we proposed an image fusion method based on neighbour local variability (NLV) and described it in detail. Applying this method to 150 images gives a significant improvement in image fusion, both visually and quantitatively, compared to other fusion methods. We also used the Laplacian pyramid with NLV as the selection criterion, LP-NLV. Based on the experimental results, we notice that LP-NLV is better than LP-DWT and DWT.
The proposed method has the advantage of taking into account the variability between each pixel and its neighbours, so that the coefficients of pixels located in the focused part carry more weight. It can be extended in several directions:

1) To multimodal images, used in particular in medicine (scanner, ultrasound, radiography, etc.), to reveal the presence of certain cancer cells seen in one image and not visible in another.

2) To the food industry, where cameras are used to control the quality of products: each camera targets one of many objects placed on a conveyor belt to detect an anomaly. To obtain a photo containing all the objects clearly, the proposed fusion method, which gives more details, can be used.

As future work:

• We intend to extend our method to colour images, because colour conveys additional information.

• Encouraged by these results, we plan to extend the NLV method to the fusion of more than two images, taking into account the local variability of each image (intra-variability) and the variability between the images (inter-variability). Inter-variability can detect "abnormal pixels" among the images.
REFERENCES
[1] Naidu, V.P.S. and Raol, J.R. (2008) “Pixel-level Image Fusion using Wavelets and Principal Component Analysis”, Defence Science Journal, Vol. 58, No. 3, pp. 338-352.
[2] Tian, J., Chen, L., Ma, L., Yu, W. (2011) “Multi-focus image fusion using a bilateral gradient-based sharpness criterion”, Optics Communications, Vol. 284, pp. 80-87.
[3] Zhan, K., Teng, J., Li, Q., Shi, J. (2015) “A novel explicit multi-focus image fusion method”, Journal
of Information Hiding and Multimedia Signal Processing, vol. 6, no. 3, pp.600-612.
[4] Mallat, S.G. (1989) “A Theory for multiresolution signal decomposition: The wavelet
representation”, IEEE Trans. Pattern Anal. Mach. Intel., 11(7), 674-93.
[5] Pajares, G., Cruz, J.M. (2004) “A Wavelet-Based Image Fusion Tutorial”, Pattern Recognition 37.
Science Direct.
[6] Guihong, Q., Dali, Z., Pingfan, Y. (2001) “Medical image fusion by wavelet transform modulus
maxima”. Opt. Express 9, pp. 184-190.
[7] Indhumadhi, N., Padmavathi, G., (2011) “Enhanced Image Fusion Algorithm Using Laplacian
Pyramid and Spatial Frequency Based Wavelet Algorithm”, International Journal of Soft Computing
and Engineering (IJSCE). ISSN: 2231-2307, Vol. 1, Issue 5.
[8] Sabre, R. Wahyuni, I.S, (2020) “Wavelet Decomposition and Alpha Stable”, Signal and Image
Processing (SIPIJ), Vol. 11, No. 1. pp. 11-24.
[9] Jinjiang Li , Genji Yuan and Hui Fan (2019) “Multifocus Image Fusion Using Wavelet-Domain-
Based Deep CNN”, Computational Intelligence and Neuroscience, Vol. 2019 Article ID 4179397 |
https://doi.org/10.1155/2019/4179397
[10] Burt, P.J., Adelson, E.H. (1983) “The Laplacian Pyramid as a Compact Image Code”, IEEE
Transactions on communication, Vol.Com-31, No 40.
[11] Burt, P.J. (1984) “The Pyramid as a Structure for Efficient Computation. Multiresolution Image
Processing and Analysis”, A. Rosenfeld, Ed., Springer-Verlag. New York.
[12] Burt, P.J., Kolezynski, R.J. (1993) “Enhanced Image Capture Through Fusion”, in: International
Conference on Computer Vision, pp. 173-182.
[13] Wang, W., Chang, F. (2011) “A Multi-focus Image Fusion Method Based on Laplacian Pyramid”,
Journal of Computers, Vol.6, No 12.
[14] Zhao, P., Liu, G., Hu, C., and Huang, H. (2013) “Medical image fusion algorithm on the Laplace-PCA”, Proc. 2013 Chinese Intelligent Automation Conference, pp. 787-794.
[15] Verma, S., Kaur, K., Kumar, M. (2016) “Hybrid image fusion algorithm using Laplacian Pyramid and PCA method”, Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies.
[16] Wahyuni, I.S, Sabre, R. (2019) “Multifocus Image Fusion Using Laplacian Pyramid Technique Based
on Alpha Stable Filter”, CRASE Vol. 5, No. 2. pp. 58-62.
[17] Wahyuni, I.S, Sabre, R. (2016) “Wavelet Decomposition in Laplacian Pyramid for Image Fusion”,
International Journal of Signal Processing Systems Vol. 4, No. 1. pp. 37-44.
[18] Haghighat, M.B.A, Aghagolzadeh, A., Seyedarabi, H. (2010) “Real-time fusion of multifocus images
for visual sensor networks”. Machine vision and image processing (MVIP), 2010 6th Iranian. 2010.
[19] Bavirisetti, D.P., and Dhuli, R.(2016) ‘‘Multi-focus image fusion using multi-scale image
decomposition and saliency detection’’, Ain Shams Eng. J., to be published. [Online]. Available:
http://dx.doi.org/ 10.1016/j.asej.2016.06.011.
[20] Pentland, A. (1987) “A new sense for depth of field”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 523-531.
[21] Nayar, S.K.(1992) ”Shape from Focus System”, Proc. of IEEE Conf. Computer Vision and Pattern
Recognition, pp. 302-308.
[22] Gonzalez, R.C., Woods, R.E. (2002) “Digital Image Processing”, 2nd edition, Prentice Hall.
[23] Liu,R., Li, Z., Jia, J.(2008) “Image Partial Blur Detection and Classification”, Computer Vision and
Pattern Recognition, CVPR 2008. IEEE Conference DOI: 10.1109/CVPR.2008.4587465
[24] Aslantas, V. (2007) “A depth estimation algorithm with a single image”, Optics Express, Vol. 15, Issue 8, OSA Publishing.
[25] Elder, J.H., Zucker, S.W. (1998) “Local Scale Control for Edge Detection and Blur Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 7.
[26] www.rawsamples.ch. Accessed: 15 November 2017.
[27] Kumar, A., Paramesran, R., Lim, C. L., and Dass, S.C. (2016) “Tchebichef moment based restoration
of Gaussian blurred images”. Applied Optics, Vol. 55, Issue 32, pp. 9006-9016.
5. ANNEX 1
Consider, without loss of generality, that we have a pixel (x, y) that is in focus in image I1 and blurred in image I2, as in Fig. 1.

Fig. 1. Two multi-focus images; the yellow part is the blurred area and the white part is the clear (focused) area.

The neighbour local variabilities of images I1 and I2 are, following the definition in (1):
$$v_{a,1}(x,y) = \sqrt{\frac{1}{R}\, r_1(x,y)} \quad \text{and} \quad v_{a,2}(x,y) = \sqrt{\frac{1}{R}\, r_2(x,y)}$$

where

$$r_1(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|I_1(x,y) - I_1(x+(m-a), y+(n-a))\right|^2 \qquad (6)$$

$$r_2(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|I_2(x,y) - I_2(x+(m-a), y+(n-a))\right|^2 \qquad (7)$$
Let IR be the reference image of the multi-focus images I1 and I2. Moreover, it is shown in [20] and [21] that a blurred image can be seen as the convolution product of the reference image with a Gaussian filter:

$$I_1(x,y) = \begin{cases} (w_1 * I_R)(x,y), & (x,y) \in B_1 \\ I_R(x,y), & (x,y) \notin B_1 \end{cases} \qquad I_2(x,y) = \begin{cases} (w_2 * I_R)(x,y), & (x,y) \in B_2 \\ I_R(x,y), & (x,y) \notin B_2 \end{cases} \qquad (8)$$

where
$$w_1(k,l) = \frac{\exp\left(-\frac{k^2+l^2}{2\sigma_1^2}\right)}{\sum_{k'=-s_1}^{s_1}\sum_{l'=-s_1}^{s_1}\exp\left(-\frac{k'^2+l'^2}{2\sigma_1^2}\right)}, \quad (k,l) \in [-s_1, s_1]^2,$$

$$w_2(k,l) = \frac{\exp\left(-\frac{k^2+l^2}{2\sigma_2^2}\right)}{\sum_{k'=-s_2}^{s_2}\sum_{l'=-s_2}^{s_2}\exp\left(-\frac{k'^2+l'^2}{2\sigma_2^2}\right)}, \quad (k,l) \in [-s_2, s_2]^2,$$

and
$$(w_1 * I_R)(x,y) = \sum_{k=-s_1}^{s_1}\sum_{l=-s_1}^{s_1} w_1(k,l)\, I_R(x-k, y-l), \qquad (w_2 * I_R)(x,y) = \sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} w_2(k,l)\, I_R(x-k, y-l).$$
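For reference, the normalised truncated Gaussian kernel wi can be generated in a few lines of NumPy; a sketch assuming half-size s and standard deviation sigma:

    import numpy as np

    def gaussian_kernel(s, sigma):
        # Normalised truncated Gaussian weights on [-s, s]^2, as defined above.
        k = np.arange(-s, s + 1)
        kk, ll = np.meshgrid(k, k, indexing="ij")
        w = np.exp(-(kk ** 2 + ll ** 2) / (2.0 * sigma ** 2))
        return w / w.sum()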
The quantities r1(x, y) and r2(x, y) defined in (6) and (7) can be written as

$$r_1(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left(D^1_{(m,n)}(x,y)\right)^2, \qquad r_2(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left(D^2_{(m,n)}(x,y)\right)^2$$

where

$$D^1_{(m,n)}(x,y) = I_1(x,y) - I_1(x+(m-a), y+(n-a)) \qquad (10)$$

$$D^2_{(m,n)}(x,y) = I_2(x,y) - I_2(x+(m-a), y+(n-a)) \qquad (11)$$
Proposition: The local variability on the blurred part is smaller than the local variability on the focused part. Let (x, y) ∈ B2 (the blurred part of I2) and (x, y) ∉ B1 (the focused part of I1); then r2(x, y) ≤ r1(x, y).
Proof: Let $\widehat{D}^1_{(n,m)}(x,y)$ be the Fourier transform of $D^1_{(m,n)}(x,y)$:

$$\widehat{D}^1_{(n,m)}(x,y) = FT\left[D^1_{(m,n)}(x,y)\right] = FT\left[I_1(x,y) - I_1(x+(m-a), y+(n-a))\right] \qquad (13)$$

As (x, y) ∈ B2 and therefore (x, y) ∉ B1, image I1 coincides with IR around (x, y) by (8), so equation (13) can be written as follows:

$$\widehat{D}^1_{(n,m)}(x,y) = FT\left[I_R(x,y) - I_R(x+(m-a), y+(n-a))\right] \qquad (14)$$
Since (x, y) ∈ B2, by (8) we have

$$I_2(x,y) = \sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} w_2(k,l)\, I_R(x-k, y-l) = \left(w_2\,\mathbf{1}_{[-s_2,s_2]^2} * I_R\right)(x,y) \qquad (15)$$

where

$$\mathbf{1}_{[-s_2,s_2]^2}(k,l) = \begin{cases} 1, & \text{if } (k,l) \in [-s_2, s_2]^2 \\ 0, & \text{otherwise.} \end{cases}$$

The Fourier transform of $D^2_{(m,n)}(x,y)$ is therefore

$$\widehat{D}^2_{(n,m)}(x,y) = FT\left[\left(w_2\,\mathbf{1}_{[-s_2,s_2]^2} * I_R\right)(x,y) - \left(w_2\,\mathbf{1}_{[-s_2,s_2]^2} * I_R\right)(x+(m-a), y+(n-a))\right]$$
Since the Fourier transform converts convolution into multiplication, and the normalised Gaussian weights are positive and sum to one,

$$\left|\widehat{D}^2_{(n,m)}(x,y)\right| = \left|\sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} \frac{e^{-\frac{k^2+l^2}{2\sigma_2^2}}}{\sum_{k'=-s_2}^{s_2}\sum_{l'=-s_2}^{s_2} e^{-\frac{k'^2+l'^2}{2\sigma_2^2}}}\, e^{-i2\pi(kn+lm)}\, \widehat{D}^1_{(n,m)}(x,y)\right|$$

$$\le \sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} \frac{e^{-\frac{k^2+l^2}{2\sigma_2^2}}}{\sum_{k'=-s_2}^{s_2}\sum_{l'=-s_2}^{s_2} e^{-\frac{k'^2+l'^2}{2\sigma_2^2}}}\, \left|\widehat{D}^1_{(n,m)}(x,y)\right| \le \left|\widehat{D}^1_{(n,m)}(x,y)\right| \qquad (20)$$
On the other hand, from equation (13) and the Plancherel-Parseval theorem, we have

$$r_2(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|D^2_{(m,n)}(x,y)\right|^2 = \frac{1}{(2a+1)^2}\sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|\widehat{D}^2_{(n,m)}(x,y)\right|^2$$

Combining this with (20) gives

$$r_2(x,y) \le \frac{1}{(2a+1)^2}\sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|\widehat{D}^1_{(n,m)}(x,y)\right|^2 = \sum_{m=0}^{2a}\sum_{n=0}^{2a}\left|D^1_{(m,n)}(x,y)\right|^2 = r_1(x,y),$$

which proves the proposition.
AUTHORS
Rachid Sabre received the PhD degree in statistics from the University of Rouen, Rouen, France, in 1993 and the Habilitation (HdR) from the University of Burgundy, Dijon, France, in 2003. He joined Agrosup Dijon, Dijon, France, in 1995, where he is an Associate Professor. From 1998 through 2010, he was a member of the “Institut de Mathématiques de Bourgogne”, France. He was a member of the Scientific Council of AgroSup Dijon from 2009 to 2013. In 2012, he became a member of the “Laboratoire Electronique, Informatique et Image” (Le2i), France. Since 2019, he has been a member of the Laboratory Biogeosciences UMR CNRS, University of Burgundy. He is author/co-author of numerous papers in scientific and technical journals and conference proceedings. His research interests lie in the areas of statistical processes and spectral analysis for signal and image processing.
Ias Sri Wahyuni was born in Jakarta, Indonesia, in 1986. She earned the B.Sc. and M.Sc. degrees in mathematics from the University of Indonesia, Depok, Indonesia, in 2008 and 2011, respectively. In 2009, she joined the Department of Informatics Systems, Gunadarma University, Depok, Indonesia, as a Lecturer. She is currently a PhD student at the University of Burgundy, Dijon, France. Her current research interests include statistics and image processing.