American Journal of Engineering Research (AJER), e-ISSN: 2320-0847, p-ISSN: 2320-0936, Volume-03, Issue-01, pp. 207-221, www.ajer.org. Research Paper. Open Access.
3D Wavelet Sub-Bands Mixing for Image De-noising and Segmentation of Brain Images
Joyjit Patra1, Himadri Nath Moulick2, Shreyosree Mallick3, Arun Kanti Manna4
1 (C.S.E, Aryabhatta Institute of Engineering and Management, Durgapur, West Bengal, India)
2 (C.S.E, Aryabhatta Institute of Engineering and Management, Durgapur, West Bengal, India)
3 (B.Tech 4th year student, CSE Dept., Aryabhatta Institute of Engineering and Management, Durgapur, West Bengal, India)
4 (Pursuing Ph.D. from Techno India University, West Bengal, India)
Abstract: A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D blockwise version of the Non Local (NL) means filter with wavelet sub-bands mixing. The proposed wavelet sub-bands mixing is based on a multi-resolution approach for improving the quality of the image de-noising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet sub-band mixing outperforms the classical implementation of the NL-means filter in terms of de-noising quality and computation time. Comparison with well-established methods, such as the non-linear diffusion filter and total variation minimization, shows that the proposed NL-means filter produces better de-noising results. Finally, qualitative results on real data are presented. This paper also presents an algorithm for medical 3D image de-noising and segmentation using the redundant discrete wavelet transform. First, we present a two-stage de-noising algorithm using the image fusion concept. The algorithm starts by globally de-noising the brain images (3D volume) using Perona-Malik's algorithm and an RDWT-based algorithm, followed by combining the outputs using an entropy-based fusion approach. Next, a region segmentation algorithm is proposed using texture information and k-means clustering. The proposed algorithms are evaluated using brain 3D image/volume data. The results suggest that the proposed algorithms provide improved performance compared to existing algorithms.
Keywords: Medical Image Analysis, De-noising, Segmentation, Redundant Discrete Wavelet Transform

I. INTRODUCTION
Image de-noising can be considered either as a component of a larger processing chain or as a process in itself. In the first case, image de-noising is used to improve the accuracy of various image processing algorithms such as registration or segmentation; the quality of the artifact correction then influences the performance of the whole procedure. In the second case, noise removal aims at improving the image quality for visual inspection. The preservation of relevant image information is important, especially in a medical context. This paper focuses on a de-noising method first introduced by Buades et al. [4] for 2D image de-noising: the Non Local (NL) means filter. We propose to improve this filter with an automatic tuning of the filtering parameter, a blockwise implementation and a mixing of wavelet sub-bands based on the approach proposed in [17]. These contributions lead to a fully-automated method and overcome the main limitation of the classical NL-means: its computational burden. Section 2 presents related works. Section 3 presents the proposed method with details about our contributions. Section 4 shows the impact of our adaptations compared to different implementations of the NL-means filter and proposes a comparison with well-established methods. The validation experiments are performed quantitatively on a phantom dataset. Finally, Section 5 shows results on real data. Typically, the field of medical image analysis involves: post-acquisition processing such as de-noising and restoration; segmentation, i.e., delineating features of interest; registration, i.e., aligning a captured image with a model or a previously captured image; computation, i.e., deriving physical quantities; visualization; and security. Existing algorithms in medical image analysis, in general, use partial differential equations, curvature-driven flows and different mathematical models. Wavelet-based methods have also been proposed for medical image analysis. In
1991, Weaver et al. [1] first proposed the use of wavelet theory in medical imaging with an application to noise reduction in MRI images. Thereafter, several algorithms have been proposed for de-noising, segmentation, reconstruction, functional MRI, registration, and feature extraction using the continuous wavelet transform (CWT), the discrete wavelet transform (DWT), and the redundant DWT (RDWT). Detailed surveys of wavelet-based algorithms for medical imaging can be found in [2], [3], [4], and [5]. In this paper, we propose algorithms for brain image de-noising and region-based segmentation using RDWT for improved performance. RDWT [6], [7], also known as the shift-invariant wavelet transform, has proven its potential in different signal processing applications but is not well researched in the field of medical image analysis. The proposed algorithms utilize properties of RDWT such as shift invariance and the per-sub-band noise relationship, along with other techniques such as soft thresholding, clustering, and entropy, for improved performance. Experimental results on the brain data show the usefulness of the proposed de-noising and segmentation algorithms and clearly indicate their potential in medical image analysis. Section 2 briefly explains the fundamentals of the redundant discrete wavelet transform. The medical image de-noising algorithm is explained in Section 3, and Section 4 describes the proposed image segmentation algorithm.

A. Noise in an Image
It is generally desirable for image brightness (or film density) to be uniform except where it changes to form an image. There are factors, however, that tend to produce variation in the brightness of a displayed image even when no image detail is present. This variation is usually random and has no particular pattern. In many cases, it reduces image quality and is especially significant when the objects being imaged are small and have relatively low contrast. This random variation in image brightness is designated as noise. The noise can be either image dependent or image independent. All digital images contain some visual noise. The presence of noise gives an image a mottled, grainy, textured or snowy appearance.

1. Random Noise
Random noise revolves around an increase in intensity of the picture. It occurs through color discrepancies above and below where the intensity changes. It is random because, even if the same settings are used, the noise occurs randomly throughout the image. It is generally affected by exposure length. Random noise is the hardest to remove because we cannot predict where it will occur. The digital camera itself cannot remove it, and it has to be lessened in an image editing program.

2. Fixed Pattern Noise
Fixed pattern noise surrounds hot pixels. Hot pixels are pixels that are more intense than those surrounding them and much brighter than random noise fluctuations. Long exposures and high temperatures cause fixed pattern noise to appear. If pictures are taken under the same settings, the hot pixels will occur in the same place and time. Fixed pattern noise is the easiest type to fix after it has occurred: once a digital camera identifies the fixed pattern, it can be adjusted to lessen the effect on the image. However, it can be more noticeable to the eye than random noise if not lessened.

3. Banding Noise
Banding noise depends on the camera, as not all digital cameras create it. During the digital processing steps, the digital camera takes the data produced by the sensor and creates the noise from that.
High speeds, shadows and photo brightening will create banding noise. Gaussian noise, salt & pepper noise, Poisson noise, and speckle noise are some other examples of noise encountered in images.

4. Speckle Noise
Speckle noise is defined as multiplicative noise having a granular pattern. It is an inherent property of ultrasound and SAR images. Another source of reverberations is that a small portion of the returning sound pulse may be reflected back into the tissues by the transducer surface itself, generating a new echo at twice the depth. Speckle is the result of diffuse scattering, which occurs when an ultrasound pulse randomly interferes with small particles or objects on a scale comparable to that of the sound wavelength. The backscattered echoes from irresolvable random tissue inhomogeneities in ultrasound imaging, and from objects in radar imaging, undergo constructive and destructive interference, resulting in a mottled B-scan image. Speckle degrades the quality of US and SAR images, thereby reducing the ability of a human observer to discriminate the fine details of a diagnostic examination. This artifact introduces fine false structures whose apparent resolution is beyond the capabilities of the imaging system, reducing image contrast and masking the real boundaries of the tissue, which decreases the efficiency of further image processing such as edge detection, automatic segmentation, and registration techniques. Another problem in ultrasound data is that the
received data from structures lying parallel to the radial direction can be very weak, whereas structures normal to the radial direction give a stronger echo.

B. Filtering Techniques
Filtering techniques are used as a preliminary step before segmentation and classification. On the whole, speckle reduction can be divided roughly into two categories: incoherent processing techniques and image post-processing. The first recovers the image by summing several observations of the same object, which assumes that no change or motion of the object occurred during the acquisition of the observations. Image post-processing techniques do not require any hardware modification of the image reconstruction system and hence have found growing interest; the images are obtained as usual and the processing techniques are applied to the obtained image. Image post-processing is an appropriate method for speckle reduction, as it enhances the signal-to-noise ratio while preserving the edges and lines in the image.
II. ULTRASOUND IMAGING
These scans use high frequency sound waves which are emitted from a probe. The echoes that bounce back from structures in the body are shown on a screen. The structures can be seen much more clearly when moving the probe over the body and watching the image on the screen. The main problem in these scans is the presence of speckle noise, which reduces the diagnostic ability. Ultrasound provides live images, in which the operator can select the most useful section for diagnosis, thus facilitating quick diagnoses.
III. IMAGE TRANSFORMS
One of the most fundamental problems in signal processing is to find a suitable representation of the data that will facilitate an analysis procedure. One way to achieve this goal is to use a transformation, or decomposition, of the signal on a set of basis functions prior to processing in the transform domain. Transform theory has played a key role in image processing for a number of years, and it continues to be a topic of interest in theoretical as well as applied work in this field. Image transforms are widely used in many image processing fields, including image enhancement, restoration, encoding, and description [Jain, 1989]. Historically, the Fourier transform has dominated linear time-invariant signal processing. The associated basis functions are complex sinusoidal waves e^{iωt} that correspond to the eigenvectors of a linear time-invariant operator. A signal f(t) defined in the temporal domain and its Fourier transform f̂(ω), defined in the frequency domain, have the following relationships [Jain, 1989; Papoulis, 1987]:

f̂(ω) = ∫ f(t) e^{-iωt} dt    (1)

f(t) = (1/2π) ∫ f̂(ω) e^{iωt} dω    (2)

The Fourier transform characterizes a signal f(t) via its frequency components. Since the support of the basis function e^{iωt} covers the whole temporal domain (i.e., infinite support), f̂(ω) depends on the values of f(t) for all times. This makes the Fourier transform a global transform that cannot analyze local or transient properties of the original signal f(t). In order to capture the frequency evolution of a non-static signal, the basis functions should have compact support in both the time and frequency domains. To achieve this goal, a windowed Fourier transform (WFT) was first introduced with the use of a window function w(t) in the Fourier transform [Mallat, 1998]:

Sf(u, ξ) = ∫ f(t) w(t - u) e^{-iξt} dt    (3)

The energy of the basis function g_{u,ξ}(t) = w(t - u) e^{iξt} is concentrated in the neighborhood of time u over an interval of size σ_t, measured by the standard deviation of |g|². Its Fourier transform is ĝ_{u,ξ}(ω) = ŵ(ω - ξ) e^{-iu(ω - ξ)}, with energy in the frequency domain localized around ξ, over an interval of size σ_ω. In a time-frequency plane (t, ω), the energy spread of the so-called atom g_{u,ξ} is represented by the Heisenberg rectangle with time width σ_t and frequency width σ_ω. The uncertainty principle states that the energy spread of a function and its Fourier transform cannot be simultaneously arbitrarily small, verifying:

σ_t σ_ω ≥ 1/2    (4)

The shape and size of the Heisenberg rectangles of a windowed Fourier transform therefore determine the spatial and frequency resolution offered by such a transform. Examples of spatial-frequency tiling with Heisenberg rectangles are shown in Figure 1. Notice that for a windowed Fourier transform, the shape of the time-frequency boxes is identical across the whole time-frequency plane, which means that the analysis resolution of a windowed Fourier transform remains the same across all frequency and spatial locations.
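As a quick numerical illustration of the Fourier pair in Eqs. (1)-(2) (this example is not part of the original paper; the test signal and sampling rate are arbitrary choices), the sketch below shows that reconstruction from the full spectrum is exact, while the spectrum alone carries no temporal localization, which is precisely the limitation the windowed Fourier and wavelet transforms address.

```python
import numpy as np

fs = 1000                                   # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_t = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Forward transform, cf. Eq. (1): decompose f(t) into its frequency components.
F_omega = np.fft.fft(f_t)

# Inverse transform, cf. Eq. (2): the global sinusoidal bases recover f(t)
# exactly, but F_omega says nothing about *when* each frequency occurs.
f_rec = np.fft.ifft(F_omega).real
print(np.allclose(f_t, f_rec))              # True: perfect reconstruction
```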
Figure 1: Example of spatial-frequency tiling of various transformations. x-axis: spatial resolution. y-axis: frequency resolution. (a) discrete sampling (no frequency localization). (b) Fourier transform (no temporal localization). (c) windowed Fourier transform (constant Heisenberg boxes). (d) wavelet transform (variable Heisenberg boxes).
To analyze transient signal structures of various supports and amplitudes in time, it is necessary to use time-frequency atoms with different support sizes at different temporal locations. For example, in the case of high frequency structures, which vary rapidly in time, we need higher temporal resolution to accurately trace the trajectory of the changes; on the other hand, for lower frequencies, we need a relatively higher absolute frequency resolution to measure the frequency value more accurately.
IV. RELATED WORKS
Many methods for image de-noising have been suggested in the literature, and a complete review of them can be found in [4]. Methods for image restoration aim at preserving the image details and local features while removing the undesirable noise. In many approaches, an initial image is progressively approximated by filtered versions which are smoother or simpler in some sense. Total Variation (TV) minimization [21], non-linear diffusion [2, 19, 24], mode filters [25] and regularization methods [18, 21] are among the methods of choice for noise removal. Most of these methods are based on a weighted average of the gray values of the pixels in a spatial neighborhood [10, 23]. One of the earliest examples of such filters was proposed by Lee [16]. An evolution of this approach has been presented by Tomasi et al. [23], who devised the bilateral filter, which includes both a spatial and an intensity neighborhood. Recently, the relationships between bilateral filtering and local mode filtering [25], local M-estimators [26] and non-linear diffusion [1] have been established. In the context of statistical methods, the relationship between Bayesian estimators applied on a Gibbs distribution resulting in a penalty functional [12] and averaging methods for smoothing has also been described in [10]. Finally, statistical averaging schemes enhanced by incorporating a variable spatial neighborhood scheme have been proposed in [13, 14, 20]. All these methods aim at removing noise while preserving relevant image information. The trade-off between noise removal and image preservation is performed by tuning the filter parameters, which is not an easy task in practice. In this paper we propose to overcome this problem with a 3D wavelet sub-bands mixing. As in [17], we have chosen to combine a multi-resolution approach with the NL-means filter [4], which has recently shown very promising results. Recently introduced by Buades et al. [4], the NL-means filter proposes a new approach to the de-noising problem. Contrary to most de-noising methods based on a local recovery paradigm, the NL-means filter is based on the idea that any periodic, textured or natural image has redundancy, and that any voxel of the image has similar voxels that are not necessarily located in a spatial neighborhood. This new non-local recovery paradigm makes it possible to improve the two most desired properties of a de-noising algorithm: edge preservation and noise removal.

C. Methods
In this section, we introduce the following notations: u : Ω³ → R is the image, where Ω³ represents the image grid, considered as cubic for the sake of simplicity and without loss of generality.
For the original voxelwise NL-means approach:
- u(x_i) is the intensity observed at voxel x_i;
- V_i is the cubic search volume centered on voxel x_i, of size |V_i| = (2M+1)³, M ∈ N;
- N_i is the cubic local neighborhood of x_i, of size |N_i| = (2d+1)³, d ∈ N;
- u(N_i) = (u^(1)(N_i), ..., u^(|N_i|)(N_i))^T is the vector containing the intensities of N_i (which we term "patch" in the following);
- NL(u)(x_i) is the restored value of voxel x_i;
- w(x_i, x_j) is the weight of voxel x_j when restoring u(x_i).
For the blockwise NL-means approach:
- B_i is the block centered on x_i, of size |B_i| = (2α+1)³, α ∈ N;
- u(B_i) is the vector containing the intensities of the block B_i;
- NL(u)(B_i) is the vector containing the restored values of B_i;
- w(B_i, B_j) is the weight of block B_j when restoring the block u(B_i);
- the blocks B_ik are centered on voxels x_ik which represent a subset of the image voxels, regularly distributed over Ω³ (see Fig. 2);
- n represents the distance between the centers of the blocks B_ik (see Fig. 2).
The Non Local Means filter. In the classical formulation of the NL-means filter [4], the restored intensity NL(u)(x_i) of the voxel x_i is a weighted average of the voxel intensities u(x_j) in the search volume V_i of size (2M+1)³:

NL(u)(x_i) = Σ_{x_j ∈ V_i} w(x_i, x_j) u(x_j)    (1)

where w(x_i, x_j) is the weight assigned to the value u(x_j) when restoring voxel x_i. More precisely, the weight evaluates the similarity between the intensity neighborhoods u(N_i) and u(N_j), with w(x_i, x_j) ∈ [0, 1] and Σ_{x_j ∈ V_i} w(x_i, x_j) = 1 (cf. Fig. 2, left).
Figure 2: Left: usual voxelwise NL-means filter: 2D illustration of the NL-means principle. The restored voxel x_i (in red) is the weighted average of all intensities of voxels x_j in the search volume V_i, based on the similarity of their intensity neighborhoods u(N_i) and u(N_j). In this example, we set d = 1 and M = 8. Right: blockwise NL-means filter: 2D illustration of the blockwise NL-means principle. The restored value of the block B_ik is the weighted average of all the blocks B_j in the search volume V_ik. In this example, we set α = 1 and M = 8.
For each voxel x_j in V_i, the computation of the weight is based on the Euclidean distance between the patches u(N_j) and u(N_i), defined as:

w(x_i, x_j) = (1/Z_i) exp( -||u(N_i) - u(N_j)||² / h² )    (2)

where Z_i is a normalization constant ensuring that Σ_{x_j ∈ V_i} w(x_i, x_j) = 1, and h acts as a filtering parameter controlling the decay of the exponential function.

Automatic tuning of the filtering parameter h. As explained in the introduction, de-noising is usually the first step of complex image processing procedures. The number and the dimensions of the data to process being continually increasing, each step of the procedure needs to be as automatic as possible. In this section we propose an automatic tuning of the filtering parameter h. First, it has been shown that the optimal smoothing parameter h is proportional to the standard deviation of the noise σ [4]. Second, if we want the filter to be independent of the neighborhood size, the optimal h must depend on |N_i| (see Eq. 2). Thus, the automatic tuning of the filtering parameter amounts to determining the relationship h² = 2βσ²|N_i|, where β is a constant. First, the standard deviation of the noise σ needs to be estimated. In the case of additive white Gaussian noise, this estimation can be based on pseudo-residuals as defined in [3, 11]. For each voxel x_i of the volume Ω³, let us define:

ε_i = sqrt(6/7) ( u(x_i) - (1/6) Σ_{x_j ∈ P_i} u(x_j) )    (3)
where P_i is the 6-neighborhood of voxel x_i and the constant sqrt(6/7) ensures that E[ε_i²] = σ² in the case of Gaussian noise. The standard deviation of the noise is then estimated over the whole volume as:

σ̂² = (1/|Ω³|) Σ_{x_i ∈ Ω³} ε_i²    (4)

Based on the fact that, in the case of Gaussian noise and with a normalized L2-norm, the optimal de-noising is obtained for h² = 2σ², (2) can be written as:
w(x_i, x_j) = (1/Z_i) exp( -||u(N_i) - u(N_j)||² / (2βσ̂²|N_i|) )    (5)

where only the adjusting constant β needs to be manually tuned. If our estimation σ̂ of the standard deviation of the noise σ is correct, β should be close to 1. The optimal choice for β will be discussed later.

Blockwise implementation. The main problem of the NL-means filter being its computational time, a blockwise approach can be used to decrease the algorithmic complexity. Indeed, instead of de-noising the image at the voxel level, entire blocks are directly restored. A blockwise implementation of the NL-means filter consists in a) dividing the volume into blocks with overlapping supports, b) performing an NL-means-like restoration of these blocks and c) restoring the voxel values based on the restored values of the blocks they belong to:
1. A partition of the volume Ω³ into overlapping blocks B_ik of size (2α+1)³ is performed, such that Ω³ = ∪_k B_ik, under the constraint that each block B_ik intersects with at least one other block of the partition. These blocks are centered on voxels x_ik which constitute a subset of Ω³. The voxels x_ik are equally distributed, where n represents the distance between the centers of the B_ik. To ensure global continuity in the de-noised image, the overlapping support of the blocks is non-empty.
2. For each block B_ik, an NL-means-like restoration is performed as follows:
NL(u)(B_ik) = Σ_{B_j ∈ V_ik} w(B_ik, B_j) u(B_j)    (6)

where Z_ik is a normalization constant ensuring that Σ_{B_j ∈ V_ik} w(B_ik, B_j) = 1 (see Fig. 2, right), the weights w(B_ik, B_j) being computed from the block distances as in (5).
3. For a voxel x_i included in several blocks B_ik, several estimations of the restored intensity NL(u)(x_i) are obtained in the different NL(u)(B_ik). The estimations given by the different NL(u)(B_ik) for a voxel x_i are stored in a vector A_i. The final restored intensity of voxel x_i is then defined as:

NL(u)(x_i) = (1/|A_i|) Σ_{p=1}^{|A_i|} A_i(p)    (7)

where A_i(p) denotes the p-th element of the vector A_i.
Figure 3: Blockwise NL-means filter. For each block B_ik centered on voxel x_ik, an NL-means-like restoration is performed from blocks B_j. In this way, for a voxel x_i included in several blocks, several estimations are obtained. The restored value of voxel x_i is the average of the different estimations stored in the vector A_i.
The main advantage of this approach is to significantly reduce the complexity of the algorithm. Indeed, for a volume Ω³ of size N³, the global complexity is O((2α+1)³(2M+1)³(N/n)³). For instance, with n = 2, the complexity is divided by a factor of 8.

Wavelet Sub-bands Mixing
A. Hybrid approaches
Recently, hybrid approaches coupling the NL-means filter and a wavelet decomposition have been proposed [9, 17, 22]. In [9], a wavelet-based de-noising of the blocks is performed before the computation of the non-local means; the NL-means filter then operates on the de-noised versions of the blocks in order to improve the de-noising result. In [22], the NL-means filter is applied directly on the wavelet coefficients in the transform domain. This approach allows a direct de-noising of compressed images (such as JPEG2000) and a reduction of computational time, since smaller images are processed. In [17], a multi-resolution framework is proposed to adaptively combine the results of de-noising algorithms at different space-frequency resolutions. This idea relies on the fact that a single set of filtering parameters is not optimal over all space-frequency resolutions. Thus, by combining in the transform domain the results obtained with different sets of filtering parameters, the de-noising is expected to be improved.
V. OVERALL PROCESSING
In order to improve the de-noising result of the NL-means filter, we propose a multi-resolution framework similar to [17] to implicitly adapt the filtering parameters (h, |B_i|) over the different space-frequency resolutions of the image. This adaptation is based on the fact that the size of the patches impacts the de-noising properties of the NL-means filter. Indeed, the weight given to a block depends on its similarity with the block under consideration, but the similarity between the blocks depends on their sizes. Thus, given the size of the blocks, removal or preservation of image components can be favored. In the transform domain, the main features of the image correspond to low frequency information while finer details and noise are associated with high frequencies. Nonetheless, noise is not a pure high frequency component in most images: noise is spread over a certain range of frequencies in the image, with mainly middle and high components [17]. In NL-means-based restoration, large blocks and β = 1 efficiently remove all frequencies of noise but tend to spoil the main features of the image, whereas small blocks and a low smoothing parameter (β = 0.5) tend to better preserve the image components but cannot completely remove all frequencies of noise. The procedure therefore starts by de-noising the original image I using two sets of filtering parameters: one adapted to noise removal (i.e., large blocks and β = 1) and the other adapted to the preservation of image features (i.e., small blocks and β = 0.5). This yields two images I_o and I_u. In I_o, the noise is efficiently removed and, conversely, in I_u the image features are preserved.
I_o and I_u are then decomposed into low and high frequency sub-bands; the first-level decomposition of the images is performed with a 3D discrete wavelet transform (DWT). The highest frequency sub-bands of I_o are mixed with the lowest frequency sub-bands of I_u, and the final image is reconstructed by an inverse 3D DWT from the combination of the selected high and low frequencies. In this paper, we propose an implementation of this approach using our optimized blockwise NL-means filter and a 3D DWT with the Daubechies-8 basis. The latter is implemented in QccPack in the form of dyadic sub-band pyramids. This DWT is widely used in image compression due to its robustness and efficiency.
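A possible realization of this mixing step is sketched below (our illustration, not the authors' code): PyWavelets' 'db8' and the dwtn/idwtn calls stand in for the QccPack Daubechies-8 dyadic sub-band pyramid, and I_o and I_u are assumed to be same-sized 3D NumPy arrays.

```python
import pywt

def mix_subbands_3d(Io, Iu, wavelet='db8'):
    """Keep the high-frequency sub-bands of Io (strongly de-noised) and the
    lowest-frequency sub-band of Iu (feature-preserving), then invert."""
    co = pywt.dwtn(Io, wavelet)          # one-level 3D DWT of Io
    cu = pywt.dwtn(Iu, wavelet)          # one-level 3D DWT of Iu
    mixed = dict(co)                     # start from Io's detail sub-bands
    mixed['aaa'] = cu['aaa']             # swap in Iu's approximation sub-band
    return pywt.idwtn(mixed, wavelet)    # inverse 3D DWT of the mixed pyramid
```

A call such as I_final = mix_subbands_3d(Io, Iu) would then give the mixed reconstruction used in the experiments below.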
VI. VISUAL ASSESSMENT
Visually, the proposed method combines the most important attributes of a de-noising algorithm: edge preservation and noise removal. Fig. 4 shows that our filter removes noise while keeping the integrity of MS lesions (i.e., no structure appears in the removed noise). Fig. 4 also focuses on the differences between the Optimized Blockwise NLM and the Optimized Blockwise NLM with WM filters. The de-noising result obtained with the Optimized Blockwise NLM with WM filter visually preserves the edges better than the Optimized Blockwise NLM filter. This is also confirmed by visual inspection of the comparison with the ground truth. The images of the difference between the phantom and the de-noised image show that fewer structures have been removed with the Optimized Blockwise NLM with WM filter. Thus, the multi-resolution approach makes it possible to better preserve the edges and to enhance the contrast between tissues.
Figure 4: Fully-automatic restoration obtained with the optimized blockwise NL-means with wavelet mixing filter in 3 minutes on a Dual Core Intel(R) Pentium(R) D CPU at 3.40 GHz. The image is a T2-w phantom MRI with MS of 181 × 217 × 181 voxels and 9% noise.
VII.
Medical imaging technology is becoming an important component of a large number of applications such as diagnosis, research and treatment. It enables physicians to create images of the human body for clinical purposes. Medical images like X-Ray, CT, MRI, PET and SPECT carry minute information about the heart, brain and nerves. These images suffer from a number of shortcomings, including acquisition noise from the equipment, ambient noise from the environment and the presence of background tissue, other organs and anatomical influences such as body fat and breathing motion. Noise reduction therefore becomes very important. The main techniques of image de-noising are filters, wavelets and neural networks. The BPNN-based approach is a powerful and effective method for image de-noising. Earlier proposed methods suffered from drawbacks such as noise, artifacts and degradation. Although spatial filters perform well on digital images, they still suffer from constraints such as resolution degradation: these filters operate by smoothing over a fixed window, which produces artifacts around objects and sometimes causes over-smoothing and thus blurring of the image. The wavelet transform outperforms such filters because of properties like sparsity and its multi-resolution, multi-scale nature, and has proved promising as it is capable of suppressing noise while maintaining high frequency signal details. However, a limitation of the wavelet transform is that the local scale-space information of the image is not adaptively considered by standard wavelet thresholding methods. Another difficulty is that the soft thresholding function is a piecewise function and does not have high order derivatives. A new type of thresholding neural network was presented which outperforms soft thresholding using the wavelet transform, but it still does not promise high performance in terms of PSNR, MSE and visual tests. Considering and analyzing the drawbacks of the previous methods, we propose a new improved BPNN and fuzzy approach to de-noise medical images. This approach uses both mean and median statistical functions for calculating the output pixels of the NN and fuzzy stages, and uses a part of the degraded image pixels to generate the system training pattern. Different test images, noise levels and
neighborhood sizes are used. Based on using samples of degraded pixel neighborhoods as input, the proposed approach provided good image de-noising performance and promising results on the degraded noisy image in terms of PSNR, MSE and visual tests.
VIII. COMPARISON WITH WELL-ESTABLISHED METHODS
In this section, we compare the proposed method with two of the most used approaches in the MRI domain: the Non Linear Diffusion (NLD) filter [19] and the Total Variation (TV) minimization [21]. The main difficulty in performing this comparison is the tuning of the smoothing parameters so as to obtain the best results for the NLD filter and the TV minimization scheme. After discretizing the parameter space, we exhaustively tested all possible parameters within a certain range. This allowed us to obtain the best possible results for the NLD filter and the TV minimization. For the Optimized Blockwise NLM with WM, the same sets of parameters S_u = (α_u, M, β_u) = (1, 3, 0.5) and S_o = (α_o, M, β_o) = (2, 3, 1) are used for all noise levels; the automatic tuning of h adapts the smoothing to the noise level. For the NLD filter, the parameter K varied from 0.05 to 1 with a step of 0.05 and the number of iterations varied from 1 to 10. For the TV minimization, the parameter λ varied from 0.01 to 1 with a step of 0.01 and the number of iterations varied from 1 to 10. The results obtained for 9% Gaussian noise are presented, but this screening was performed for the four levels of noise. It is important to underline that the results giving the best PSNR are used, but these results do not necessarily give the best visual output. Actually, the best PSNR values for the NLD filter and the TV minimization are obtained for a visually under-smoothed image, since these methods tend to spoil the edges (see Fig. 5). This is explained by the fact that the optimal PSNR is obtained when a good trade-off is reached between edge preservation and noise removal.
Figure 5: Results for the NLD filter and the TV minimization on phantom images with Gaussian noise at 9%. For the NLD filter, K varied from 0.05 to 1 with a step of 0.05 and the number of iterations varied from 1 to 10. For the TV minimization, λ varied from 0.01 to 1 with a step of 0.01 and the number of iterations varied from 1 to 10.
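The exhaustive parameter screening described above amounts to a simple grid search; a hedged sketch follows. The callable nld_filter is a hypothetical stand-in for whatever non-linear diffusion implementation is available (it is not a specific library function), and PSNR is computed with scikit-image.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

def screen_nld(noisy, ground_truth, nld_filter):
    """Return the best PSNR and the (K, iterations) pair that achieves it."""
    data_range = ground_truth.max() - ground_truth.min()
    best_psnr, best_params = -np.inf, None
    for K in np.arange(0.05, 1.0 + 1e-9, 0.05):      # K = 0.05, 0.10, ..., 1.0
        for n_iter in range(1, 11):                  # 1 to 10 iterations
            restored = nld_filter(noisy, K=K, iterations=n_iter)
            psnr = peak_signal_noise_ratio(ground_truth, restored,
                                           data_range=data_range)
            if psnr > best_psnr:
                best_psnr, best_params = psnr, (K, n_iter)
    return best_psnr, best_params
```

The same loop structure applies to the TV screening, with λ in place of K.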
IX. QUANTITATIVE RESULTS
Figure 6: Comparison between Non Linear Diffusion, Total Variation and Optimized Blockwise NL-means with wavelet mixing de-noising. The PSNR experiments show that the Optimized Blockwise NL-means with wavelet mixing filter significantly outperforms the well-established Total Variation minimization process and the Non
Linear Diffusion approach. As presented in Fig. 6, our optimized blockwise NL-means with wavelet mixing filter produced the best PSNR values regardless of the noise level.
X. VISUAL ASSESSMENT
We also visually compared the de-noising results obtained by the NLD filter, the TV minimization and our Optimized Blockwise NLM with WM filter. Visually, the NL-means-based approach produced the best de-noising. The removed noise shows that the proposed method removes significantly fewer structures than the NLD filter or the TV minimization. Finally, the comparison with the ground truth underlines that the NL-means restoration gives a result very close to the ground truth and better preserves the anatomical structures compared to the NLD filter and the TV minimization.
XI. RESULTS ON REAL DATA
The T1-weighted MR images used for the experiments were obtained with a T1 SENSE 3D sequence on a 3T Philips Gyroscan scanner. The restoration results, presented in Fig. 7, show good preservation of the cerebellum. Fully automatic segmentation and quantitative analysis of such structures are still a challenge, and improved restoration schemes could greatly benefit these processing steps.
Figure 7: Fully-automatic restoration obtained with the optimized blockwise NL-means with wavelet mixing filter on a 3 Tesla T1-w MRI dataset of 256³ voxels in less than 4 minutes on a Dual Core Intel(R) Pentium(R) D CPU at 3.40 GHz.
XII. REDUNDANT DISCRETE WAVELET TRANSFORM
Generally, DWT [6], [8] is used in wavelet-based medical image analysis as it preserves frequency information in a stable form and allows good localization in both the time and spatial frequency domains. However, one of the major drawbacks of DWT is that the transformation does not provide shift invariance. This causes a major change in the wavelet coefficients of the image even for minor shifts in the input image. In medical imaging we need to know and preserve the exact location of different information, and shift variance may lead to inaccuracies. For example, in medical image de-noising it is important to preserve edge information and remove noise, but DWT-based de-noising may produce artifacts along the edges. Several techniques have been proposed to address shift variance in de-noising and segmentation [9]. In this paper, we use RDWT [6], [7], [10] to overcome the shift variance problem of DWT. RDWT can be considered as an approximation to the discrete wavelet transform that removes the down-sampling operation from the traditional critically sampled DWT, produces an overcomplete representation, and provides a per-sub-band noise relationship [7]. The shift variance of DWT arises from the use of down-sampling, whereas RDWT is shift invariant because the spatial sampling rate is fixed across scales. Similar to DWT, the RDWT and inverse RDWT (IRDWT) of a two-dimensional image or three-dimensional volume are obtained by computing each dimension separately, where the detail and approximation bands are of the same size as the input image/data.

1. Fusion-based Two-Stage Approach to Medical Image De-noising
This section presents a fusion-based de-noising algorithm that utilizes the concept of image fusion. In this two-stage approach, we first concurrently apply two de-noising algorithms globally and then, in the second stage, generate the quality-enhanced image by locally combining the good quality regions from the two de-noised images. In this research, we use Perona-Malik's algorithm [11] and an RDWT-based de-noising algorithm as the two ingredient algorithms, and the outputs of these two algorithms are combined using the proposed fusion technique. This section first describes the RDWT-based de-noising algorithm followed by the fusion approach.

RDWT-based Image De-noising. Let I_T be the true image and N be the noise component. As described by Jin et al. [2], the relationship of the noisy image I_N to I_T and N, and its wavelet-based restoration, can be written as:
I_N = I_T + N,  I_R = W^{-1}( t( W(I_N), l ) )    (9)

where I_R represents the reconstructed signal, W represents the wavelet-based de-noising transform (with W^{-1} its inverse), l represents the level of decomposition, and t is the function that aims at eliminating the noise components in the transform domain while preserving the true signal coefficients. In ideal conditions, I_R = I_T. DWT-based de-noising algorithms have been proposed in [12], [13] using different wavelet bases and thresholding schemes. All these algorithms use some technique to handle shift variance but suffer from visual artifacts and the Gibbs phenomenon. Here, we use RDWT in the proposed de-noising algorithm to address shift invariance and the challenges due to artifacts. I_N is decomposed at l levels using the 3D/2D RDWT. A soft thresholding technique [14] is applied on the RDWT coefficients of each sub-band C_i (i = 1, 2, ..., l) with threshold t_i to obtain the de-noised sub-band C'_i:
C'_i = sign(C_i) · max(|C_i| - t_i, 0)    (10)

where the threshold t_i for each sub-band is computed using Equation (11):

... (11)

Here, σ_i is the standard deviation of the sub-band, and the noise variance for each sub-band is computed using Equation (12):

... (12)

The scale parameter S is computed from the length of the sub-band at the given scale:

... (13)

Finally, the de-noised medical volume/image is obtained by applying the 3D/2D IRDWT to the thresholded sub-bands.
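For readers who want to experiment, the sketch below mimics this pipeline with PyWavelets' stationary wavelet transform, a common stand-in for the RDWT. Since Equations (11)-(13) are not reproduced above, the thresholds here use the standard median-based noise estimate with a universal threshold, which is our assumption rather than the authors' exact rule; the wavelet and decomposition level are likewise illustrative choices.

```python
import numpy as np
import pywt

def rdwt_denoise(volume, wavelet='db4', level=2):
    """Shift-invariant wavelet de-noising with soft thresholding, cf. Eq. (10)."""
    data = volume.astype(float)
    factor = 2 ** level
    # swtn requires every dimension to be divisible by 2**level; crop if needed
    data = data[tuple(slice(0, s - s % factor) for s in data.shape)]
    coeffs = pywt.swtn(data, wavelet, level=level)
    approx_key = 'a' * data.ndim
    thresholded = []
    for level_coeffs in coeffs:
        new_coeffs = {}
        for key, band in level_coeffs.items():
            if key == approx_key:
                new_coeffs[key] = band                    # keep approximation as-is
            else:
                sigma = np.median(np.abs(band)) / 0.6745  # assumed noise estimate
                t = sigma * np.sqrt(2.0 * np.log(band.size))
                new_coeffs[key] = pywt.threshold(band, t, mode='soft')  # Eq. (10)
        thresholded.append(new_coeffs)
    return pywt.iswtn(thresholded, wavelet)
```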
XIII. IMAGE SEGMENTATION
Segmentation of biomedical images is the basis for 3D visualization and operation simulation. Precision in segmentation is critical to diagnosis and treatment. Conventionally, segmentation methods are divided into region-based segmentation and edge- or gradient-based segmentation. Region-based segmentation [16], [17] is usually based on the concept of finding similar features such as brightness and texture patterns. Edge-based segmentation methods [18] are based on finding high gradient values in the image and then connecting them to form a curve representing a boundary of the object. In this section, we propose an RDWT-based medical image segmentation algorithm which is a region-based method but inherently provides the features of edge-based segmentation as well. Since the detail bands of the RDWT-decomposed image provide gradient information, we can use this information for region segmentation. The proposed region-based segmentation algorithm is described as follows. Let I be the medical image/volume data to be segmented. This image is decomposed into n levels using RDWT. The proposed approach uses the wavelet energy features computed from the approximation band and all the detail bands over small blocks (in our
experiments, we chose a block size of 3 × 3). The energy features f_i for the RDWT sub-bands (where i ∈ {a, h, v, d}, with a: approximation, h: horizontal, v: vertical, d: diagonal) are computed using Equation (14):
f_i = (1/(m × m)) Σ_{(x,y) ∈ block} [C_i(x, y)]²    (14)

where m is the block size and C_i denotes the coefficients of sub-band i. These energy features, f_a, f_v, f_d, and f_h, reflect the texture properties of an image, and the wavelet energy features computed from the detail sub-bands provide the gradient information which facilitates robust segmentation. Further, we use a k-means clustering based learning algorithm which first learns from the training data and then identifies the different feature regions at testing time. Training data is used to train the k-means clustering algorithm [19] and form different clusters or groups of brain regions such as background, skull, and fat. As shown in Figure 3, we consider six regions present in the brain image, namely background, CSF, grey matter, white matter, skull, and fat. The k-means clustering algorithm is trained using the simulated brain data as training data and different colors are assigned to the clusters. For segmentation, the test image is first decomposed into n = 3 levels and the wavelet energy features are computed for every level. For the nth level, the trained k-means algorithm classifies every feature and assigns a color to each feature. The segmented sub-bands are reconstructed to obtain the (n - 1)th level of the segmented decomposition. At this point, the approximation band is the segmented image obtained from the previous step while the detail sub-bands are non-segmented. The same procedure is applied until the reconstruction reaches the 0th level, which gives the final segmented image. This algorithm uses the concept of multi-resolution analysis, since the results of the nth level are used to compute the results of the (n - 1)th level. Figure 5 shows the segmentation results on the BrainWeb database.
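The feature-extraction and clustering steps can be sketched as follows (our illustration, not the paper's code): block-wise wavelet energies from one shift-invariant decomposition level are fed to scikit-learn's k-means. The 3 × 3 block size and the six tissue classes follow the text; the wavelet choice, the use of swt2 as the RDWT, and fitting the clusters directly on the test slice (rather than on separate training data, as the paper does) are simplifications on our part.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def block_energy(band, block=3):
    """Mean squared coefficient in each block x block window, cf. Eq. (14)."""
    h = (band.shape[0] // block) * block
    w = (band.shape[1] // block) * block
    b = band[:h, :w].reshape(h // block, block, w // block, block)
    return (b ** 2).mean(axis=(1, 3))

def segment_slice(img, wavelet='db1', n_clusters=6):
    """Cluster block-wise wavelet energy features of one 2D slice."""
    img = img.astype(float)
    # swt2 needs even dimensions at level 1; crop the slice if necessary
    img = img[:img.shape[0] - img.shape[0] % 2, :img.shape[1] - img.shape[1] % 2]
    cA, (cH, cV, cD) = pywt.swt2(img, wavelet, level=1)[0]   # one SWT/RDWT level
    feats = np.stack([block_energy(b) for b in (cA, cH, cV, cD)], axis=-1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        feats.reshape(-1, feats.shape[-1]))
    return labels.reshape(feats.shape[:2])                   # one label per block
```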
XIV. EXPERIMENTAL EVALUATION
To evaluate the performance of the proposed de-noising and segmentation algorithms, we use the 3D BrainWeb database [15]. This database contains images with different noise factors along with the ground truths. To quantitatively evaluate the de-noising algorithm, the Mean Square Error (MSE) and the Structural Similarity (SSIM) index [20] are used. Table 1 presents the experimental results for the proposed de-noising algorithm. Using the ground truth and noisy images with 7% noise, the MSE is 121.4500 and the SSIM is 0.5613, whereas with 9% noise, the MSE is 189.6959 and the SSIM is 0.5040. A de-noising algorithm should decrease the MSE and increase the SSIM value. On applying Perona-Malik's de-noising algorithm [11] to the 7% noisy brain volume, the MSE is reduced to 93.9106 and the SSIM is increased to 0.6449, whereas with the RDWT-based de-noising algorithm, the MSE is 88.3808 and the SSIM is 0.6494. Compared to existing algorithms, the proposed fusion-based algorithm significantly improves the visual quality of the brain image. This observation also holds for the 9% noisy brain data (Table 1). These results also suggest that Perona-Malik's de-noising algorithm and the RDWT-based de-noising algorithm provide complementary information, and the fusion approach combines the globally de-noised images such that the fused information provides a better quality image. An interesting observation relates to the time taken to de-noise the image. With Perona-Malik's algorithm, the time to de-noise the image depends heavily on the amount of noise present in the image. With the RDWT-based de-noising algorithm, the computational requirement is reduced because of the inherent advantages of shift invariance and noise tolerance. For the fusion-based approach, the computational time includes the time to globally de-noise the brain image and to fuse the de-noised images. Although the computational time of the fusion approach is higher than that of its constituent algorithms, the visual quality is significantly increased, thereby making it applicable to medical applications. Next, the correct classification accuracy is used to evaluate the segmentation algorithm. Figure 5 shows a close view of the segmentation result. Visually, the results are encouraging and preserve both the region and edge information. Since the BrainWeb database provides the ground truth, the correct classification accuracy quantitatively represents the performance of the segmentation algorithm. For the six categories or regions, Table 4 shows that the proposed algorithm provides accuracy in the range of 91.9-94.8%. In comparison with the existing SVM-based segmentation algorithm [21], the proposed algorithm yields similar results. However, the main advantage is the computational time: with the proposed algorithm, the time taken to segment the regions of a 3D brain volume is 5.37 seconds on average, whereas with the SVM-based algorithm it is 37.22 seconds. Further, in another experiment, we segment the noisy brain data (Figure 5). It is clear from this result that the segmentation of noisy images yields erroneous results. However, when the brain image is first de-noised and then segmented, the results show clear and correct segmentation. Furthermore, the visual results were shown to eminent medical professionals, who asserted that the proposed de-noising and segmentation algorithms provide better information compared to existing algorithms.
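The MSE/SSIM comparison described above can be reproduced for any de-noised volume with scikit-image; the sketch below is illustrative only, and the function name and the data_range choice are ours rather than the paper's.

```python
from skimage.metrics import mean_squared_error, structural_similarity

def evaluate_denoising(denoised, ground_truth):
    """MSE should decrease and SSIM should increase after de-noising."""
    mse = mean_squared_error(ground_truth, denoised)
    ssim = structural_similarity(ground_truth, denoised,
                                 data_range=ground_truth.max() - ground_truth.min())
    return mse, ssim
```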
Fig. 10: Training data used for training the clustering algorithm.
XV. CONCLUSION
This paper presented a fully-automated blockwise version of the Non Local (NL) means filter with wavelet sub-bands mixing. Experiments were carried out on the BrainWeb dataset [6] and on real data. The results on the phantom show that the proposed Optimized Blockwise NL-means with wavelet sub-bands mixing filter outperforms the classical implementation of the NL-means filter and the optimized implementation presented in [7, 8], in terms of PSNR values and computational time. Compared to the classical NL-means filter, our implementation (with block selection, blockwise implementation and wavelet sub-bands mixing) considerably decreases the required computational time (up to a factor of 20) and significantly increases the PSNR of the de-noised image. The comparison of the filtering process with and without wavelet mixing shows that the sub-bands mixing better preserves the edges and better enhances the contrast between the tissues. This multi-resolution approach makes it possible to adapt the smoothing parameters along the frequencies by combining several de-noised images. The comparison with well-established methods such as the NLD filter and TV minimization shows that the NL-means-based restoration produces better results. Finally, the impact of the proposed multi-resolution approach based on wavelet sub-bands mixing should be investigated further, for instance when combined with the Non Linear Diffusion filter [19] and the Total Variation minimization [21]. Computer-assisted diagnosis and therapy, in general, require image processing operations such as de-noising and segmentation. Sophisticated imaging techniques such as MRI and CAT scanning provide abundant information but require preprocessing so that the 3D image/volume can be optimally used for diagnosis. This paper also presented a fusion-based de-noising algorithm and an RDWT entropy-based region segmentation algorithm. Using the 3D BrainWeb database,
the proposed algorithms show significant improvement over existing algorithms. In future work, the fusion-based de-noising and segmentation algorithms will be extended with non-linear learning approaches to further reduce the errors.
XVI. ACKNOWLEDGEMENTS
The authors are thankful to Mr. Saikat Maity and Dr. Chandan Konar for preparing this paper.

REFERENCES
D. Barash. A fundamental relationship between bilateral filtering, adaptive smoothing, and the nonlinear diffusion equation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(6):844-847, June 2002.
M. J. Black and G. Sapiro. Edges as outliers: Anisotropic smoothing using local image statistics. In Scale-Space Theories in Computer Vision, Second International Conference, Scale-Space'99, Corfu, Greece, September 26-27, 1999, Proceedings, pages 259-270, 1999.
J. Boulanger, Ch. Kervrann, and P. Bouthemy. Adaptive spatio-temporal restoration for 4D fluorescence microscopic imaging. In Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI'05), Palm Springs, USA, October 2005.
A. Buades, B. Coll, and J. M. Morel. A review of image denoising algorithms, with a new one. Multiscale Modeling & Simulation, 4(2):490-530, 2005.
A. Buades, B. Coll, and J.-M. Morel. Image and movie denoising by nonlocal means. Technical Report 25, CMLA, 2006.
D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans. Design and construction of a realistic digital brain phantom. IEEE Transactions on Medical Imaging, 17(3):463-468, 1998.
P. Coupe, P. Yger, and C. Barillot. Fast Non Local Means Denoising for 3D MR Images. In R. Larsen, M. Nielsen, and J. Sporring, editors, 9th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2006, volume 4191 of Lecture Notes in Computer Science, pages 33-40, Copenhagen, Denmark, October 2006. Springer.
O. Rioul and M. Vetterli. Wavelets and signal processing. IEEE Signal Processing Magazine, 8(4):14-38, 1991.
G. Beylkin. On the representation of operators in bases of compactly supported wavelets. SIAM Journal of Numerical Analysis, 29:1716-1740, 1992.
G. Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, 1996.
P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
E. Angelini, A. Laine, S. Takuma, J. Holmes, and S. Homma. LV volume quantification via spatio-temporal analysis of real-time 3D echocardiography. IEEE Transactions on Medical Imaging, 20:457-469, 2001.
Y. Jin, E. Angelini, P. Esser, and A. Laine. De-noising SPECT/PET images using cross-scale regularization. Proceedings of the Sixth International Conference on Medical Image Computing and Computer Assisted Intervention, 2879(2):32-40, 2003.
D. Donoho. De-noising by soft-thresholding. IEEE Transactions on Information Theory, 41(3):613-627, 1995.
http://www.bic.mni.mcgill.ca/brainweb/
D. H. Ballard and C. Brown. Computer Vision. Prentice Hall, 1982.
A. Laine and J. Fan. Frame representation for texture segmentation. IEEE Transactions on Image Processing, 5(5):771-780, 1996.
M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4):321-331, 1988.
R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley, 2000.
Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13:600-612, 2004.
L. Guo, X. Liu, Y. Wu, W. Yan, and X. Shen. Research on the segmentation of MRI image based on multi-classification support vector machine. Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 6019-6022, 2007.
W. Souidene, A. Beghdadi, and K. Abed-Meraim. Image denoising in the transformed domain using non local neighborhoods. Volume 2, pages II, 2006.
C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In ICCV '98: Proceedings of the Sixth International Conference on Computer Vision, page 839, Washington, DC, USA, 1998. IEEE Computer Society.
D. Tschumperle. Curvature-preserving regularization of multi-valued images using PDEs. In 9th European Conference on Computer Vision (ECCV'06), pages 428-433, Graz, 2006.
J. van de Weijer and R. van den Boomgaard. Local mode filtering. In CVPR '01: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8-14 December, Kauai, HI, USA, pages 428-433, 2001.
G. Winkler, V. Aurich, K. Hahn, A. Martin, and K. Rodenacker. Noise reduction in images: Some recent edge-preserving methods. Pattern Recognition and Image Analysis, 9:749-766, 1999.