Analysis of Image Restoration Techniques
The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.
Images are produced to record or display useful information. Due to imperfections in the imaging and capturing process, however, the recorded image invariably represents a degraded version of the original scene. There exists a wide range of different degradations which have to be taken into account, for instance noise, geometrical degradations (pincushion distortion), illumination and color imperfections (under-/overexposure, saturation), and blur. Blurring is a form of bandwidth reduction of an ideal image owing to the imperfect image formation process. It can be caused by relative motion between the camera and the original scene, or by an optical system that is out of focus. When aerial photographs are produced for remote sensing purposes, blurs are introduced by atmospheric turbulence, aberrations in the optical system, and relative motion between the camera and the ground. In addition to these blurring effects, noise always corrupts any recorded image. Noise may be introduced by the medium through which the image is created (random absorption or scatter effects), by the recording medium (sensor noise), by measurement errors due to the limited accuracy of the recording system, and by quantization of the data for digital storage. The field of image restoration (sometimes referred to as image deblurring or image deconvolution) is concerned with the reconstruction or estimation of the uncorrupted image from a blurred and noisy one. Essentially, it tries to perform an operation on the image that is the inverse of the imperfections in the image formation system. In the use of image restoration methods, the characteristics of the degrading system and the noise are assumed to be known a priori. The goal of blur identification is to estimate the attributes of the imperfect imaging system from the observed degraded image itself prior to the restoration process.
Figure 1.1 Block Diagram of a Typical Image Processing System (components: Digitizer, Mass Storage, Image Processor, Digital Computer, Operator Console, Display)

The digital image processing system includes the following components.
1.1.1 Digitizer
A digitizer converts an image into a numerical representation suitable for input into a digital computer. Some common digitizers are:
1. Microdensitometer
2. Flying spot scanner
3. Image dissector
4. Vidicon camera
5. Photosensitive solid-state arrays
An image processor performs the functions of image acquisition, storage, preprocessing, segmentation, representation, recognition and interpretation, and finally displays or records the resulting image.
Figure 1.2 Block Diagram of the Fundamental Sequence involved in an Image Processing System (blocks: Problem Domain, Image Acquisition, Preprocessing, Segmentation, Knowledge Base, Result)
Figure 1.2 shows the block diagram of the fundamental sequence involved in an image processing system. As detailed in the diagram, the first step in the process is image acquisition by an imaging sensor in conjunction with a digitizer to digitize the image. The next step is preprocessing, where the image is improved before being fed as input to the other processes. Preprocessing typically deals with enhancing, removing noise, isolating regions, etc. Segmentation partitions an image into its constituent parts or objects. The output of segmentation is usually raw pixel data, which consists of either the boundary of the region or the pixels in the region themselves. Representation is the process of transforming the raw pixel data into a form useful for subsequent processing by the computer. Description deals with extracting features that are basic in differentiating one class of objects from another. Recognition assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. The knowledge about a problem domain is incorporated into the knowledge base. The knowledge base guides the operation of each processing module and also controls the interaction between the modules. Not all modules need necessarily be present for a specific function; the composition of the image processing system depends on its application. The frame rate of the image processor is normally around 25 frames/second.
1.2 Applications
1. Computer vision
2. Face detection
3. Feature detection
4. Lane departure warning system
5. Non-photorealistic rendering
6. Medical image processing
7. Microscope image processing
8. Morphological image processing
1.3

The intensity of an image at any point is determined by two components: 1. The amount of source illumination incident on the scene being viewed, and 2. The amount of light reflected by the objects in the scene. The former is known as the illumination component and the latter as the reflectance component of the image.
The gray level l of an image lies in the range

Lmin <= l <= Lmax          (2.1)

where Lmin is the minimum gray level and Lmax is the maximum gray level, and the only requirement is that Lmin be positive and Lmax be finite. If imin and imax are the minimum and maximum values of the illumination, and rmin and rmax are the minimum and maximum values of the reflectance respectively, we have Lmin = imin * rmin and Lmax = imax * rmax. The interval [Lmin, Lmax] is called the gray scale of the image. Normally this interval is shifted to [0, L], where l = 0 is considered black and l = L is considered white.
Image restoration can be carried out in the spatial or the frequency domain. Frequency domain analysis is performed by considering the individual frequency components of the full range of frequencies that a signal is composed of. A useful application of this method is in problems such as motion blur in images. Since devices such as cameras don't capture an image in an instant, but rather over an exposure time, rapid movement causes the acquired image to contain blur that represents one object occupying multiple positions over the exposure time. In a blurred image, edges appear vague and washed out, meaning that over those areas the frequency components will be similar. Ideally, the edges would be sharp, and that would be reflected by a significant frequency difference along those edges. This project explored the efficiency of using frequency domain techniques to remove motion blur from images. The overall approach consisted of taking an image, converting it into its spatial frequencies, developing a point spread function (PSF) to filter the image with, and then converting the filtered result back into the spatial domain to see whether the blur was removed. This was performed in several steps, each of which built on the understanding gained from the one preceding it. The first step was taking a normal (i.e. not blurred) image, creating a known blurring PSF, and then filtering the image so as to add blur to it. The next step was removing this blur by various methods, using the information about the PSF that was used to create the blur. After that, deblurring was performed without knowing anything about the nature of the blurring PSF, except for its size. Finally, an algorithm was developed for removing blur from an already blurry image with no information regarding the blurring PSF.
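The frequency-domain view described above can be sketched in a few lines of MATLAB. The snippet below is a minimal illustration rather than the project's actual code: it assumes the Image Processing Toolbox, an arbitrary test image ('cameraman.tif'), and illustrative PSF parameters, and it adds a small regularizing constant so the inverse filter never divides by near-zero OTF values.

% Minimal MATLAB sketch of the frequency-domain approach (assumed parameters).
I   = im2double(imread('cameraman.tif'));    % illustrative grayscale test image
psf = fspecial('motion', 21, 11);            % motion-blur PSF: length 21 px, angle 11 deg
otf = psf2otf(psf, size(I));                 % PSF -> optical transfer function at image size

Fblur   = fft2(I) .* otf;                    % blurring = multiplication in the frequency domain
blurred = real(ifft2(Fblur));

% Pseudo-inverse filtering; the small constant avoids division by near-zero OTF
% values and this only works well here because no noise has been added yet.
restored = real(ifft2(Fblur .* conj(otf) ./ (abs(otf).^2 + 1e-6)));

figure;
subplot(1,3,1); imshow(I);        title('Original');
subplot(1,3,2); imshow(blurred);  title('Motion blurred');
subplot(1,3,3); imshow(restored); title('Inverse filtered');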
Block diagram of the degradation/restoration model: the original image f(x, y) passes through the degradation function H and is corrupted by additive noise n(x, y) to give the degraded image g(x, y); the restoration stage estimates f(x, y) from g(x, y).
Motion Blur
3.3 Deblurring
- Better looking image
- Improved identification
- Reduces overlap of image structure, making it easier to identify features in the image (needs high SNR)
- PSF calibration: removes artifacts in the image due to the point spread function (PSF) of the system, e.g. extended halos, lumpy Airy rings, etc.
- Higher resolution
The Wiener filter in the frequency domain is

W(f1, f2) = H*(f1, f2) Sxx(f1, f2) / ( |H(f1, f2)|^2 Sxx(f1, f2) + Snn(f1, f2) )

where Sxx(f1, f2) and Snn(f1, f2) are respectively the power spectra of the original image and the additive noise, and H(f1, f2) is the blurring filter. Here one can see that the Wiener filter has two separate parts, an inverse filtering part and a noise smoothing part: it not only performs the deconvolution by inverse filtering (high-pass filtering) but also removes the noise with low-pass filtering.
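As an illustration of these two parts, the sketch below builds the filter directly in the frequency domain. Since Sxx and Snn are rarely known exactly, a constant noise-to-signal ratio K is assumed in their place; the image name and parameter values are arbitrary examples, and deconvwnr is the toolbox routine that performs the equivalent computation.

% Minimal sketch of the Wiener filter with an assumed constant K = Snn/Sxx.
I       = im2double(imread('cameraman.tif'));
psf     = fspecial('motion', 21, 11);
blurred = imfilter(I, psf, 'conv', 'circular');

H = psf2otf(psf, size(blurred));             % blurring filter H(f1, f2)
K = 0.01;                                    % assumed noise-to-signal power ratio
W = conj(H) ./ (abs(H).^2 + K);              % inverse-filtering part + noise-smoothing part
restored = real(ifft2(W .* fft2(blurred)));  % apply the filter and return to the spatial domain

% Toolbox equivalent under the same assumptions:
% restored = deconvwnr(blurred, psf, K);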
For the iterative restoration methods, the iterations are truncated: convergence is stopped when the error metric reaches the noise limit, i.e. for the noisy observation ĝi = gi + ni, when E ≤ N σ².
From Bayes' theorem, P(gi | fj) = hij, and the object distribution can be expressed iteratively as

fj(t+1) = fj(t) Σi [ hij gi / Σk hik fk(t) ]

so that the LR correction kernel approaches unity as the iterations progress.
You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions, available separately) extend the MATLAB environment to solve particular classes of problems in these application areas.
MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.
The value 0 corresponds to black and 255 to white. The class uint8 only requires roughly 1/8 of the storage compared to the class double. On the other hand, many mathematical functions can only be applied to the double class.
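A short sketch of the two storage classes mentioned above (the file name is only an example; any 8-bit grayscale image would do):

I8 = imread('cameraman.tif');   % class uint8: values 0..255, 1 byte per pixel
Id = im2double(I8);             % class double: values 0..1, 8 bytes per pixel
whos I8 Id                      % compare the memory used by the two classes

Id = Id * 0.5;                  % arithmetic is done on the double version ...
I8 = im2uint8(Id);              % ... and converted back to uint8 for storage or display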
The value assigned to the integer coordinates [m, n], with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is a[m, n]. In fact, in most cases a(x, y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (λ) and time (t).
As an example, an image can be divided into N = 16 rows and M = 16 columns; the value assigned to every pixel is then the average brightness in that pixel, rounded to the nearest integer value.
1. The creation of images involves two main tasks:
   a) Spatial sampling, which determines the resolution of an image
   b) Quantization, which determines how many intensity levels are allowed
2. Spatial sampling determines what level of detail can be seen:
   a) Finer sampling allows for smaller detail
   b) Finer sampling requires more pixels and larger images
3. Quantization determines how smooth the contrast changes in the image are:
   a) Finer quantization will prevent false contouring (artificial edges)
   b) Coarser quantization allows for compressing images
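A minimal sketch of both effects is given below; it assumes the toolbox test image 'cameraman.tif' (256 x 256 pixels, 256 gray levels), and the sampling factor and number of levels are illustrative choices.

I = imread('cameraman.tif');                        % 256x256 image, 256 gray levels

% Coarser spatial sampling: fewer pixels, less visible detail
Isub = imresize(imresize(I, 1/4, 'nearest'), 4, 'nearest');

% Coarser quantization: 256 -> 8 gray levels (may show false contouring)
levels = 8;
Iq = uint8(floor(double(I) / (256/levels)) * (256/levels));

figure;
subplot(1,3,1); imshow(I);    title('Original');
subplot(1,3,2); imshow(Isub); title('1/4 spatial sampling');
subplot(1,3,3); imshow(Iq);   title('8 gray levels');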
Design is concerned with identifying software components, specifying the relationships among them, specifying the software structure, and providing a blueprint for the implementation phase. Modularity is one of the desirable properties of large systems: the system is divided into several parts in such a manner that the interaction between the parts is minimal and clearly specified. The design describes the software components in detail, which helps the implementation of the system and also guides further changes to the system to satisfy future requirements.
LEVEL 0:
Input noisy image → Noise reduction → Output denoised image
LEVEL 1:
Read the input image → Specify the length and angle of the PSF → Create the point spread function → Apply the PSF to the input image → Create the noise with the specified parameters → Apply the PSF and noise to the input image → Removal of noise
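In MATLAB, the Level 1 flow above maps onto a handful of toolbox calls. The sketch below is only an outline of that flow; the file name, PSF length/angle and noise variance are illustrative assumptions rather than the project's exact values.

I   = im2double(imread('cameraman.tif'));            % read the input image
len = 31;  theta = 11;                               % specify length and angle of the PSF
psf = fspecial('motion', len, theta);                % create the point spread function
blurred = imfilter(I, psf, 'conv', 'circular');      % apply the PSF to the input image
noisy   = imnoise(blurred, 'gaussian', 0, 0.001);    % create and apply Gaussian noise
% 'blurred' and 'noisy' are then passed on to the noise/blur removal stage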
6.1 IMPLEMENTATION
Fig 6.1(a) Original Image
It represents the original image as taken, without any noise added.
Fig 6.1(b) Created PSF
It represents the PSF created according to the specified length and angle.
Fig 6.1(c) Blurred Image
It represents the blurred image obtained after the created PSF is applied to the original image.
Fig 6.1(d) Deblurred Image using Wiener filter
It represents the restored image obtained after deblurring with the Wiener filter. The degradation is removed in a better manner than with the other two filters.

Fig 6.1(e) Deblurred Image using Regularized filter
It represents the image obtained after deblurring with the Regularized filter. The degradation is removed well, but the result is worse than that of the Wiener filter.

Fig 6.1(f) Deblurred Image using Lucy-Richardson filter
It represents the image obtained after deblurring with the Lucy-Richardson filter. The Lucy-Richardson result turned out to be the worst of the three, even though the image was acceptable.

Fig 6.1(g) Blurred Image with noise
It represents the blurred image obtained after adding Gaussian noise.

Fig 6.1(h) Denoised Image using Lucy-Richardson filter
It represents the restored image obtained using the Lucy-Richardson filter. The Lucy-Richardson filter gave a very good result, much better than the other two filters, when noise is present.

Fig 6.1(i) Denoised Image using Wiener filter
It represents the image obtained after denoising with the Wiener filter. When Gaussian noise is added to the blur, the Wiener filter gave the worst result.

Fig 6.1(j) Denoised Image using Regularized filter
It represents the image obtained after denoising with the Regularized filter. The Regularized filter was a little better than the Wiener filter, but the result was still a poor-quality image when Gaussian noise is added.
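A compact sketch of the restoration runs behind Fig 6.1(d)-(j) is given below. It assumes the blurring PSF is known, uses an arbitrary test image, and treats the iteration counts, noise variance and noise-to-signal ratio as illustrative tuning values rather than the project's exact settings.

I       = im2double(imread('cameraman.tif'));
psf     = fspecial('motion', 31, 11);
blurred = imfilter(I, psf, 'conv', 'circular');
noisy   = imnoise(blurred, 'gaussian', 0, 0.001);

% Blur only (cf. Fig 6.1(d)-(f))
dWnr  = deconvwnr(blurred, psf);                 % Wiener (no noise term)
dReg  = deconvreg(blurred, psf);                 % regularized
dLucy = deconvlucy(blurred, psf, 10);            % Lucy-Richardson, 10 iterations

% Blur plus Gaussian noise (cf. Fig 6.1(h)-(j))
nsr   = 0.001 / var(I(:));                       % noise-to-signal ratio for the Wiener filter
nWnr  = deconvwnr(noisy, psf, nsr);
nReg  = deconvreg(noisy, psf, 0.001 * numel(I)); % noise power = variance * number of pixels
nLucy = deconvlucy(noisy, psf, 20);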
6.2 TESTING
By varying the parameters length and theta used to create the PSF, the corresponding PSNR values are computed using the equations

PSNR = 10 log10( 255^2 / MSE )          (6.1)
MSE = Σ [ f(i, j) - f̂(i, j) ]^2 / (row * col)          (6.2)

where f is the original image, f̂ is the restored image, and row and col are the image dimensions.
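Equations (6.1) and (6.2) translate directly into a small helper function. psnr_255 is a hypothetical name chosen here to avoid clashing with the toolbox psnr function, and the two inputs are assumed to be same-size grayscale images on the 0-255 scale.

function p = psnr_255(original, restored)
    % PSNR between two grayscale images on the 0..255 scale (equations 6.1-6.2)
    d          = double(original) - double(restored);
    [row, col] = size(d);
    mse        = sum(d(:).^2) / (row * col);   % equation (6.2)
    p          = 10 * log10(255^2 / mse);      % equation (6.1)
end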
PSNR values (PSF blur only, no noise):

REGULARIZED    LUCY-RICHARD
23.2345        21.006
23.2345        20.7086
25.5186        23.8316
22.6033        19.8786

PSNR values (PSF blur with Gaussian noise):

REGULARIZED    LUCY-RICHARD
12.6105        18.6616
12.6044        18.4019
12.4725        18.2590
12.8124        18.2990
7.1. WIENER FILTER:
The Wiener filter's working principle is based on the least-squares restoration problem [4]. It is a method of restoring an image in the presence of both blur and noise. The frequency-domain expression for the (causal) Wiener filter is

W(s) = H(s) / F_x^+(s),   H(s) = [ F_{x,s}(s) e^{αs} / F_x^-(s) ]_+
7.2. REGULARIZED FILTER:
The regularized filter is a deblurring method that restores an image using the deconvolution function deconvreg, which is effective when only limited information is known about the additive noise. Two ingredients of this approach are:
- Reparameterization of the object with a smoothing kernel (sieve function or low-pass filter), F(r) = (r)² * a(r)
- Truncated iterations: convergence is stopped when the error metric reaches the noise limit, i.e. for the noisy observation ĝi = gi + ni, when E ≤ N σ²
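A minimal sketch of this idea with deconvreg is shown below; the only information assumed about the noise is its variance, from which the noise-power limit is formed, and the file name and parameter values are again illustrative.

I      = im2double(imread('cameraman.tif'));
psf    = fspecial('motion', 31, 11);
sigma2 = 0.001;                                          % assumed noise variance
noisy  = imnoise(imfilter(I, psf, 'conv', 'circular'), 'gaussian', 0, sigma2);

np = sigma2 * numel(I);                                  % noise power used as the error limit
[dReg, lagra] = deconvreg(noisy, psf, np);               % restored image and Lagrange multiplier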
7.3. LUCY-RICHARDSON FILTER:
The observed image is modeled as ci = Σj pij uj, where pij is the point spread function (the fraction of light coming from true location j that is observed at position i), uj is the pixel value at location j in the latent image, and ci is the observed value at pixel location i. The statistics are performed under the assumption that the uj are Poisson distributed, which is appropriate for photon noise in the data. The basic idea is to calculate the most likely uj given the observed ci and the known pij.
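To make the update concrete, the sketch below implements the multiplicative Lucy-Richardson iteration directly with imfilter (deconvlucy is the toolbox equivalent); the image, PSF parameters and iteration count are illustrative assumptions.

c   = im2double(imread('cameraman.tif'));
psf = fspecial('motion', 31, 11);
c   = imfilter(c, psf, 'conv', 'circular');            % observed (blurred) image

u      = ones(size(c)) * mean(c(:));                   % flat initial estimate of the latent image
psfHat = rot90(psf, 2);                                % flipped PSF, i.e. p_ji
for t = 1:20
    est   = imfilter(u, psf, 'conv', 'circular');      % predicted observation (p * u)
    ratio = c ./ max(est, eps);                        % observed values divided by the prediction
    u     = u .* imfilter(ratio, psfHat, 'conv', 'circular');   % multiplicative RL update
end
% As the estimate converges, the correction factor approaches 1 everywhere.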
It was surprising that the Lucy-Richardson method produced the worst results in this instance, as it is a nonlinear and supposedly more advanced technique. However, after Gaussian noise was added to the image in addition to blur, the Lucy-Richardson algorithm actually performed the best. This helps make sense of the previous result: when only blur is added, only a linear modification is being made, and so the linear Wiener restoration technique works best. Introducing Gaussian noise, and thus a degree of spatial nonlinearity, caused the nonlinear Lucy-Richardson method to produce the best results. As information about the PSF that was used to perform the blurring was removed from the algorithms, the efficacy of blur removal dropped.
The PSNR values of each technique are presented in Table 7.3 below.
Table 7.3 Comparison of PSNR values

PSF                WIENER                   REGULARIZED              LUCY-RICHARD
LENGTH  THETA      no noise    with noise   no noise    with noise   no noise    with noise
31      11         25.3295     12.5188      23.2345     12.6105      21.006      18.6616
55      36         23.9368     12.4988      23.2354     12.6044      20.7086     18.4019
14      25         26.8267     12.2067      25.5186     12.4725      23.8316     18.2590
65      63         23.2008     12.7937      22.6033     12.8124      19.8786     18.2990
From Table 7.3 above, it is observed that the Wiener filter gave the best result when only the PSF blur is applied (no noise). When noise is added along with the PSF blur, the Lucy-Richardson filter gave the best result compared to the other two.
CHAPTER 8
CONCLUSION
In this paper, two main approaches were used to evaluate the results of the aforementioned procedures. The first was a simple qualitative measure of blur removal. A known amount of blur, but no noise, was added to an image, and then the image was filtered to remove this known amount of blur using the Wiener, regularized and Lucy-Richardson deblurring methods. The regularized and Wiener techniques produced what appeared to be the best results. It was surprising that the Lucy-Richardson method produced the worst results in this instance. However, after Gaussian noise was added to the image in addition to blur, the Lucy-Richardson algorithm actually performed the best.
CHAPTER 9
REFERENCES
1. A. K. Jain, "Fundamentals of Digital Image Processing", Englewood Cliffs, N.J.: Prentice Hall, 2006.
2. I. Aizenberg, E. Myasnikova, M. Samsonova and J. Reinitz, "Temporal Classification of Drosophila Segmentation Gene Expression Patterns by the Multi-Valued Neural Recognition Method", Journal of Mathematical Biosciences, Vol. 176, No. 1, pp. 145-159, 2002.
3. G. Arce and R. Foster, "Detail-preserving ranked-order based filters for image processing", IEEE Trans. Acoustics, Speech, Signal Processing, Vol. 37, pp. 83-98, 1989.
4. G. Pavlovic and A. M. Tekalp, "Maximum likelihood parametric blur identification based on a continuous spatial domain model", IEEE Trans. Image Processing, Vol. 1, No. 10, pp. 496-504, Oct. 1992.
5. T. S. Huang, G. J. Yang and G. Y. Tang, "A Fast Two-Dimensional Median Filtering Algorithm", IEEE Trans. Acoustics, Speech, Signal Processing, Vol. ASSP-27, No. 1, Feb. 1979.
6. M. M. Chang, A. M. Tekalp and A. T. Erdem, "Blur identification using the bispectrum", IEEE Trans. Acoustics, Speech, Signal Processing, Vol. 39, No. 5, pp. 2323-2325, Oct. 1991.
7. R. Neelamani, H. Choi and R. G. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems", IEEE Trans. Signal Processing, Vol. 52, No. 2, pp. 418-433, 2003.
8. Prieto (ed.), Bio-inspired Applications of Connectionism, Lecture Notes in Computer Science, Vol. 2085, Springer-Verlag, Berlin Heidelberg New York, 2001, pp. 369-374.
9. R. L. Lagendijk, J. Biemond and D. E. Boekee, "Blur identification using the expectation-maximization algorithm", in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, Vol. 37, Dec. 1989, pp. 1397-1400.
10. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd edition, Addison-Wesley, 2004.
11. Satyadhyan Chickerur and Aswatha Kumar, "A Biologically Inspired Filter For Image Restoration", International Journal of Recent Trends in Engineering, Vol. 2, No. 2, November 2009.
12. W. Y. Han and J. C. Lin, "Minimum-maximum exclusive mean (MMEM) filter to remove impulse noise from highly corrupted images", Electronics Letters, Vol. 33, pp. 124-125, 1997.