Analysis of Image Restoration Techniques


CHAPTER 1 INTRODUCTION

The term digital image processing refers to the processing of a two-dimensional picture by a digital computer; in a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.

Images are produced to record or display useful information. Due to imperfections in the imaging and capturing process, however, the recorded image invariably represents a degraded version of the original scene. There exists a wide range of different degradations which have to be taken into account, for instance noise, geometrical degradations (pincushion distortion), illumination and color imperfections (under-/overexposure, saturation), and blur. Blurring is a form of bandwidth reduction of an ideal image owing to the imperfect image formation process. It can be caused by relative motion between the camera and the original scene, or by an optical system that is out of focus. When aerial photographs are produced for remote sensing purposes, blurs are introduced by atmospheric turbulence, aberrations in the optical system, and relative motion between the camera and the ground. In addition to these blurring effects, noise always corrupts any recorded image. Noise may be introduced by the medium through which the image is created (random absorption or scatter effects), by the recording medium (sensor noise), by measurement errors due to the limited accuracy of the recording system, and by quantization of the data for digital storage. The field of image restoration (sometimes referred to as image deblurring or image deconvolution) is concerned with the reconstruction or estimation of the uncorrupted image from a blurred and noisy one. Essentially, it tries to perform an operation on the image that is the inverse of the imperfections in the image formation system. In the use of image restoration methods, the characteristics of the degrading

system and the noise are assumed to be known a priori. The goal of blur identification is to estimate the attributes of the imperfect imaging system from the observed degraded image itself prior to the restoration process.

1.1 THE IMAGE PROCESSING SYSTEM

A typical digital image processing system is shown in Figure 1.1.

[Figure 1.1 shows the following components: Digitizer, Image Processor, Digital Computer, Mass Storage, Operator Console, Display, and Hard Copy Device.]

Figure 1.1 Block Diagram of a Typical Image Processing System

The digital image processing system includes the following components.

1.1.1 Digitizer
A digitizer converts an image into a numerical representation suitable for input into a digital computer. Some common digitizers are:

1. Microdensitometer
2. Flying spot scanner
3. Image dissector
4. Vidicon camera
5. Photosensitive solid-state arrays

1.1.2 Image Processor

An image processor performs the functions of image acquisition, storage, preprocessing, segmentation, representation, recognition and interpretation, and finally displays or records the resulting image.

[Figure 1.2 shows the following stages, driven by the problem domain and a shared knowledge base: Image Acquisition, Preprocessing, Segmentation, Representation & Description, Recognition & Interpretation, and Result.]

Figure 1.2 Block Diagram of Fundamental Sequence involved in an Image processing system

Figure 1.2 shows the block diagram of the fundamental sequence involved in an image processing system. As detailed in the diagram, the first step in the process is image acquisition by an imaging sensor in conjunction with a digitizer to digitize the image. The next step is the preprocessing step, where the image is improved before being fed as an input to the other processes. Preprocessing typically deals with enhancing, removing noise, isolating regions, etc. Segmentation partitions an image into its constituent parts or objects. The output of segmentation is usually raw pixel data, which consists of either the boundary of the region or the pixels in the region themselves. Representation is the process of transforming the raw pixel data into a form useful for subsequent processing by the computer. Description deals with extracting features that are basic in differentiating one class of objects from another. Recognition assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. The knowledge about a problem domain is incorporated into the knowledge base. The knowledge base guides the operation of each processing module and also controls the interaction between the modules. Not all modules need necessarily be present for a specific function. The composition of the image processing system depends on its application. The frame rate of the image processor is normally around 25 frames/second.

1.2 APPLICATIONS

1. Computer vision
2. Face detection
3. Feature detection
4. Lane departure warning system
5. Non-photorealistic rendering
6. Medical image processing
7. Microscope image processing
8. Morphological image processing

1.3 FUNDAMENTALS OF IMAGE PROCESSING

The ultimate goal of any image processing technique is to help an observer interpret the content of an image.

1.3.1 Illumination and Reflectance


The term image refers to a two-dimensional light intensity function, denoted by f(x, y), where the value or amplitude of f at spatial coordinates (x, y) gives the intensity of the image at that point. Light is a form of energy, and f(x, y) is hence nonzero and finite. The images people perceive in day-to-day life consist of light reflected from objects. The basic nature of the image may be characterized by two components:

1. The amount of source light incident on the scene being viewed, and
2. The amount of light reflected by the objects in the scene.

The former is known as the illumination component and the latter as the reflectance component of the image.

1.3.2 Gray Scale


The intensity of a monochrome image f at coordinates (x, y) is known as the gray level l of the image at that point. It is evident that

Lmin <= l <= Lmax    --------- (2.1)

where Lmin is the minimum gray level and Lmax is the maximum gray level. The only requirement is that Lmin be positive and Lmax be finite. If imin and imax are the minimum and maximum values of the illumination, and rmin and rmax are the minimum and maximum values of the reflectance respectively, we have

Lmin = imin rmin    -------- (2.2)
Lmax = imax rmax    -------- (2.3)

The interval [Lmin, Lmax] is called the gray scale of the image. Normally the image is shifted to the interval [0, L], where l = 0 is considered black and l = L is considered white.

1.3.3 Classes in Image Processing


An image processing system may handle a number of problems and have a number of applications, but it mostly involves the following processes, known as the basic classes in image processing:

1. Image Representation and Description
2. Image Enhancement
3. Image Restoration
4. Image Recognition and Interpretation
5. Image Segmentation
6. Image Reconstruction
7. Image Data Compression

CHAPTER 2 LITERATURE SURVEY


A literature survey is done in order to make a feasibility study of the existing problems and to formalize the organization's requirements. This process forms the basis of software development and validation by understanding the domain for the software as well as the required functional behaviour and performance. The essential purpose of this phase is to find the need and to define the problem that needs to be solved. This chapter gives a brief discussion of the detailed study of the proposed system.

2.1 Problem Definition


Almost every system or signal that scientists and engineers deal with can be viewed in several different ways. They can exist as a function of space, defined by physical parameters like length, width, height, color intensity, and others. They can exist as a function of time, defined by changes in any measurable characteristic. They can also exist as a function of frequency, defined by the composition of periodicities that make up light, sound, space, or any other dynamic system or signal. Fourier and Laplace transforms are the functions used for the conversion between these domains.

Frequency-domain analysis is performed by considering the individual frequency components of the full range of frequencies that such a signal comprises. A useful application of this method is in considering problems like motion blur in images. Since devices such as cameras don't capture an image in an instant, but rather over an exposure time, rapid movements cause the acquired image to have blur that represents one object occupying multiple positions over this exposure time. In a blurred image, edges appear vague and washed out, meaning that over those areas their frequency components will be similar. Ideally, the edges would be sharp, and that would be reflected by a significant frequency difference along those edges. This project explored the efficiency of using frequency-domain techniques to remove motion blur from images. The overall approach consisted of taking an image, converting it into its spatial frequencies, developing a point spread function (PSF) to filter the image with, and then converting the filtered result back into the spatial domain to see if the blur was removed. This was performed in several steps, each of which built on a greater understanding of the one preceding it. The first step was taking a normal (i.e. not blurred) image, creating a known blurring PSF, and then filtering the image so as to add blur to it. The next step was removing this blur by various methods, but with the information about the PSF that was used to create the blur. After that, deblurring was performed without knowing anything about the nature of the blurring PSF except for its size. Finally, an algorithm was developed for removing blur from an already blurry image with no information regarding the blurring PSF.
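The first step of the approach above (blur with a known PSF in the frequency domain, then invert it) can be sketched numerically. This is an illustrative NumPy version, not the project's MATLAB code; the 32x32 random image and the 3-pixel horizontal motion PSF are arbitrary choices. With the PSF known exactly and no noise added, plain inverse filtering removes the blur almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((32, 32))            # "sharp" test image

psf = np.zeros((32, 32))
psf[0, :3] = 1.0 / 3.0              # horizontal motion blur over 3 pixels

F = np.fft.fft2(f)
H = np.fft.fft2(psf)                # frequency response of the PSF
g = np.real(np.fft.ifft2(F * H))    # blurred image (circular convolution)

# With the PSF known and no noise, plain inverse filtering restores f.
f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) / H))
print(np.allclose(f, f_hat))        # True: blur removed almost exactly
```

In the presence of noise this direct division fails, which is what motivates the Wiener and regularized filters discussed later.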

2.2 DRAWBACKS IN EARLIER TECHNIQUES


In earlier methods, motion length estimation was proposed only for noise-free images. In addition, some methods cannot handle motion lengths of more than 35 pixels. A major drawback of existing deconvolution methods for images is that they suffer from poor convergence properties. Another disadvantage is that some methods make restrictive assumptions about the PSF.

CHAPTER 3 PROJECT DESCRIPTION

3.1 DEGRADATION MODEL:


Capturing an image exactly as it appears in the real world is very difficult, if not impossible. In photography or imaging systems, degradations are caused by the graininess of the emulsion, motion blur, and camera focus problems. The result of all these degradations is that the image is an approximation of the original. The degradation process can adequately be described by a linear spatial model. The original input is a two-dimensional (2D) image f(x, y). This image is operated on by the system H, and after the addition of noise n(x, y) one obtains the degraded image g(x, y). Digital image restoration may be viewed as a process in which we try to obtain an approximation to f(x, y), given g(x, y) and H; after applying restoration filters we obtain the restored image f^(x, y).

[Fig 3.1 shows the degradation/restoration model: the input f(x, y) passes through the degradation function H, the noise n(x, y) is added to give the degraded image g(x, y), and the restoration filter(s) then produce the restored image f^(x, y).]

Fig 3.1 DEGRADATION MODEL
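The linear spatial model of Fig 3.1 can be written as g(x, y) = h(x, y) * f(x, y) + n(x, y). A minimal numerical sketch of this degradation step, with an arbitrary 16x16 image, a 3x3 averaging kernel for H and mild Gaussian noise (all illustrative values, not the project's data):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
f = rng.random((16, 16))                     # original image f(x, y)
h = np.full((3, 3), 1.0 / 9.0)               # degradation function H (3x3 average)
n = 0.01 * rng.standard_normal((16, 16))     # additive noise n(x, y)

# Degraded observation: blur (circular convolution) plus noise.
g = convolve2d(f, h, mode="same", boundary="wrap") + n
print(g.shape)                               # (16, 16)
```

Restoration then amounts to estimating f from g given (or having estimated) h, which is what the filters of Section 3.4 do.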

3.2 Blurring Types


In digital images there are three common types of blur effects: Average Blur, Gaussian Blur, and Motion Blur.

3.2.1 Average Blur


The Average blur is one of several tools we can use to remove noise and specks in an image. Use it when noise is present over the entire image. This type of blurring can be distributed in the horizontal and vertical directions, and can be a circular averaging over a radius R, which is evaluated by the formula:

R = sqrt(g^2 + f^2)

where g is the horizontal blurring size, f is the vertical blurring size, and R is the radius of the circular average blur.

3.2.2 Gaussian Blur


The Gaussian Blur effect is a filter that blends a specific number of pixels incrementally, following a bell-shaped curve. The blurring is dense in the center and feathers at the edges. Apply Gaussian Blur to an image when we want more control over the blur effect.

3.2.3 Motion Blur


The Motion Blur effect is a filter that makes the image appear to be moving by adding a blur in a specific direction. The motion can be controlled by angle or direction (0 to 360 degrees, or -90 to +90) and/or by distance or intensity in pixels (0 to 999), depending on the software used.
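The three blur types above are fully described by their kernels (PSFs). The following sketch constructs one example of each with NumPy; the sizes, the sigma of 1.0 and the motion length of 5 are arbitrary illustrative choices (MATLAB's fspecial builds the analogous kernels).

```python
import numpy as np

# Average (box) blur: every coefficient equal, summing to 1.
avg = np.full((5, 5), 1.0 / 25.0)

# Gaussian blur: bell-shaped weights, densest at the center (sigma = 1.0).
x = np.arange(-2, 3)
g1d = np.exp(-(x ** 2) / 2.0)
gauss = np.outer(g1d, g1d)
gauss /= gauss.sum()

# Motion blur: uniform weights along one direction (here horizontal, length 5).
motion = np.zeros((5, 5))
motion[2, :] = 1.0 / 5.0

for k in (avg, gauss, motion):
    print(round(k.sum(), 6))   # each kernel is normalized to 1.0
```

Normalization matters: a kernel summing to 1 blurs without changing the overall brightness of the image.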

3.3 Deblurring

Deblurring offers the following benefits:

1. Better looking image
2. Improved identification: reduced overlap of image structure makes it easier to identify features in the image (needs high SNR)
3. PSF calibration: removes artifacts in the image due to the point spread function (PSF) of the system, i.e. extended halos, lumpy Airy rings, etc.
4. Higher resolution

3.3.1 Deblurring Model


A blurred or degraded image can be approximately described by the equation g = h*f + n, where g is the blurred image, h is the distortion operator, called the point spread function (PSF), f is the original true image, and n is additive noise, introduced during image acquisition, that corrupts the image.

Point Spread Function (PSF): the point spread function is the degree to which an optical system blurs (spreads) a point of light. The PSF is the inverse Fourier transform of the optical transfer function (OTF); equivalently, the OTF is the Fourier transform of the PSF. In the frequency domain, the OTF describes the response of a linear, position-invariant system to an impulse. Image deblurring is an inverse problem which is used to recover an image that has suffered a linear degradation. The blurring degradation can be space-invariant or space-variant [2]. Image deblurring methods can be divided into two classes: non-blind, in which the blurring operator is known, and blind, in which the blurring operator is unknown.
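The PSF/OTF relationship stated above can be verified directly: the OTF is the Fourier transform of the PSF, and the PSF is recovered by the inverse transform. A quick numerical check with an arbitrary normalized 8x8 PSF (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)
psf = rng.random((8, 8))
psf /= psf.sum()                       # normalize: PSF weights sum to 1

otf = np.fft.fft2(psf)                 # OTF = Fourier transform of the PSF
psf_back = np.real(np.fft.ifft2(otf))  # PSF = inverse transform of the OTF

print(np.allclose(psf, psf_back))      # True
```

Note that the DC term otf[0, 0] equals the sum of the PSF, i.e. 1 for a normalized kernel.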

3.4 DEBLURRING TECHNIQUES

3.4.1 Wiener Filter Deblurring Technique


The Wiener filter isolates lines in a noisy image by finding an optimal trade-off between inverse filtering and noise smoothing. It removes the additive noise and inverts the blurring simultaneously, so as to emphasize any lines which are hidden in the image. This filter operates in the Fourier domain, making the suppression of noise easier, as the noise components are removed across high and low frequencies to leave a sharp image. Using Fourier transforms means the noise is easier to eliminate and the actual line embedded in noise easier to isolate, making it a slightly more effective method of filtering. The Wiener filter in the Fourier domain can be expressed as follows:

W(f1, f2) = H*(f1, f2) Sxx(f1, f2) / (|H(f1, f2)|^2 Sxx(f1, f2) + Snn(f1, f2))

where Sxx(f1, f2) and Snn(f1, f2) are respectively the power spectra of the original image and the additive noise, and H(f1, f2) is the blurring filter. Here one can see that the Wiener filter has two separate parts: an inverse filtering part and a noise-smoothing part. It not only performs the deconvolution by inverse filtering (high-pass filtering) but also removes the noise with low-pass filtering.
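A minimal frequency-domain sketch of this filter. Following common practice (and MATLAB's deconvwnr when called with a scalar), the ratio Snn/Sxx is approximated here by a single noise-to-signal constant; the image, PSF and noise level are illustrative, not the project's data.

```python
import numpy as np

def wiener_deblur(g, psf, nsr):
    """Wiener deconvolution with a constant noise-to-signal ratio nsr."""
    H = np.fft.fft2(psf, s=g.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(g) * W))

rng = np.random.default_rng(3)
f = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, :5] = 0.2                                   # horizontal motion blur
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf)))
g += 0.001 * rng.standard_normal(g.shape)          # mild additive noise

f_hat = wiener_deblur(g, psf, nsr=1e-3)
err_blur = np.mean((g - f) ** 2)
err_rest = np.mean((f_hat - f) ** 2)
print(err_rest < err_blur)                         # restoration beats the blurred image
```

When nsr is 0 this degenerates to plain inverse filtering; the nonzero nsr is what keeps near-zero frequencies of H from amplifying the noise.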

3.4.2 Regularized Filter Deblurring Technique


The regularized filter is a deblurring method that deblurs an image using the deconvolution function deconvreg, which is effective when only limited information is known about the additive noise. It involves reparameterization of the object with a smoothing kernel (sieve function or low-pass filter) and truncated iterations: convergence is stopped when the error metric reaches the noise limit, gi = g^i + ni, such that E <= ||N||^2.
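A non-iterative sketch in the spirit of regularized (constrained least-squares) deconvolution, as MATLAB's deconvreg computes it: the inverse filter is damped by a smoothness penalty lam * |C|^2, where C is a Laplacian high-pass operator. The image, PSF and lam below are illustrative choices, not the project's values.

```python
import numpy as np

def regularized_deblur(g, psf, lam):
    """Constrained least-squares deconvolution with a Laplacian penalty."""
    H = np.fft.fft2(psf, s=g.shape)
    lap = np.zeros(g.shape)                       # periodic Laplacian kernel
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    C = np.fft.fft2(lap)
    R = np.conj(H) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * R))

rng = np.random.default_rng(4)
f = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0                           # mild defocus-like blur
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psf)))
g += 0.001 * rng.standard_normal(g.shape)

f_hat = regularized_deblur(g, psf, lam=1e-3)
print(np.mean((f_hat - f) ** 2) < np.mean((g - f) ** 2))
```

The regularization weight lam plays the role of the noise limit above: larger lam smooths more and stops the solution from fitting the noise.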

3.4.3 Lucy-Richardson Algorithm Technique


The Lucy-Richardson algorithm is based on discrete convolution. From Bayes' theorem, P(gi|fj) = hij, and the object distribution can be expressed iteratively, so that the LR kernel approaches unity as the iterations progress.
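A compact sketch of that iteration (an illustrative NumPy version, not the project's MATLAB deconvlucy call): each pass multiplies the current estimate by a correction factor that tends to unity as the estimate converges. The test image, Gaussian PSF and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, psf, iters=30):
    """Richardson-Lucy deconvolution of a non-negative image g."""
    f_hat = g.copy()                              # start from the observation
    psf_flip = psf[::-1, ::-1]                    # mirrored PSF
    for _ in range(iters):
        reblurred = fftconvolve(f_hat, psf, mode="same")
        ratio = g / np.maximum(reblurred, 1e-12)  # avoid division by zero
        f_hat = f_hat * fftconvolve(ratio, psf_flip, mode="same")
    return f_hat

f = np.zeros((32, 32))
f[12:20, 12:20] = 1.0                             # bright square on black
x = np.arange(-2, 3)
k = np.exp(-x ** 2 / 2.0)
psf = np.outer(k, k)
psf /= psf.sum()
g = fftconvolve(f, psf, mode="same")              # blurred observation

f_hat = richardson_lucy(g, psf)
print(np.mean((f_hat - f) ** 2) < np.mean((g - f) ** 2))
```

Because the update is multiplicative, a non-negative starting estimate stays (essentially) non-negative, which matches the Poisson photon-counting model behind the algorithm.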


CHAPTER 4 SYSTEM REQUIREMENTS AND SOFTWARE DESCRIPTION

4.1 Hardware Requirements


Processor : Pentium or higher
RAM : 128 MB SDRAM
Hard disk : 40 GB or higher

4.2 Software Requirements


Operating System : Windows 7
Coding Language : MATLAB

4.3 About Software


4.3.1 Introduction to MATLAB
MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. Using MATLAB, you can solve technical computing problems faster than with traditional programming languages, such as C, C++ and Fortran.

You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions, available separately) extend the MATLAB environment to solve particular classes of problems in these application areas.

MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.


4.3.2 Working formats in MATLAB


In order to start working with an image in MATLAB, for example to perform a wavelet transform on it, we must convert it into a suitable format. This section explains the common formats.

4.3.3 Intensity Image (Gray scale image)


This is the equivalent of a "gray scale image" and this is the image we will mostly work with in this course. It represents an image as a matrix where every element has a value corresponding to how bright/dark the pixel at the corresponding position should be coloured. There are two classes used to represent the brightness of a pixel. The double class (or data type) assigns a floating-point number ("a number with decimals") between 0 and 1 to each pixel; the value 0 corresponds to black and the value 1 corresponds to white. The other class is called uint8, which assigns an integer between 0 and 255 to represent the brightness of a pixel.

The value 0 corresponds to black and 255 to white. The class uint8 only requires roughly 1/8 of the storage compared to the class double. On the other hand, many mathematical functions can only be applied to the double class.
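The uint8/double convention above can be sketched with NumPy (in MATLAB the equivalent conversions are im2double and im2uint8); the tiny 1x3 test image is illustrative.

```python
import numpy as np

img_u8 = np.array([[0, 128, 255]], dtype=np.uint8)   # black, mid-gray, white

img_d = img_u8.astype(np.float64) / 255.0            # uint8 -> double in [0, 1]
img_back = np.round(img_d * 255).astype(np.uint8)    # double -> uint8

print(img_d.min(), img_d.max())                      # 0.0 1.0
print(img_d.itemsize // img_u8.itemsize)             # 8: double needs 8x the storage
```

This makes the storage trade-off concrete: uint8 uses one byte per pixel against eight for double, at the cost of integer-only arithmetic.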

4.3.4 Indexed Image:


This is a practical way of representing colour images. In this course we will mostly work with gray scale images, but once you have learned how to work with a gray scale image you will also know the principle of how to work with colour images. An indexed image stores an image as two matrices. The first matrix has the same size as the image, with one number for each pixel. The second matrix is called the color map, and its size may be different from the image. Each number in the first matrix is an instruction telling which entry of the color map matrix to use.
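The two-matrix scheme just described can be sketched as a lookup: each pixel's number selects a row of the color map. The 2x2 index matrix and three-entry map below are illustrative values only.

```python
import numpy as np

colormap = np.array([[0.0, 0.0, 0.0],    # entry 0: black
                     [1.0, 0.0, 0.0],    # entry 1: red
                     [1.0, 1.0, 1.0]])   # entry 2: white

index = np.array([[0, 1],
                  [1, 2]])               # same size as the image

rgb = colormap[index]                    # look each pixel up in the color map
print(rgb.shape)                         # (2, 2, 3)
```

Note the color map can stay small (here 3 rows) regardless of how large the index matrix grows, which is the point of the representation.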


4.3.5 Multi-frame Image


In some applications we want to study a sequence of images. This is very common in biological and medical imaging where you might study a sequence of slices of a cell. For these cases, the Multi-frame format is a convenient way of working with a sequence of images. In case you choose to work with biological imaging later on in this course, you may use this format.

4.3.6 Digital Image Definitions:


A digital image a[m, n] described in a 2D discrete space is derived from an analog image a(x, y) in a 2D continuous space through a sampling process that is frequently referred to as digitization. For now we will look at some basic definitions associated with the digital image. The effect of digitization is shown in Fig 4.3.6, where the 2D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel.

Fig 4.3.6 Effect of Digitization

The value assigned to the integer coordinates [m, n], with {m = 0, 1, 2, ..., M-1} and {n = 0, 1, 2, ..., N-1}, is a[m, n]. In fact, in most cases a(x, y), which we might consider to be the physical signal that impinges on the face of a 2D sensor, is actually a function of many variables including depth (z), color (λ), and time (t).

The image shown has been divided into N = 16 rows and M = 16 columns. The value assigned to every pixel is the average brightness in the pixel, rounded to the nearest integer value.

The creation of images involves two main tasks:

1. Spatial sampling, which determines the resolution of an image:
   a) Spatial sampling determines what level of detail can be seen
   b) Finer sampling allows for smaller detail
   c) Finer sampling requires more pixels and larger images

2. Quantization, which determines how many intensity levels are allowed and how smooth the contrast changes in the image are:
   a) Finer quantization will prevent false contouring (artificial edges)
   b) Coarser quantization allows for compressing images
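The quantization point can be made concrete: the sketch below reduces a 256-level gray scale to a handful of levels by flooring each pixel to its level boundary. The ramp image and the choice of 4 levels are illustrative.

```python
import numpy as np

def quantize(img, levels):
    """Reduce a uint8 image to the given number of intensity levels."""
    step = 256 // levels
    return (img // step) * step          # map each pixel to its level floor

ramp = np.arange(256, dtype=np.uint8)    # smooth 0..255 gradient
q4 = quantize(ramp, 4)                   # only 4 intensity levels survive

print(sorted(set(q4.tolist())))          # [0, 64, 128, 192]
```

On a smooth gradient like this ramp, the coarse steps are exactly the false contours (artificial edges) mentioned above.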


CHAPTER 5 SYSTEM DESIGN

Design is concerned with identifying software components, specifying relationships among the components, specifying software structure, and providing a blueprint for the documentation phase. Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between parts is minimal and clearly specified. Design explains the software components in detail. This helps the implementation of the system and guides further changes in the system to satisfy future requirements.

5.1 Input Design


Input design is the process of converting user-originated inputs to a computer-based format. Input design is one of the most expensive phases of the operation of a computerized system and is often a major problem of a system. In this project, the input is an original image corrupted by salt-and-pepper noise.

5.2 Output Design


Output design generally refers to the results and information generated by the system. For many end-users, output is the main reason for developing the system and the basis on which they evaluate the usefulness of the application. In this project, the output is the denoised image after the removal of salt-and-pepper noise.

5.3 Code Design


The code design should be such that with less coding we can achieve more results. The speed of the system will be greater if the code is smaller. Whether the data in the system is usable and readable by the system depends on the coding. In this project, the coding is done such that a denoising function is written for the proposed algorithm, and the salt-and-pepper noise contained in the image is removed.


5.4 Data Flow Diagram


A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design). On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process. A DFD provides no information about the timing or ordering of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD).

LEVEL 0:
Input Noisy Image -> Noise Reduction -> Output Denoised Image

Fig 5.4(a) LEVEL 0-Data flow diagram


LEVEL 1:
Read the input image -> Specify the length and angle of the PSF -> Create the Point Spread Function -> Create the noise with specified parameters -> Apply the PSF and noise to the input image -> Apply the restoring function with the created PSF -> Restored Image

Fig 5.4(b) LEVEL 1-Data flow Diagram

5.5 STATECHART DIAGRAM


A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems. State diagrams require that the system described is composed of a finite number of states; sometimes, this is indeed the case, while at other times this is a reasonable abstraction. There are many forms of state diagrams, which differ slightly and have different semantics.


Read the noisy image -> Identify the presence of noise -> Removal of noise -> Comparison with other filters

Fig 5.5 State chart diagram

5.6 ACTIVITY DIAGRAM


Activity diagrams describe the workflow behavior of a system. Activity diagrams are similar to state diagrams because activities are the state of doing something. The diagrams describe the state of activities by showing the sequence of activities performed. Activity diagrams can show activities that are conditional or parallel. Activity diagrams should be used in conjunction with other modeling techniques such as interaction diagrams and state diagrams. The main reason to use activity diagrams is to model the workflow behind the system being designed. Activity diagrams are also useful for: analyzing a use case by describing what actions need to take place and when they should occur; describing a complicated sequential algorithm; and modeling applications with parallel processes.

Read the noisy image -> Identify the presence of noise -> Removal of noise -> Compare with other filters

Fig 5.6 ACTIVITY DIAGRAM


CHAPTER 6 IMPLEMENTATION AND TESTING

6. 1 IMPLEMENTATION

Original Image

Fig 6.1(a) Original Image

It represents the original image, taken without any noise added to it.


Created PSF

Fig 6.1(b) Created PSF

It represents the PSF created according to the specified length and angle.

Blurred image

Fig 6.1(c) Blurred Image

It represents the blurred image after the created PSF is applied to it.


Deblurred Image using Wiener filter

Fig 6.1(d) Deblurred Image using Wiener filter

It represents the restored image after denoising using the Wiener filter. The noise is removed in a better manner when compared with the other two filters.

Deblurred Image using Regularized filter

Fig 6.1(e) Deblurred Image using Regularized filter


It represents the image obtained after denoising using the Regularized filter. The noise is removed in a better manner, but compared with the Wiener result it is worse.

Deblurred Image using Lucy-Richardson filter

Fig 6.1(f) Deblurred Image using Lucy-Richardson filter

It represents the image obtained after denoising using the Lucy-Richardson filter. The Lucy-Richardson result turned out to be the worst, even though the image was acceptable.

Blurred Image with noise

Fig 6.1(g) Blurred Image with noise

It represents the blurred image obtained after adding Gaussian noise.


Denoised Image using Lucy-Richardson filter

Fig 6.1(h) Denoised Image using Lucy-Richardson filter

It represents the restored image using the Lucy-Richardson filter. The Lucy-Richardson filter gave a very good result, much better than the other two filters, in the presence of noise.

Denoised Image using Wiener filter

Fig 6.1(i) Denoised Image using Wiener filter


It represents the image obtained after denoising using the Wiener filter. When Gaussian noise is added to the blur, the Wiener filter gave the worst result.

Denoised Image using Regularized filter

Fig 6.1(j) Denoised Image using Regularized filter

It represents the image obtained after denoising using the Regularized filter. The Regularized filter was a little better than the Wiener filter, but it still produced a poor-quality image when Gaussian noise was added.


6.2 TESTING
By varying the parameters, such as length and theta, used to create the PSF, the corresponding PSNR values are computed using the equations:

PSNR = 10 log10(255^2 / MSE)    ..(6.1)
MSE = sum[(f - f^)^2] / (row * col)    ...(6.2)
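Equations (6.1) and (6.2) can be sketched with NumPy as follows; the 64x64 image pair and the noise sigma of 10 are illustrative values, not the project's test images.

```python
import numpy as np

def psnr(reference, restored):
    """PSNR in dB for 8-bit images, per equations (6.1)-(6.2)."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)                 # sum of squared errors / (row * col)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(5)
ref = rng.integers(0, 256, (64, 64))
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)

print(round(psnr(ref, noisy), 1))            # roughly 28 dB for sigma = 10 noise
```

Higher PSNR means the restored image is closer to the reference, which is how the tables below rank the three filters.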

Table 6.2(a) PSNR values at different PSF without noise

PSF LENGTH | THETA | WIENER  | REGULARIZED | LUCY-RICHARDSON
31         | 11    | 25.3295 | 23.2345     | 21.0060
55         | 36    | 23.9368 | 23.2345     | 20.7086
14         | 25    | 26.8267 | 25.5186     | 23.8316
65         | 63    | 23.2008 | 22.6033     | 19.8786

Table 6.2 (b)PSNR values at different PSF with noise

PSF LENGTH | THETA | WIENER  | REGULARIZED | LUCY-RICHARDSON
31         | 11    | 12.5188 | 12.6105     | 18.6616
55         | 36    | 12.4988 | 12.6044     | 18.4019
14         | 25    | 12.2067 | 12.4725     | 18.2590
65         | 63    | 12.7937 | 12.8124     | 18.2990


CHAPTER 7 COMPARATIVE ANALYSIS

7.1 WIENER FILTER:


The Wiener filter's purpose is to reduce the amount of noise present in a signal by comparison with an estimate of the desired noiseless signal. It is based on a statistical approach. Typical filters are designed for a desired frequency response; the Wiener filter approaches filtering from a different angle. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the LTI filter whose output comes as close to the original signal as possible. Wiener filters are characterized by the following:

1. Assumption: signal and (additive) noise are stationary linear stochastic processes with known spectral characteristics or known autocorrelation and cross-correlation.
2. Requirement: the filter must be physically realizable, i.e. causal (this requirement can be dropped, resulting in a non-causal solution).
3. Performance criterion: minimum mean-square error.

The Wiener filter's working principle is based on the least-squares restoration problem [4]. It is a method of restoring an image in the presence of blur and noise. The frequency-domain expression for the Wiener restoration filter is:

W(f1, f2) = H*(f1, f2) Sxx(f1, f2) / (|H(f1, f2)|^2 Sxx(f1, f2) + Snn(f1, f2))

7.2 REGULARIZED FILTER:

The regularized filter is a deblurring method that deblurs an image using the deconvolution function deconvreg, which is effective when only limited information is known about the additive noise. It involves reparameterization of the object with a smoothing kernel (sieve function or low-pass filter) and truncated iterations: convergence is stopped when the error metric reaches the noise limit, gi = g^i + ni, such that E <= ||N||^2.

7.3. RICHARDSON-LUCY ALGORITHM:


The Richardson-Lucy algorithm, also called the expectation maximization (EM) method, is an iterative technique used heavily for the restoration of astronomical images in the presence of Poisson noise. It attempts to maximize the likelihood of the restored image by using the expectation maximization (EM) algorithm. The EM approach constructs the conditional probability density, where pij is the point spread function (the fraction of light coming from true location j that is observed at position i), uj is the pixel value at location j in the latent image, and ci is the observed value at pixel location i. The statistics are performed under the assumption that the uj are Poisson distributed, which is appropriate for photon noise in the data. The basic idea is to calculate the most likely uj given the observed ci and known pij.

7.4 COMPARISON


Two main approaches were used to evaluate the results of the aforementioned procedures. The first was a simple qualitative measure of blur removal. A known amount of blur, but no noise, was added to an image, and then the image was filtered to remove this known amount of blur using the Wiener, regularized and Lucy-Richardson deblurring methods. The regularized and Wiener techniques produced what appeared to be the best results. They were largely able to restore the image to its original form, although it was grainier. This grainy effect was especially prevalent in regions that had been low in contrast prior to the initial blurring. We believe this to be because, in low-contrast regions, the blur smeared relatively similar tones into fewer, indistinguishable ones that could not be perfectly restored. It was surprising that the Lucy-Richardson method produced the worst results in this instance, as it is a nonlinear technique, and supposedly more advanced. However, after Gaussian noise was added to the image in addition to blur, the Lucy-Richardson algorithm actually performed the best. This helps make sense of the previous result: when just blur is added, only a linear modification is being made, and so the linear Wiener restoration technique should work the best. Introducing Gaussian noise, and thus a degree of spatial nonlinearity, caused the nonlinear Lucy-Richardson method to produce the best results. As information about the PSF that was used to perform the blurring was removed from the algorithms, the efficacy of blur removal dropped.

The PSNR values obtained with each of the techniques are listed below.

Table 7.3 Comparison of PSNR values (in dB)

PSF                WIENER                  REGULARIZED             LUCY-RICHARDSON
LENGTH   THETA     WITHOUT     WITH        WITHOUT     WITH        WITHOUT     WITH
                   NOISE       NOISE       NOISE       NOISE       NOISE       NOISE
31       11        25.3295     12.5188     23.2345     12.6105     21.0060     18.6616
55       36        23.9368     12.4988     23.2354     12.6044     20.7086     18.4019
14       25        26.8267     12.2067     25.5186     12.4725     23.8316     18.2590
65       63        23.2008     12.7937     22.6033     12.8124     19.8786     18.2990
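For reference, the PSF LENGTH and THETA columns in Table 7.3 describe a linear motion blur, as produced by MATLAB's fspecial('motion', LEN, THETA). A rough numpy stand-in (the rasterization here is approximate and illustrative, not fspecial's exact output):

```python
import numpy as np

def motion_psf(length, theta_deg, size):
    """Rasterize a normalized line of the given length and angle onto a
    size x size grid -- a crude approximation of a motion-blur PSF."""
    psf = np.zeros((size, size))
    c = size // 2
    t = np.deg2rad(theta_deg)
    # Sample points densely along the line and mark the cells they hit
    for s in np.linspace(-length / 2.0, length / 2.0, 4 * length):
        x = int(round(c + s * np.cos(t)))
        y = int(round(c - s * np.sin(t)))  # minus: row index grows downward
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()
```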


From the above Table 7.3, it is observed that the Wiener filter gave the best result in the case of a PSF without noise, while once noise was added, the Lucy-Richardson filter gave the best result of the three.
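The PSNR figures in Table 7.3 follow the standard definition, PSNR = 10 log10(peak^2 / MSE). A minimal sketch, operating on flat lists of pixel values with peak defaulting to 255 for 8-bit images:

```python
import math

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, restored)) / len(original)
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher values indicate a restoration closer to the original; identical images have infinite PSNR (zero MSE), so the metric is only meaningful for imperfect restorations.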


CHAPTER 8

CONCLUSION
In this paper, two main approaches were used to evaluate the results of the aforementioned procedures. The first was a simple qualitative measure of blur removal. A known amount of blur, but no noise, was added to an image, and the image was then filtered to remove this known blur using the Wiener, regularized and Lucy-Richardson deblurring methods. The regularized and Wiener techniques produced what appeared to be the best results, and it was surprising that the Lucy-Richardson method produced the worst results in this instance. However, after Gaussian noise was added to the image in addition to the blur, the Lucy-Richardson algorithm actually performed the best.


CHAPTER 9

REFERENCES

1. A. K. Jain, "Fundamentals of Digital Image Processing", Englewood Cliffs, N.J.: Prentice Hall, 2006.

2. I. Aizenberg, E. Myasnikova, M. Samsonova and J. Reinitz, "Temporal Classification of Drosophila Segmentation Gene Expression Patterns by the Multi-Valued Neural Recognition Method", Journal of Mathematical Biosciences, Vol. 176, No. 1, 2002, pp. 145-159.

3. G. Arce and R. Foster, "Detail-preserving ranked-order based filters for image processing", IEEE Trans. Acoust., Speech, Signal Processing, Vol. 37, pp. 83-98, 1989.

4. G. Pavlovic and A. M. Tekalp, "Maximum likelihood parametric blur identification based on a continuous spatial domain model", IEEE Trans. Image Process., Vol. 1, No. 10, pp. 496-504, Oct. 1992.

5. T. S. Huang, G. J. Yang and G. Y. Tang, "A Fast Two-Dimensional Median Filtering Algorithm", IEEE Trans. on Acoustics, Speech, Signal Processing, Vol. ASSP-27, No. 1, Feb. 1997.

6. M. M. Chang, A. M. Tekalp and A. T. Erdem, "Blur identification using the bispectrum", IEEE Trans. Acoust., Speech, Signal Process., Vol. 39, No. 5, pp. 2323-2325, Oct. 1991.

7. R. Neelamani, H. Choi and R. G. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems", IEEE Trans. on Signal Processing, Vol. 52, No. 2, 2003, pp. 418-433.

8. Prieto (ed.), Bio-inspired Applications of Connectionism, Lecture Notes in Computer Science, Vol. 2085, Springer-Verlag, Berlin Heidelberg New York, 2001, pp. 369-374.

9. R. L. Lagendijk, J. Biemond and D. E. Boekee, "Blur identification using the expectation-maximization algorithm", in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., Vol. 37, Dec. 1989, pp. 1397-1400.

10. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd edition, Addison-Wesley, 2004.

11. Satyadhyan Chickerur and Aswatha Kumar, "A Biologically Inspired Filter For Image Restoration", International Journal of Recent Trends in Engineering, Vol. 2, No. 2, November 2009.

12. W. Y. Han and J. C. Lin, "Minimum-maximum exclusive mean (MMEM) filter to remove impulse noise from highly corrupted image", Electron. Lett., Vol. 33, pp. 124-125, 1997.
