A Report: A Guided Tour To Image Processing Analysis and Its Application
IASc-INSA-NASI Summer Research Fellowship
Title of the Project: A Guided Tour to Image Processing Analysis and its Application
Keywords: Image processing, image enhancement, texture measurement, fuzzy logic
ACKNOWLEDGEMENTS
I would like to take this opportunity to thank my guide Prof. Malay Kumar Kundu,
Machine Intelligence Unit, ISI Kolkata, for his excellent supervision. He has been a
constant source of motivation and I am truly honored to have been able to work under his
guidance. He constantly helped me gain clarity on the various topics involved in this
project.
I thank Dr. P. Maji for providing knowledge and guidance throughout my fellowship, and
for his willingness to help me achieve my goals.
I would also like to extend my gratitude to Mr. G Madhavan, Executive Secretary,
Indian Academy of Sciences, for addressing all my correspondence and enquiries.
Finally, my heartfelt thanks go to everyone at ISI and IASc, and to all others whose names I
have not mentioned but who contributed in any way towards the successful completion of
the project.
ABSTRACT
The project involved understanding image-processing techniques such as enhancement,
working on the automatic selection of an object-enhancement operator based on fuzzy
set-theoretic measures, and implementing these operations in C. It involved modifying the
existing approach, which minimized grayness ambiguity and spatial ambiguity
(compactness). The modification incorporates a measure of texture ambiguity
(entropy) and a measure of total connectedness, and a fuzzy membership function is
assigned to each. Textural ambiguity, a function of the texture membership, is minimized
while connectedness is maximized, so as to arrive at an optimum point at which the
texture is enhanced and the original connectivity of the pixels is maintained. Based on
these measures, a non-linear enhancement function is chosen. Finally, as an application
of image processing, image-thinning algorithms are applied to welding-seam images as
the first step of an algorithm for vision-based seam tracking. This work aims at a more
deterministic measurement of the welding-groove parameters and thus at more accurate control.
CONTENTS
1. Chapter 1
• Introduction
• Scope of the project
2. Chapter 2
• Existing enhancement algorithms
• Implementation
3. Chapter 3
• Some relevant definitions
A. Texture – a brief introduction
B. Entropy
C. Non-linear enhancement functions
4. Chapter 4
• Automatic enhancement
A. Previous approach
B. Modified approach
(i) Calculation of Textural Ambiguity
(ii) Calculation of Connectedness
(iii) Algorithm
(iv) Implementation
5. Chapter 5
• Seam tracking
• Thinning algorithm and implementation
Chapter 1
INTRODUCTION
In the scientific community, a great deal of essential work relies on the application of digital
image processing. In particular, digital image processing is the only practical technology
for tasks such as classification, feature extraction, pattern recognition, projection, and
multi-scale signal analysis. One of the most fundamental steps in image processing is
enhancement, also termed pre-processing.
At the beginning of my project I learnt about the various steps involved in digital image
processing, such as enhancement in the spatial and frequency domains. I understood the
mechanics of spatial filtering: smoothing spatial filters used for blurring, and
sharpening spatial filters used for highlighting fine detail. In sharpening I encountered
the use of the Laplacian (second-order derivatives) and the gradient (first-order derivative)
needed for such enhancements. I then studied segmentation, specifically edge detection
using gradient operators such as the Roberts, Prewitt, and Sobel operators as well as
Laplacian operators, followed by representation and description. I went on to implement
enhancement algorithms in C, such as basic gray-level manipulation, log
transforms, negatives, and thresholding. I also implemented histogram construction in C.
Later in the project I assigned a fuzzy membership function to the texture and connectedness
measures described in Chapter 4 and optimized them, maintaining the original connectivity
while enhancing the image texture.
The basic need for image enhancement is to improve the quality of a picture for visual
judgment. Most existing enhancement techniques are heuristic and problem dependent.
When an image is processed for visual interpretation, it is ultimately up to the
viewers to judge its quality for a specific application and how well a particular method
works. The evaluation of image quality therefore becomes subjective, which makes the
definition of a well-processed image an elusive standard for comparing algorithm
performance. Hence, an iterative process with human interaction is normally needed to
select an appropriate operator for obtaining the desired processed output. Given an
arbitrary image, two problems arise: how to choose an appropriate non-linear function
without prior knowledge of the image statistics, and, once the function is known, how to
quantify the enhancement quality so as to obtain the optimal result. Resolving these
questions otherwise requires human interaction in an iterative process.
Therefore, to avoid such human interaction, we apply the theory of fuzzy sets. The
original algorithm minimizes (optimizes) two types of ambiguity (fuzziness), namely,
ambiguity in grayness and ambiguity in the geometry of an image containing an object. We
extend this further using the concept of texture in image processing, choosing edges per
unit area as a statistical measure of image texture, in order to obtain automatic
enhancement of the image texture.
Chapter 2
EXISTING ENHANCEMENT ALGORITHMS
The principal objective of enhancement is to process an image so that the result is more
suitable than the original for a specific application; its purpose is to highlight certain
features of interest in the image. When an image is processed for visual interpretation, the
viewer is the ultimate judge of how well a particular method works. Visual evaluation of
image quality is a highly subjective process, which makes the definition of a ‘good image’
an elusive standard by which to compare algorithm performance. Some of the simplest
image-enhancement techniques are gray-level transformation functions.
Several basic types of gray-level transformation function are used frequently for
image enhancement: negatives, logarithmic transforms, gray-level manipulation, and thresholding.
• Image negative: The negative of an image with gray levels in the range [0, L-1],
where L is the number of gray levels, is obtained by the transformation
s = L - 1 - r
where s is the transformed gray-level value and r is the initial gray-level value.
This reverses the intensity levels of the image and produces the equivalent of a
photographic negative. It is suited for enhancing white or gray detail embedded in
dark regions of an image (a short C sketch follows this list).
• Averaging Filter Operation: This is a smoothing operation used for blurring and
noise reduction. The output of a smoothing, linear spatial filter is the
average of the pixels contained in the neighborhood of the filter mask.
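
The following minimal C sketch illustrates the negative transformation s = L - 1 - r for an
8-bit image (L = 256); the array name picture and the MAX_ROWS/MAX_COLS bounds are
assumptions chosen to match the conventions of the listings in the next section, not part of
the original code.

/* Illustrative sketch: image negative for an 8-bit image (L = 256).
   The array name `picture` and the bounds are assumed conventions. */
#define MAX_ROWS 1024
#define MAX_COLS 1024

void negate_image(int picture[MAX_ROWS][MAX_COLS], int numRows, int numCols)
{
    for (int row = 0; row < numRows; row++)
        for (int col = 0; col < numCols; col++)
            picture[row][col] = 255 - picture[row][col];   /* s = L - 1 - r */
}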
IMPLEMENTATION
As a first step I processed a PGM (Portable Gray Map) image in C. I used basic file-I/O
functions in C and read the value of each pixel into an array; this required a basic
understanding of the PGM format specification.
Once the image data have been read into an array, all transformations on the image are in
effect achieved by manipulating each element of this array. The elements of the array are
simply the gray-level values at each pixel position.
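
A minimal sketch of such a PGM reader in C is shown below, assuming an ASCII (P2) file
without comment lines in its header; the function and array names are illustrative, not
taken from the original program.

/* Minimal sketch: read an ASCII PGM (P2) file into a 2-D array.
   Assumes the header carries no comment lines and that the image fits
   within the assumed bounds; error handling is kept to a minimum. */
#include <stdio.h>

#define MAX_ROWS 1024
#define MAX_COLS 1024

int read_pgm(const char *fname, int picture[MAX_ROWS][MAX_COLS],
             int *numRows, int *numCols, int *maxVal)
{
    char magic[3];
    FILE *fp = fopen(fname, "r");

    if (fp == NULL)
        return -1;

    /* Header: magic number "P2", then width, height and maximum gray value. */
    if (fscanf(fp, "%2s %d %d %d", magic, numCols, numRows, maxVal) != 4 ||
        magic[0] != 'P' || magic[1] != '2') {
        fclose(fp);
        return -1;
    }

    /* Pixel data: one gray level per array element, row by row. */
    for (int row = 0; row < *numRows; row++)
        for (int col = 0; col < *numCols; col++)
            if (fscanf(fp, "%d", &picture[row][col]) != 1) {
                fclose(fp);
                return -1;
            }

    fclose(fp);
    return 0;
}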
I implemented the code for several images; the results are shown here for one particular
image (Figure 1), which was taken as the original image. The following operations were
then performed on it:
• Threshold Operation (Figure 5), with a threshold value of 127, applied to every pixel:
if (picture[row][col] > 127)
    temp = 255;   /* above the threshold: white */
else
    temp = 0;     /* at or below the threshold: black */
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function that gives, for each gray level present in the image, its frequency of occurrence
(or, when normalized, its probability of occurrence).
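
A small C sketch of histogram construction for an 8-bit image, along the lines described
above; again the array conventions are assumed, not taken from the original listing.

/* Histogram construction for an 8-bit image: hist[g] counts the pixels
   with gray level g; dividing each count by numRows*numCols gives the
   normalized probability of occurrence of that gray level. */
#define MAX_ROWS 1024
#define MAX_COLS 1024

void build_histogram(int picture[MAX_ROWS][MAX_COLS],
                     int numRows, int numCols, long hist[256])
{
    for (int g = 0; g < 256; g++)
        hist[g] = 0;

    for (int row = 0; row < numRows; row++)
        for (int col = 0; col < numCols; col++)
            hist[picture[row][col]]++;
}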
• Averaging Filter Operation (Figure 7):
if ((col == 0) || (row == 0) || (row == (numRows - 1)) || (col == (numCols - 1)))
    temp = 255;   /* border pixels are simply set to white */
else
    temp = (picture[row-1][col-1] + picture[row-1][col] + picture[row-1][col+1] +
            picture[row][col-1]   + picture[row][col]   + picture[row][col+1]   +
            picture[row+1][col-1] + picture[row+1][col] + picture[row+1][col+1]) / 9;
            /* mean of the 3x3 neighborhood */
Figure 5: Threshold Operation
Figure 6: Histogram Plot
Chapter 3
SOME RELEVANT DEFINITIONS
A. Texture – a brief introduction
Texture is an important characteristic for the analysis of many types of images. Texture
measures look for visual patterns in an image and for how those patterns are spatially
organized. Texture can be seen in images of all kinds, from the multi-spectral scanner
images obtained from aircraft or satellite platforms (analyzed by the remote-sensing
community) to microscopic images of cell cultures or tissue samples (analyzed by the
biomedical community). Texture, when decomposable, is described along two basic
dimensions: the first describes the primitives out of which the image texture is composed,
i.e., the tonal primitives or local properties, and the second concerns the spatial
organisation of these tonal primitives. Image texture can thus be quantitatively
evaluated as having properties such as fineness, coarseness, smoothness, granulation,
randomness, and many more. There are statistical as well as structural approaches to the
measurement and characterization of image texture.
Haralick [1] summarizes some of the extraction techniques and models that investigators
have used to measure textural properties.
An image texture is described by the number and types of its primitives and by the spatial
organization or layout of those primitives. The spatial organization may be random, may
involve a pair-wise dependence of one primitive on a neighboring primitive, or may
involve a dependence of n primitives at a time. The dependence may be structural,
probabilistic, or functional (like a linear dependence).
Edge per unit area: Rosenfeld and Troy [3] and Rosenfeld and Thurston [4]
suggested the amount of edge per unit area as a texture measure. The primitive here is
the pixel and its property is the magnitude of its gradient. The gradient can be calculated
by any of the gradient neighborhood operators. For some specified window centered
on a given pixel, the distribution of gradient magnitudes can then be determined; the
mean of this distribution is the amount of edge per unit area associated with the given
pixel. The image in which each pixel's value is the edge per unit area is in effect a
defocused gradient image.
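
As an illustration, the following C sketch computes an edge-per-unit-area image along
these lines. The choice of the Sobel operator and of a square N x N averaging window is an
assumption made for the sketch, since the text leaves the particular gradient operator open;
border pixels are simply left at zero, and the caller frees the returned buffer.

/* Edge per unit area: gradient magnitude (Sobel, assumed) averaged over
   an N x N window centred on each pixel.  Borders are left at zero. */
#include <math.h>
#include <stdlib.h>

double *edge_per_unit_area(const unsigned char *img, int rows, int cols, int N)
{
    double *grad = calloc((size_t)rows * cols, sizeof *grad);
    double *epua = calloc((size_t)rows * cols, sizeof *epua);
    int half = N / 2;

    /* Gradient magnitude at every interior pixel. */
    for (int r = 1; r < rows - 1; r++)
        for (int c = 1; c < cols - 1; c++) {
            int gx = -img[(r-1)*cols + c-1] + img[(r-1)*cols + c+1]
                     - 2*img[r*cols + c-1]  + 2*img[r*cols + c+1]
                     - img[(r+1)*cols + c-1] + img[(r+1)*cols + c+1];
            int gy = -img[(r-1)*cols + c-1] - 2*img[(r-1)*cols + c] - img[(r-1)*cols + c+1]
                     + img[(r+1)*cols + c-1] + 2*img[(r+1)*cols + c] + img[(r+1)*cols + c+1];
            grad[r*cols + c] = sqrt((double)gx*gx + (double)gy*gy);
        }

    /* Mean gradient over the N x N window = edge per unit area. */
    for (int r = half; r < rows - half; r++)
        for (int c = half; c < cols - half; c++) {
            double sum = 0.0;
            for (int dr = -half; dr <= half; dr++)
                for (int dc = -half; dc <= half; dc++)
                    sum += grad[(r+dr)*cols + c+dc];
            epua[r*cols + c] = sum / (N * N);
        }

    free(grad);
    return epua;   /* caller frees */
}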
B. Entropy
The entropy of an image provides global information: it is the average amount of fuzziness
in the grayness of an image, say X, i.e., the degree of difficulty (ambiguity) in deciding
whether a pixel should be treated as black (dark) or white (bright). The difficulty is
minimum when the fuzzy membership is 0 or 1 (the image is crisp, with only fully black or
fully white pixels) and maximum when the fuzzy membership is 0.5 (semi-bright pixels).
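
One widely used definition of such an entropy (the logarithmic fuzzy entropy of De Luca
and Termini, common in the fuzzy image-processing literature) is, for an M x N image with
grayness membership mu_X(x_mn),

H(X) = 1 / (MN ln 2) * sum over all m, n of Sh( mu_X(x_mn) ),
where Sh(mu) = -mu ln(mu) - (1 - mu) ln(1 - mu) is Shannon's function.

H(X) is zero when every membership is 0 or 1 and attains its maximum when every
membership equals 0.5, exactly the behaviour described above; whether [2] uses this exact
form or a variant is not claimed here.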
C. Non-linear enhancement functions
There are four basic forms of non-linear mapping function used for enhancement; their
general shapes are shown in Figures 1-4. In each form, Fe and Fd are positive constants
and β is the value of f(x_mn) for x_mn = 0.
The application of the second mapping function to an image produces the direct opposite
effect of the first.
The mapping function shown in Figure 2 results in a stretching of the middle-range gray
levels.
Another form, when used as a mapping function, drastically compresses the mid-range
values and at the same time stretches the gray levels at the upper and lower ends.
Figures 1-4: Plots of the four non-linear mapping functions
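
The exact expressions of the four forms are not reproduced in this report (they appear only
as figures). As a purely illustrative example of a parameterized non-linear gray-level
mapping of this kind, the C sketch below implements the well-known Pal-King
transformation f(x) = (1 + (x_max - x)/Fd)^(-Fe), in which Fd and Fe play the role of the
positive constants mentioned above and β = f(0); it is not claimed to coincide with any of
the four forms used in [2].

/* Illustrative non-linear gray-level mapping (Pal-King form), given only
   as an example of a parameterized mapping; not claimed to be one of the
   four forms of [2].  Returns a value in (0, 1]; multiply by (L - 1) to
   map back onto the gray-level range. */
#include <math.h>

double nonlinear_map(double x, double x_max, double Fd, double Fe)
{
    return pow(1.0 + (x_max - x) / Fd, -Fe);   /* beta = value at x = 0 */
}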
Chapter 4
AUTOMATIC ENHANCEMENT
A. Previous approach
The algorithm proposed in [2] has three parts. Given an input image X and a set of
non-linear transformation functions, it first enhances the image with a particular
enhancement function over a range of parameter values. The second phase measures both
the spatial ambiguity and the grayness ambiguity of the various enhanced images X',
using the algorithm in [4], and checks whether these measures possess a valley
(minimum) as the parameters change. The same procedure is repeated in the third
stage for the other functions. Among all the valleys, the global one is selected. The
corresponding function with the prescribed parameter values can be regarded as optimal,
and the value of ambiguity corresponding to the global minimum can be viewed as a
quantitative measure of enhancement quality.
B. Modified approach
This process has been modified to enhance texture automatically; this report describes
the proposed modifications and the results obtained with them. In [2] the ambiguity in
grayness and the spatial ambiguity (compactness) of the image are minimized. Here,
instead, the ambiguity in texture is minimized. Because this enhancement precedes the
feature-extraction stage of content-based image retrieval, the texture distribution needs to
stay as close as possible to that of the initial image; hence the measure of connectedness
in the image is also maximized.
1. Calculation of Textural Ambiguity
A 3x3 gradient operator is run over the initial image (say Y), and a 5x5 averaging
operator is run on the resulting gradient image to obtain the ‘edges per unit area’ image.
Each value of this image is passed through Zadeh’s S-function, which yields a membership
value at each location, thereby constituting a membership plane for texture (X).
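
A C sketch of the standard Zadeh S-function used for this mapping is given below; the
crossover point is taken as the midpoint of a and c, as in the standard definition, and the
choice of a and c is left open, as in the text.

/* Zadeh's standard S-function: maps a value x into [0, 1], with
   S = 0 for x <= a, S = 1 for x >= c, and crossover b = (a + c) / 2. */
double s_function(double x, double a, double c)
{
    double b = 0.5 * (a + c);

    if (x <= a)
        return 0.0;
    if (x <= b)
        return 2.0 * ((x - a) / (c - a)) * ((x - a) / (c - a));
    if (x <= c)
        return 1.0 - 2.0 * ((x - c) / (c - a)) * ((x - c) / (c - a));
    return 1.0;
}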
2. Calculation of Connectedness
In each neighborhood the center pixel value is subtracted from each of its 4 neighbors,
and the minimum of the absolute values of these differences is taken to construct an
image. The resulting image is used to compute a membership value for connectedness
with Zadeh’s S-function. The sum of the membership values over all pixels is taken as
the measure of disconnectedness of the whole image.
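
A C sketch of this disconnectedness measure, assuming the s_function of the previous
sketch and leaving the S-function parameters a and c to the caller:

/* Disconnectedness: for each interior pixel, take the minimum absolute
   difference with its four neighbours, map it through Zadeh's S-function
   and sum the memberships over the image. */
#include <stdlib.h>

double s_function(double x, double a, double c);   /* from the previous sketch */

double disconnectedness(const unsigned char *img, int rows, int cols,
                        double a, double c)
{
    double total = 0.0;

    for (int r = 1; r < rows - 1; r++)
        for (int col = 1; col < cols - 1; col++) {
            int p = img[r * cols + col];
            int d = abs(p - img[(r - 1) * cols + col]);
            int d2;

            d2 = abs(p - img[(r + 1) * cols + col]); if (d2 < d) d = d2;
            d2 = abs(p - img[r * cols + col - 1]);   if (d2 < d) d = d2;
            d2 = abs(p - img[r * cols + col + 1]);   if (d2 < d) d = d2;

            total += s_function((double)d, a, c);
        }

    return total;
}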
3. Algorithm
• Enhance the given image with one of the four enhancement functions, using a
specific set of parameter values.
• Calculate the product of the texture entropy and the disconnectedness.
• Change the parameter value and repeat the calculation of the product. This is done
for a set of parameter values, and the resulting product plot is probed for a global
minimum. If a global minimum is found, the process stops.
• If not, another form of enhancement function is selected and the process is repeated
until a minimum is found for one of the functional forms.
• The image corresponding to that minimum is the enhanced image (a simplified C
skeleton of this search is sketched after this list).
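
The skeleton below is a simplification: it scans all functional forms and parameter values
and keeps the global minimum of the product, rather than stopping at the first form that
shows a valley. The names enhance_fn, texture_entropy and disconnectedness are
placeholders standing for the steps described in this chapter, not for any particular library.

/* Skeleton of the parameter search: enhance, evaluate the product of
   texture entropy and disconnectedness, and keep the global minimum. */
#include <float.h>

typedef void (*enhance_fn)(const unsigned char *in, unsigned char *out,
                           int rows, int cols, double param);

double texture_entropy(const unsigned char *img, int rows, int cols);
double disconnectedness(const unsigned char *img, int rows, int cols,
                        double a, double c);

double search_best(const unsigned char *in, unsigned char *best,
                   unsigned char *work, int rows, int cols,
                   enhance_fn forms[], int nForms,
                   const double params[], int nParams,
                   double a, double c)
{
    double best_product = DBL_MAX;

    for (int f = 0; f < nForms; f++)
        for (int p = 0; p < nParams; p++) {
            forms[f](in, work, rows, cols, params[p]);
            double product = texture_entropy(work, rows, cols)
                           * disconnectedness(work, rows, cols, a, c);
            if (product < best_product) {          /* track the global minimum */
                best_product = product;
                for (long i = 0; i < (long)rows * cols; i++)
                    best[i] = work[i];
            }
        }
    return best_product;
}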
4. Implementation
Case 1: A smoothed image is given as input to the algorithm, and the product attains its
global minimum for functional form 4.
The original image (a LANDSAT image) is shown in Figure 1. The output image obtained
after applying the algorithm described in (3) is shown in Figure 2.
Figure 3 shows the product plot of textural ambiguity and disconnectedness as the
parameter for functional form 4 (as described in chapter 3) varies.
Figure 1: Input Image 1
Figure 2: Enhanced Image
Case 2: The image shown in Figure 1 is taken as another input image for this
algorithm. Figure 2 shows its histogram, which can be seen to be concentrated in the
middle of the gray-level range.
Figure 3 shows the output image obtained after the application of the algorithm
described in (3) and Figure 4 shows the product plot of textural ambiguity and
disconnectedness as the parameter for functional form 4 (as described in chapter 3)
varies.
Figure 3: The LANDSAT image after enhancement, i.e., the image corresponding to the
stretched histogram
Case 3: The image shown in Figure 1 is taken as another input image for this
algorithm. Its histogram can be seen to be concentrated in the left (dark) region.
Figure 2 shows the output image obtained after applying the algorithm described in (3),
and Figure 3 shows the product plot of textural ambiguity and disconnectedness as the
parameter of the selected functional form (as described in Chapter 3) varies.
Case 4: The image shown in Figure 1 is taken as another input image for this
algorithm. Its histogram can be seen to be concentrated in the right region (over-exposed).
Figure 2 shows the output image obtained after applying the algorithm described in (3),
and Figure 3 shows the product plot of textural ambiguity and disconnectedness as the
parameter of the selected functional form (as described in Chapter 3) varies.
Chapter 5
A. Seam Tracking:
1. Introduction:
The use of robots in the manufacturing industry has increased rapidly during the past
decade. Arc welding is an actively growing area, and many new procedures have been
developed for use with new lightweight, high-strength alloys. One of the basic
requirements for such applications is seam tracking. Seam tracking is required because of
inaccuracies in joint fit-up and positioning, and because of warpage and distortion of the
workpiece caused by thermal expansion and stresses during welding.
B. Previous Approach:
The algorithm described in [6] takes the image ROI directly as input (with a thick
LASER line); operations such as filtering, edge detection, and thresholding are then
performed on the image.
Figure 2: Typical region of interest obtained from the welding groove for the LASER-line-
source welding seam-tracking apparatus
Figure 3: Typical region of interest obtained from the calibration bar for the LASER-line-
source welding seam-tracking apparatus
As we observe in both cases, the LASER line source is 6-8 pixels wide, whereas precise
measurement demands a very thin LASER line; template matching and other measurement
methods also require thin lines. Since obtaining a thinner LASER line would require a more
expensive source, we instead use thinning algorithms to obtain a thin LASER line.
Algorithm Description:
A parallel gray-tone thinning algorithm is described in [7]. Gray-tone thinning (GT)
can be thought of as a generalization of two-tone thinning. In a two-tone
thinning algorithm, object pixels that are adjacent to the background are mapped
to the background value. Similarly, in GT, pixels that are very close to the background
both in location and in gray level are mapped to the local maximum value (the local
background value). This similarity suggests that a two-tone thinning algorithm can be
modified to suit the gray-level environment.
To implement this algorithm for a gray-level picture with a similar checking of conditions,
the neighborhood pixels around the candidate pixel are temporarily mapped to a
compatible state. For this, a threshold value is calculated over an (N×N) window, as
defined in [7].
The Algorithm:
It is a two-pass algorithm (an illustrative C sketch of the underlying two-tone scheme
follows).
Pass 1:
Here A(p) is the number of '01' patterns in the ordered set P1, P2, ..., P8 of neighbors of
the candidate pixel (see Figure 1), and B(p) is the total number of its nonzero neighbors.
Four conditions, (1)-(4), on A(p), B(p), and the neighbors are checked in the first pass; if
all of them hold, the candidate pixel x_mn is changed to the new (local background) value.
Pass 2:
In pass 2 conditions (1), (2), (5), and (6) are checked, and the same replacement as in
pass 1 is made if all four are satisfied.
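
For illustration, the C sketch below implements the underlying two-tone (binary) two-pass
thinning step that GT generalizes, using the usual conditions on A(p), B(p), and the
neighbors of the classical two-pass scheme. It is given only as an illustration of that
scheme under stated assumptions, not as a verbatim reproduction of the gray-tone
conditions of [7].

/* Two-tone (binary) two-pass thinning sketch: 0 = background, 1 = object.
   Neighbours P1..P8 are taken clockwise starting from the pixel directly
   above the candidate. */
#include <stdlib.h>
#include <string.h>

static void neighbours(const int *img, int cols, int r, int c, int n[9])
{
    n[1] = img[(r-1)*cols + c];     n[2] = img[(r-1)*cols + c+1];
    n[3] = img[ r   *cols + c+1];   n[4] = img[(r+1)*cols + c+1];
    n[5] = img[(r+1)*cols + c];     n[6] = img[(r+1)*cols + c-1];
    n[7] = img[ r   *cols + c-1];   n[8] = img[(r-1)*cols + c-1];
}

static int A(const int n[9])        /* number of 0 -> 1 transitions in P1,...,P8,P1 */
{
    int count = 0;
    for (int i = 1; i <= 8; i++)
        if (n[i] == 0 && n[i % 8 + 1] == 1)
            count++;
    return count;
}

static int B(const int n[9])        /* number of nonzero neighbours */
{
    int count = 0;
    for (int i = 1; i <= 8; i++)
        count += n[i];
    return count;
}

/* One pass: pixels satisfying all four conditions are deleted in parallel. */
static int thin_pass(int *img, int rows, int cols, int pass)
{
    int changed = 0;
    int *out = malloc((size_t)rows * cols * sizeof *out);
    if (out == NULL)
        return 0;
    memcpy(out, img, (size_t)rows * cols * sizeof *out);

    for (int r = 1; r < rows - 1; r++)
        for (int c = 1; c < cols - 1; c++) {
            int n[9];
            if (img[r*cols + c] != 1)
                continue;
            neighbours(img, cols, r, c, n);
            int side = (pass == 1)
                     ? (n[1]*n[3]*n[5] == 0 && n[3]*n[5]*n[7] == 0)
                     : (n[1]*n[3]*n[7] == 0 && n[1]*n[5]*n[7] == 0);
            if (B(n) >= 2 && B(n) <= 6 && A(n) == 1 && side) {
                out[r*cols + c] = 0;    /* map to the background value */
                changed = 1;
            }
        }

    memcpy(img, out, (size_t)rows * cols * sizeof *out);
    free(out);
    return changed;
}

/* Repeat pass 1 and pass 2 until no pixel changes (bitwise OR so that
   both passes always run in each iteration). */
void thin_binary(int *img, int rows, int cols)
{
    while (thin_pass(img, rows, cols, 1) | thin_pass(img, rows, cols, 2))
        ;
}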
Implementation of thinning algorithms in C:
This algorithm was coded in C, and for several of the seam-tracking images obtained, the
automatic enhancement and thinning algorithms were run in conjunction to produce
thinned versions of the images.
The results are as shown.
FUTURE PROSPECTS OF WORK:
The algorithm quoted in [6] is a typical machine-vision algorithm, but not much
intelligence is embedded in it; adding such intelligence could make the measurements less
error-prone. Soft-computing tools such as fuzzy logic can be used to modify the existing
algorithm, and the automatic enhancement and thinning algorithms are efforts in this
direction. Further progress can be made by incorporating contour-searching algorithms
and detecting patterns in the search results. The ambiguity in detecting corners can be
modeled by fuzzy cornerness [8].
Prof. Kundu’s guidance and the exposure to image processing and soft-computing tools
obtained at the Center for Soft Computing Research will, I believe, help me make further
progress in this direction.
CONCLUSION:
REFERENCES:
[1] R. M. Haralick, “Statistical and structural approaches to texture,” Proc. IEEE, vol. 67,
no. 5, pp. 786-804, May 1979.
[2] M. K. Kundu and S. K. Pal, “Automatic selection of object enhancement operator with
quantitative justification based on fuzzy set theoretic measure,” Pattern Recognition
Letters, vol. 11, pp. 811-829, 1990.
[3] A. Rosenfeld and E. Troy, “Visual texture analysis,” Tech. Rep. 70-116, University of
Maryland, College Park, MD, June 1970. Also in Conference Record of the Symposium on
Feature Extraction and Selection in Pattern Recognition, Argonne, IL, IEEE Publication
70C-51C, Oct. 1970, pp. 115-124.
[4] A. Rosenfeld and M. Thurston, “Edge and curve detection for visual scene analysis,”
IEEE Trans. Comput., vol. C-20, pp. 562-569, May 1971.
[5] K. H. Lee, First Course on Fuzzy Theory and Applications, Advances in Soft
Computing, vol. 27, Springer-Verlag, Berlin, 2005.
[6] A. Raman, A. Reddy, and H. Reddy, “Laser vision based seam tracking system for
welding automation,” in Proc. International Conference on Image Processing, Computer
Vision, and Pattern Recognition (IPCV, WorldComp'08 Congress), Las Vegas, USA, July
2008.
[7] M. K. Kundu, B. B. Chaudhuri, and D. Dutta Majumder, “A parallel graytone thinning
algorithm (PGTA),” Pattern Recognition Letters, vol. 12, no. 8, pp. 491-496, 1991.
[8] M. Banerjee and M. K. Kundu, “Content based image retrieval with multiresolution
salient points,” in Proc. ICVGIP 2004, pp. 399-404.
[9] D. Phillips, Image Processing in C: Analyzing and Enhancing Digital Images, R&D
Publications, 1994.