
A REPORT

ON

A GUIDED TOUR TO IMAGE PROCESSING ANALYSIS AND ITS


APPLICATION

BY

Name of the Student: Adithya Reddy Gangidi

Discipline: Electrical and Electronics Engineering

Prepared in partial fulfillment of the

IASc-INSA-NASI Summer Research Fellowship

AT

INDIAN STATISTICAL INSTITUTE, KOLKATA

IASc-INSA-NASI Summer Research Fellowship

Station: Indian Statistical Institute

Center: Kolkata

Duration: 8 weeks

Date of Start: 28th April, 08

Date of Submission: 12th July, 08

Title of the Project: A Guided Tour to Image Processing Analysis and its Application

Name: Adithya Reddy Gangidi

Discipline: B. E. (Hons.) Electrical and Electronics Engineering

Name of Guide: Prof. Malay Kumar Kundu

Key Words: Image processing, Image Enhancement, Texture measurement, Fuzzy Logic

Project Areas: Soft Computing

Signature of Guide Signature of Student

ACKNOWLEDGEMENTS

I would like to take this opportunity to thank my guide, Prof. Malay Kumar Kundu,
Machine Intelligence Unit, ISI Kolkata, for his excellent supervision. He has been a
constant source of motivation, and I am truly honored to have been able to work under
his guidance. He constantly helped me gain clarity on the various topics involved in
this project.

I thank Dr. P. Maji for providing knowledge and guidance throughout my fellowship,
and for his willingness to help me achieve my goals.

I would also like to extend my gratitude to Mr. G Madhavan, Executive Secretary,
Indian Academy of Sciences, for addressing all my correspondence and enquiries.

Finally, my heartfelt thanks to everyone at ISI and IAS, and to all others whose names
I did not mention but who contributed in any form towards the successful completion of
the project.

ABSTRACT

The project involved understanding image processing techniques such as enhancement,
working on automatic selection of an object enhancement operator based on fuzzy set
theoretic measures, and implementing these operations in C. It involved modifying the
existing approach, which minimized grayness ambiguity and spatial ambiguity
(compactness). The modification incorporates a measure of texture ambiguity (entropy)
and of total connectedness, and a fuzzy membership function is assigned to each.
Textural ambiguity, a function of the textural membership, is minimized while
connectedness is maximized, to arrive at an optimum point with enhanced texture that
maintains the original connectivity of the pixels. Based on these results a nonlinear
enhancement function is chosen. Finally, as an application of image processing, image
thinning algorithms are applied to welding seam images as the starting step of an
algorithm for vision-based seam tracking. This work is a step towards more
deterministic measures of welding groove parameters and thus more accurate control.

CONTENTS

1. Chapter 1
• Introduction
• Scope of the project

2. Chapter 2
• Existing enhancement algorithms
• Implementation

3. Chapter 3
• Some relevant definitions
A. Texture – a brief introduction
B. Entropy
C. Non-linear enhancement functions

4. Chapter 4
• Automatic enhancement
A. Previous approach
B. Modified approach
(i) Calculation of Textural Ambiguity
(ii) Calculation of Connectedness
(iii) Algorithm
(iv) Implementation

5. Chapter 5

• Application of Image Processing: Seam Tracking


A. Seam Tracking
B. Previous approach
C. Usage of thinning algorithms

Chapter 1

INTRODUCTION

In the scientific community, a great deal of essential work relies on digital image
processing. In particular, digital image processing is the only practical technology
for classification, feature extraction, pattern recognition, projection and multi-scale
signal analysis. One of the most elemental steps in image processing is enhancement,
also termed pre-processing.

At the beginning of my project I learnt about the various steps involved in digital
image processing, such as enhancement in the spatial and frequency domains. I
understood the mechanics of spatial filtering: smoothing spatial filters used for
blurring, and sharpening spatial filters used for highlighting fine detail. In
sharpening I encountered the use of the Laplacian (2nd order derivative) and the
gradient (1st order derivative) needed for such enhancements. I then studied
segmentation – specifically edge detection, using gradient operators such as the
Roberts, Prewitt and Sobel operators as well as Laplacian operators – and
representation and description. I then went on to implement enhancement algorithms in
the C language, such as basic gray-level manipulation, log transforms, negatives and
thresholding. I also implemented histogram construction in C.

I then started to work on modifying an algorithm for automatic selection of a
nonlinear function appropriate for object enhancement of a given image [2]. For the
basics of fuzzy sets, operations on fuzzy sets, and fuzzy relations and composition, I
referred to [5]. The previous algorithm minimizes fuzziness (ambiguity) in both the
grayness and the spatial domain, entropy being a measure of grayness ambiguity and
compactness a measure of spatial ambiguity.

In my modification, I tried to incorporate texture instead of brightness. For this
purpose I referred to [1], which explains various methods of texture measurement. I
used a statistical approach – edges per unit area – and implemented it using the
Roberts gradient operator. I also calculated the connectedness associated with the
image.

I then assigned a fuzzy membership function to each of these measures and optimized
them, maintaining the original connectivity while enhancing the image texture.

SCOPE OF THE PROJECT

The basic need for image enhancement is to improve the quality of a picture for visual
judgment. Most existing enhancement techniques are heuristic and problem dependent.
When an image is processed for visual interpretation, it is ultimately up to the
viewers to judge its quality for a specific application and how well a particular
method works. The evaluation of image quality therefore becomes subjective, which
makes the definition of a well-processed image an elusive standard for comparing
algorithm performance. Hence, an iterative process with human interaction becomes
necessary in order to select an appropriate operator for obtaining the desired
processed output. Given an arbitrary image, two problems arise: choosing an
appropriate nonlinear function without prior knowledge of the image statistics and,
given the function, quantifying the enhancement quality in order to obtain the optimal
one. Resolving these normally requires human interaction in an iterative process.

Therefore, to avoid such human interaction we apply the theory of fuzzy sets. The
original algorithm minimizes (optimizes) two types of ambiguity (fuzziness): ambiguity
in the grayness and ambiguity in the geometry of an image containing an object. We
extend this further using the concept of texture in image processing, choosing edges
per unit area as a statistical measure of image texture, in order to obtain automatic
enhancement of the image texture.

Chapter 2

EXISTING ENHANCEMENT ALGORITHMS

The principal objective of enhancement is to process an image so that the result is more
suitable than the original image for a specific application. It is to highlight certain
features of interest in an image. When an image is processed for visual interpretation, the
viewer is the ultimate judge of how well a particular method works. Visual evaluation of
image quality is a highly subjective process, thus making the definition of a ‘good image’
an elusive standard by which to compare algorithm performance. Some of the simplest
image enhancement techniques are gray-level transformation functions.

There are some basic types of gray-level transformation functions used frequently for
image enhancement: negative, logarithmic, gray-level manipulation and thresholding.

• Image negative: The negative of an image with gray levels in the range [0, L-1],
where L is the number of gray levels, is obtained by using the following
transformation function:
s = L – 1 - r
s: transformed gray-level value
r: initial gray-level value

This reverses the intensity levels of an image and produces the equivalent
photographic negative. It is suited for enhancing white or gray detail embedded in
dark regions of an image.

• Gray-level manipulation: Given an image, we can perform several types of
gray-level manipulation by simply multiplying each gray-level value by a constant.
The constant is usually taken as 3 or 5.

• Log transformation: This transformation maps a narrow range of low gray-level
values in an input image to a wider range of output levels; it can also be termed an
expansion of the gray levels. The basic log transform is given by the following
expression:
s = c log (1 + r)
where c is a constant and r >= 0.

• Thresholding operation: This operation creates a 2-level image called a binary
image. It is also termed contrast stretching/enhancement. If the chosen threshold is
r, all values below r are darkened (taken as 0) and all values above r are brightened
(taken as L-1).

• Averaging Filter Operation: This is a smoothing operation used for blurring and
noise reduction. The output of running a smoothing, linear spatial filter is the
average of the pixels contained in the neighborhood of the filter mask.

IMPLEMENTATION

As a first step I processed a PGM image (Portable Gray Map) in C. I used basic file I/O
instructions in C and read the value of each of the pixels into an array. This involved
basic understanding of the PGM format specifications.

Once the image data is read into an array, all transformations on the image are in
effect achieved by manipulating each data element in this array. The data elements in
the array are simply the gray-level values at each pixel position.
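The PGM reading step described above can be sketched as follows. This is a minimal illustration, not the report's actual code: it assumes a binary (P5) PGM with no comment lines in the header, and the function name read_pgm is illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of reading a binary (P5) PGM into a flat array of
 * gray values.  Assumes no '#' comment lines in the header. */
unsigned char *read_pgm(const char *path, int *rows, int *cols)
{
    FILE *fp = fopen(path, "rb");
    if (!fp)
        return NULL;

    int maxval;
    /* Header: magic number "P5", width, height, maximum gray value. */
    if (fscanf(fp, "P5 %d %d %d", cols, rows, &maxval) != 3) {
        fclose(fp);
        return NULL;
    }
    fgetc(fp);  /* consume the single whitespace byte after the header */

    unsigned char *picture = malloc((size_t)(*rows) * (size_t)(*cols));
    if (picture)
        fread(picture, 1, (size_t)(*rows) * (size_t)(*cols), fp);
    fclose(fp);
    return picture;  /* gray value of pixel (r, c) is picture[r * (*cols) + c] */
}
```

A 2-D access pattern like picture[row][col] in the report corresponds here to picture[row * cols + col] on the flat array.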

An array ‘picture’ is built after parsing the PGM image; it holds the gray value of
every pixel. Each gray value is read as “picture[row][col]”, modified, and stored in a
temporary variable “temp”; this temp value is then written into the new image for each
modification of the pixel.

I ran the code on several images; the results are displayed here for one particular
image, shown in figure 1. I chose the image in figure 1 as my original image and
performed the following operations on it:

• Image Negative: As shown in figure 2
temp = 255 - picture[row][col];
In our image the maximum gray-level value was 255 (L = 256).

• Gray-Level Manipulation: As shown in figure 3
temp = picture[row][col] * 5;
The constant value is chosen as 5.

• Log transformation: As shown in figure 4
temp = 20 * log(1 + picture[row][col]);
The constant c is chosen as 20.


• Threshold Operation: As shown in figure 5, with a threshold value of 127
if (picture[row][col] > 127)
    temp = 255;
else
    temp = 0;

• Histogram plot: As shown in figure 6

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete
function which gives the probability of occurrence of each gray level present in the
given image.
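The counting step behind such a histogram plot can be sketched as below. This is an illustrative sketch, not the report's code; the function name and the 8-bit assumption (L = 256) are mine.

```c
#include <stddef.h>

#define LEVELS 256  /* assumes 8-bit gray levels, L = 256 */

/* Count the occurrences n_k of each gray level k.  Dividing each
 * count by the number of pixels gives the probability of occurrence
 * p(r_k) = n_k / (M*N) that the histogram represents. */
void histogram(const unsigned char *picture, size_t npixels,
               unsigned long count[LEVELS])
{
    for (int k = 0; k < LEVELS; k++)
        count[k] = 0;
    for (size_t i = 0; i < npixels; i++)
        count[picture[i]]++;
}
```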

• Averaging Filter Operation: As shown in figure 7
if ((col == 0) || (row == 0) || (row == numRows-1) || (col == numCols-1))
    temp = 255;
else
    temp = (picture[row-1][col-1] + picture[row-1][col] + picture[row-1][col+1]
          + picture[row][col-1]   + picture[row][col]   + picture[row][col+1]
          + picture[row+1][col-1] + picture[row+1][col] + picture[row+1][col+1]) / 9;

Figure 1: Original Image (baloon.pgm) Figure 2: Image negative

Figure 3: Gray Level Manipulation Figure 4: Log Transformation

Figure 5: Threshold Operation Figure 6: Histogram Plot

Figure 7: Pre-filtered Image Figure 8: Post-filtered Image


(Smoothing Operation)

Chapter 3

SOME RELEVANT DEFINITIONS

A. Texture – a brief introduction

Texture is an important characteristic for the analysis of many types of images.
Texture measures look for visual patterns in images and how they are spatially
defined. Texture can be seen in all kinds of images, from the multi-spectral scanner
images obtained from aircraft or satellite platforms (which the remote sensing
community analyzes) to microscopic images of cell cultures or tissue samples (which
the biomedical community analyzes). Texture, when decomposable, is described along two
basic dimensions. The first dimension describes the primitives out of which the image
texture is composed, i.e., tonal primitives or local properties; the second is
concerned with the spatial organisation of these tonal primitives. Thus, image texture
can be quantitatively evaluated as having properties of fineness, coarseness,
smoothness, granulation, randomness and many more. There are statistical as well as
structural approaches to the measurement and characterization of image texture.

Haralick in [1] summarizes some of the extraction techniques and models which
investigators have been using to measure textural properties.

The number and types of its primitives and the spatial organization or layout of its
primitives describe an image texture. The spatial organization may be random, may have
a pair-wise dependence of one primitive on a neighboring primitive, or may have a
dependence of n primitives at a time. The dependence may be structural, probabilistic,
or functional (like a linear dependence).

One such measure of texture is quoted here:

Edge Per Unit Area: Rosenfeld and Troy [3] and Rosenfeld and Thurston [4]
suggested the amount of edge per unit area as a texture measure. The primitive here is
the pixel and its property is the magnitude of its gradient. The gradient can be calculated
by any one of the gradient neighborhood operators. For some specified window centered
on a given pixel, the distribution of gradient magnitudes can then be determined. The
mean of this distribution is the amount of edge per unit area associated with the given
pixel. The image in which each pixel's value is edge per unit area is actually a defocused
gradient image.
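The measure just described can be sketched in C as below: a Roberts gradient magnitude at every pixel, followed by the mean of those magnitudes over a window centered on each pixel. This is an illustrative sketch, not the report's implementation; the border handling (clamping at the image edge, shrinking windows at the border) is my assumption.

```c
#include <stdlib.h>

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Edge-per-unit-area sketch: Roberts gradient magnitude per pixel,
 * then the mean over a (2w+1) x (2w+1) window around each pixel.
 * The result is, in effect, a defocused gradient image. */
void edge_per_unit_area(const unsigned char *pic, int rows, int cols,
                        int w, double *epua)
{
    double *grad = malloc(sizeof *grad * rows * cols);

    /* Roberts gradient: |p(r,c) - p(r+1,c+1)| + |p(r,c+1) - p(r+1,c)| */
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            int r1 = clampi(r + 1, 0, rows - 1);
            int c1 = clampi(c + 1, 0, cols - 1);
            grad[r * cols + c] =
                abs(pic[r * cols + c]  - pic[r1 * cols + c1]) +
                abs(pic[r * cols + c1] - pic[r1 * cols + c]);
        }

    /* Mean gradient magnitude over the window centered at each pixel. */
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            double sum = 0.0;
            int n = 0;
            for (int dr = -w; dr <= w; dr++)
                for (int dc = -w; dc <= w; dc++) {
                    int rr = r + dr, cc = c + dc;
                    if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) {
                        sum += grad[rr * cols + cc];
                        n++;
                    }
                }
            epua[r * cols + c] = sum / n;
        }
    free(grad);
}
```

On a perfectly flat region the measure is zero; near intensity transitions it grows, which is exactly why it serves as a texture (business) measure.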

B. Entropy

The entropy of a given image X provides global information: the average amount of
fuzziness in its grayness. This is the degree of difficulty (ambiguity) in deciding
whether a pixel should be treated as black (dark) or white (bright). The difficulty is
minimum when the fuzzy membership is 0 or 1 (that is, the image is crisp, with fully
black or white pixels) and maximum when the fuzzy membership is 0.5 (that is,
semi-bright pixels).

Given an M x N image X with membership values mu_X(x_mn), the entropy H(X) can be
calculated as

H(X) = (1 / (MN ln 2)) * SUM over m,n of S_n(mu_X(x_mn)),

where S_n(mu) = -mu ln(mu) - (1 - mu) ln(1 - mu) is Shannon's function.

C. Non-linear enhancement functions

There are 4 basic types of non-linear mapping functions used for enhancement. The
different forms of non-linear enhancement functions, along with their formulas, are
discussed below.

The mapping function in Figure 1 is represented by

where the parameter b is a positive constant.


When applied to an image, it stretches the dark area (lower range of gray levels) and
compresses the bright area, resulting in an increase in contrast within the darker
area of the image.

The mapping function in Figure 3 is represented by

where Fe and Fd are positive constants and β is the value of f(Xmn) for Xmn = 0.
The application of this mapping function to an image produces the direct opposite
effect of the function above.

The mapping function in Figure 2 is represented by

The use of this function will result in stretching of the middle range gray levels.

The mapping function in Figure 4 is represented by

where Fe and Fd are positive constants and β is the value of f(Xmn) for Xmn = 0.
When used as a mapping function, it drastically compresses the midrange values and at
the same time stretches the gray levels at the upper and lower ends.

Figure 1 Figure 2

Figure 3 Figure 4

Chapter 4

AUTOMATIC ENHANCEMENT

A. Previous approach

As discussed in chapter 1, this approach is an attempt to demonstrate an application
of the theory of fuzzy sets to avoid iterative human interaction and to make the task
of subjective evaluation objective.

An algorithm for automatic selection of a nonlinear function appropriate for object
enhancement of a given image is described in [2]. The algorithm needs neither
iterative visual interaction nor prior knowledge of image statistics in order to
select the transformation function for optimal enhancement. A quantitative measure for
evaluating enhancement quality is provided, based on fuzzy geometry. The concept of
minimizing fuzziness (ambiguity) in both the grayness and the spatial domain, as used
by Pal and Rosenfeld [4], has been adopted in [2]. The selection criteria are further
justified from the point of view of the bounds of the membership function. The
effectiveness of the algorithm is demonstrated for unimodal, multimodal and
right-skewed images when the possible nonlinear transformation functions are taken
into account.

The algorithm proposed in [2] has three parts. Given an input image X and a set of
nonlinear transformation functions, it first enhances the image with a particular
enhancement function over its varying parameters. The second phase consists of
measuring both the spatial ambiguity and the grayness ambiguity of the various
enhanced images X' using the algorithm in [4], and of checking whether these measures
possess any valley (minimum) as the parameters change. The same procedure is repeated
in the third stage for the other functions. Among all the valleys, the global one is
selected. The corresponding function with the prescribed parameter values can be
regarded as optimal, and the value of ambiguity at the global minimum can be viewed as
a quantitative measure of enhancement quality.

B. Modified approach

This process has been modified to enhance the texture automatically. This report fully
describes the proposed modifications and the results obtained with these methods. In
[2] the ambiguity in grayness and the compactness of the image are minimized. Instead,
here, the spatial ambiguity in texture is minimized. But this enhancement actually
comes prior to the feature extraction stage of content-based image retrieval, so the
texture distribution needs to stay as close to that of the initial image as possible.
Hence the measure of connectedness in the image is also maximized.

1. Calculation of Textural Ambiguity

A 3x3 gradient operator is run over the initial image (say Y) and a 5x5 averaging
operator is run on the resultant image to get the ‘edges per unit area’ image. Each
value in this image is passed through Zadeh’s S function, which yields the membership
value at each location, thus constituting a membership plane for texture (X).

The entropy can then be calculated by the expression defined in chapter 3.
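The S function used above to map each edge-per-unit-area value to a membership grade in [0, 1] can be sketched as follows. This follows the standard definition of Zadeh's S-function, with the usual convention that the crossover point b is the midpoint of a and c; the parameter choice is otherwise up to the implementation.

```c
/* Zadeh's standard S-function: 0 below a, 1 above c, with a smooth
 * quadratic transition and crossover value 0.5 at b = (a + c) / 2. */
double zadeh_s(double x, double a, double c)
{
    double b = (a + c) / 2.0;
    if (x <= a) return 0.0;
    if (x >= c) return 1.0;
    if (x <= b) {
        double t = (x - a) / (c - a);
        return 2.0 * t * t;            /* lower half: rises from 0 to 0.5 */
    } else {
        double t = (x - c) / (c - a);
        return 1.0 - 2.0 * t * t;      /* upper half: rises from 0.5 to 1 */
    }
}
```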

2. Calculation of Connectedness

In each neighborhood, the center pixel value is subtracted from each of its 4
neighbors and the minimum of the absolute values of these gradients is taken to
construct an image. This resultant image is used to calculate the membership function
for connectedness with Zadeh’s S function. The sum of the membership values over all
pixels is taken as the measure of disconnectedness of the whole image.
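The per-pixel quantity described above can be sketched as below. This is illustrative, not the report's code: border pixels use only the neighbors that exist, which is my assumption about the border handling.

```c
#include <stdlib.h>

/* Minimum absolute difference between the center pixel (r, c) and its
 * 4-neighbors; the image of these values feeds the connectedness
 * membership via the S-function. */
int min_neighbor_gradient(const unsigned char *pic, int rows, int cols,
                          int r, int c)
{
    static const int dr[4] = {-1, 1, 0, 0};
    static const int dc[4] = {0, 0, -1, 1};
    int best = 255;  /* largest possible 8-bit difference */
    for (int k = 0; k < 4; k++) {
        int rr = r + dr[k], cc = c + dc[k];
        if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) {
            int d = abs((int)pic[r * cols + c] - (int)pic[rr * cols + cc]);
            if (d < best)
                best = d;
        }
    }
    return best;
}
```

A pixel well connected to its surroundings yields a small value (some neighbor is close to it in gray level); an isolated pixel yields a large one, so summed memberships of these values behave as a disconnectedness measure.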

3. Algorithm

• Enhance the given image by any one of the four enhancement functions with a
specific set of parameter values.
• Calculate the product of texture entropy and disconnectedness.
• Change the parameter value and repeat the calculation of the product. This is done
for a set of parameter values, and the product plot is then probed for a global
minimum. If a global minimum is found, the process is stopped.
• If it is not, select another form of enhancement function and repeat the process
until a minimum is found for one of the functional forms.
• The image corresponding to the minimum is the enhanced image.
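The "probe the product plot for a global minimum" step above can be sketched as a small helper: the product values for the parameter sweep are collected in an array, and a valley is a global minimum that lies strictly inside the sweep (a minimum sitting on the boundary means no valley was found, so another functional form should be tried). The function name and the -1 convention are mine.

```c
/* Return the index of an interior global minimum of product[0..n-1]
 * (a valley), or -1 if the minimum sits on the boundary of the
 * parameter sweep, i.e. no valley exists for this functional form. */
int find_valley(const double *product, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (product[i] < product[best])
            best = i;
    return (best > 0 && best < n - 1) ? best : -1;
}
```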

4. Implementation

Algorithm is tested for the following cases:

Case 1: A smoothed image is given as input to the algorithm, and the product gives a
global minimum for functional form 4.

The original image (LANDSAT image) is shown in Figure 1. After the application of
the algorithm mentioned in (3) the output image is shown in figure 2.

Figure 3 shows the product plot of textural ambiguity and disconnectedness as the
parameter for functional form 4 (as described in chapter 3) varies.

Figure 1: Input Image1 Figure 2: Enhanced Image

Figure 3: Product Plot of Textural Ambiguity and Total connectedness

Case 2: The image shown in Figure 1 is taken as another input image for this
algorithm. Figure 2 shows its corresponding histogram. It can be seen that it is
concentrated in the middle.

Figure 3 shows the output image obtained after the application of the algorithm
described in (3) and Figure 4 shows the product plot of textural ambiguity and
disconnectedness as the parameter for functional form 4 (as described in chapter 3)
varies.

Figure 1: Input Image2

Figure 2: Histogram of the LANDSAT image – concentrated in the middle

Figure 3: The LANDSAT image after enhancement, corresponding to the stretched
histogram

Figure 4: Product Plot of Textural Ambiguity and disconnectedness as the


parameter for functional form 4 varies.

Case 3: The image shown in Figure 1 is taken as another input image for this
algorithm. It can be seen that its histogram is concentrated in the left region.

Figure 2 shows the output image obtained after the application of the algorithm
described in (3) and Figure 3 shows the product plot of textural ambiguity and total
connectedness as the parameter for functional form (as described in chapter 3)
varies.

Figure 1 Input Image3 Figure 2 Exposure corrected

Figure 3 Product Plot of Textural Ambiguity and total connectedness

Case 4: The image shown in Figure 1 is taken as another input image for this
algorithm. It can be seen that its histogram is concentrated in the right region
(over-exposed).

Figure 2 shows the output image obtained after the application of the algorithm
described in (3) and Figure 3 shows the product plot of textural ambiguity and total
connectedness as the parameter for functional form (as described in chapter 3) varies.

Figure 1: Over Exposed image Figure 2: Corrected image

Figure 3: Product Plot of Textural Ambiguity and total Connectedness

Chapter 5

APPLICATION OF IMAGE PROCESSING: SEAM TRACKING

A. Seam Tracking:

1. Introduction:

The use of robots in the manufacturing industry has increased rapidly during the past
decade. Arc welding is an actively growing area, and many new procedures have been
developed for use with new lightweight, high-strength alloys. One of the basic
requirements for such applications is seam tracking. Seam tracking is required because
of inaccuracies in joint fit-up and positioning, warpage, and distortion of the work
piece caused by thermal expansion and stresses during welding.

2. Description of apparatus arrangement:

A laser beam is projected onto a measurement surface, where it is scattered; its image
is detected by an optical detector, a camera. The images [6] contain the details of
the groove measurements. A PC processes the images, infers the details and gives
feedback to the motor.

Fig 1: Apparatus of the system with interconnections

B. Previous Approach:

As described in [6], the algorithm takes the image ROI as direct input (with a thick
LASER line), and operations such as filtering, edge detection and thresholding are
then performed on the image.

The algorithm in previous approach involves the following steps:

• Calibrate the image with help of set of blocks of known dimensions.


• Get the image and store it into an array variable.
• Convert the true color image into gray scale image.
• Select a region of interest for further processing.
• Filter the image by removing noises.
• Detect the edge of the images using edge detection operator.
• Once edge detection is done, execute the MATLAB code to find the various
parameters required.
• Calculate the pixel values of edge and root centers.
• Estimate the amount of deviation in successive values of edge center using the
calibration measurements.

• Give corresponding feedback to the motor so as to correct the deviation.

Figure 2 Typical region of interest obtained from the welding groove for LASER line
source welding seam tracking apparatus

Figure 3 Typical region of interest obtained from the calibration bar for LASER line
source welding seam tracking apparatus
As we observe, in both cases the LASER line source is 6-8 pixels wide. But a precise
measurement demands a very thin LASER line, and template matching and other
measurement methods likewise demand thin lines. Since obtaining a thinner LASER line
would require a costly source, we can instead use thinning algorithms to obtain a thin
LASER line.

C. Using thinning algorithms:

Algorithm Description:

In [7] a parallel gray-tone thinning algorithm is described. Gray-tone thinning (GT)
can be thought of as a generalization of two-tone thinning. In the two-tone thinning
algorithm, object pixels which are adjacent to the background are mapped to the
background value. Similarly, in GT, pixels which are very close to the background both
in location and in gray level are mapped to the local maximum value (the local
background value). This similarity suggests that a two-tone thinning algorithm can be
modified to suit the gray-level environment.

To implement this algorithm for a gray-level picture with similar checking of
conditions, the neighborhood pixels around the candidate pel are temporarily mapped to
some compatible state.
The threshold value calculated over an (N x N) window is given by

where pi for a (3 x 3) window is as specified above.


For a (3 x 3) window,

the mapped threshold value of the neighborhood pixels is

The Algorithm:
It is a two-pass algorithm.
Pass 1:

3 <= B(p) <= 5, (1)

A(p) = 1, (2)

Here A(p) is the number of '01' patterns in the ordered set p1, p2, ..., p8 (see
Figure 1), and B(p) is the total number of nonzero neighbors of the candidate pixel
xmn.

(3) and (4)

These four conditions are checked in the first pass and, if all of them are found to
be true, the candidate pixel xmn (figure above) is changed to the new value:

Pass 2:
In pass 2, equations (1), (2), (5) and (6) are checked and the same action as in
pass 1 is taken if all four equations are satisfied.

(5) and (6)

This 2-pass algorithm is repeated until the thickness comes down to acceptable levels.
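The two neighborhood counts used in the pass conditions, A(p) and B(p), can be sketched as below. Only these counts are defined in the text above; conditions (3)-(6) are not reproduced here. The clockwise ordering of the eight neighbors is my assumption for illustration.

```c
/* Neighborhood counts for the thinning pass conditions:
 * B(p) = number of nonzero 8-neighbors of the candidate pixel,
 * A(p) = number of '01' patterns (0 -> nonzero transitions) in the
 *        ordered circular sequence p1, p2, ..., p8, p1. */
void neighbor_counts(const int p[8], int *A, int *B)
{
    *A = 0;
    *B = 0;
    for (int k = 0; k < 8; k++) {
        if (p[k] != 0)
            (*B)++;
        if (p[k] == 0 && p[(k + 1) % 8] != 0)  /* a '01' pattern */
            (*A)++;
    }
}
```

A pass then keeps or deletes the candidate pixel depending on whether 3 <= B(p) <= 5, A(p) = 1, and the remaining conditions hold.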

Implementation of the thinning algorithm in C:
The algorithm is coded in C and, for several of the seam tracking images obtained, the
automatic enhancement and thinning algorithms are run in conjunction to obtain thinned
versions of the images.
The results are as shown:

Figure 4 Calibration Image Thinned version

Figure 5 Seam Image thinned version

FUTURE PROSPECTS OF WORK:

The algorithm quoted in [6] is a typical machine vision algorithm, but not much
intelligence is embedded in it; embedding such intelligence can make the measurements
less error-prone. Soft computing tools such as fuzzy logic can be used to modify the
existing algorithm; the automatic enhancement and thinning algorithms are efforts in
this direction. Further progress can be made by incorporating contour searching
algorithms and detecting patterns in the search results. The ambiguity in detecting
corners can be modeled by fuzzy cornerness [8].
Prof. Kundu’s guidance and the exposure to image processing and soft computing tools
obtained at the Center for Soft Computing Research will, I believe, help me make
further progress in this direction.

CONCLUSION:

Various forms of enhancement functions have been implemented in C. A fuzzy measure for
textural ambiguity and disconnectedness has been proposed. An algorithm for automatic
enhancement using fuzzy sets has been modified, using minimization of textural
ambiguity and disconnectedness, such that it automatically enhances the texture in the
image. The algorithm has been tested on images of various kinds and the outputs are
encouraging. As an application of image processing to seam tracking, thinning
algorithms have been successfully used to process welding seam images. This work is a
step towards an algorithm with more deterministic control for seam tracking.

REFERENCES:

[1] R. M. Haralick, “Statistical and structural approaches to texture,” Proc. IEEE,
vol. 67, no. 5, pp. 786-804, May 1979.

[2] M. K. Kundu and S. K. Pal, “Automatic selection of object enhancement operator
with quantitative justification based on fuzzy set theoretic measure,” Pattern
Recognition Letters, vol. 11, pp. 811-829, 1990.

[3] A. Rosenfeld and E. Troy, “Visual texture analysis,” Tech. Rep. 70-116, University
of Maryland, College Park, MD, June 1970. Also in Conference Record of the Symposium
on Feature Extraction and Selection in Pattern Recognition, Argonne, IL, IEEE
Publication 70C-51C, Oct. 1970, pp. 115-124.

[4] A. Rosenfeld and M. Thurston, “Edge and curve detection for visual scene
analysis,” IEEE Trans. Comput., vol. C-20, pp. 562-569, May 1971.

[5] K. H. Lee, First Course on Fuzzy Theory and Applications, Series: Advances in Soft
Computing, vol. 27, Springer-Verlag, Berlin, March 2005.

[6] A. Raman, A. Reddy and H. Reddy, “Laser vision based seam tracking system for
welding automation,” The International Conference on Image Processing, Computer
Vision, and Pattern Recognition (IPCV, WorldComp'08 Congress, Las Vegas, USA), July
2008.

[7] M. K. Kundu, B. B. Chaudhuri and D. Dutta Majumder, “A parallel graytone thinning
algorithm (PGTA),” Pattern Recognition Letters, vol. 12, no. 8, pp. 491-496, 1991.

[8] M. Banerjee and M. K. Kundu, “Content based image retrieval with multiresolution
salient points,” ICVGIP 2004, pp. 399-404.

[9] D. Phillips, Image Processing in C: Analyzing and Enhancing Digital Images, R&D
Publications, 1994.

