Finding Diameter Using Matlab
Measuring objects within an image or frame can be an important capability for many
applications where computer vision is required instead of making physical measurements.
This application note covers a basic step-by-step algorithm for isolating a desired object
and measuring its diameter. MATLAB is a high-level language and interactive environment
for computation, visualization, and programming. The Image Processing Toolbox is an
application available for use in MATLAB that provides a comprehensive set of reference-
standard algorithms, functions, and apps for image processing, analysis, visualization, and
algorithm development. These tools provide a fast and convenient way to process and
analyze images without the need for advanced knowledge of a complex coding language;
here they are used to measure the dimensions of an object within an image without making
physical measurements.
INDEX
1 INTRODUCTION
1.1 MEASUREMENT TASKS
1.2 LITERATURE SURVEY
2 IMAGE PROCESSING AND MEASURING IN IMAGE
2.1.1 PIXEL
2.2 DIGITAL IMAGE PROCESSING
2.2.5 FILTERING
2.3 NON-CONTACT MEASUREMENT
2.4 IMAGE NOISE
2.4.1 HOLES
2.4.2 BLOB
3 SOFTWARE USED
4 SYSTEM DEVELOPMENT
5 MODELLING
6 RESULT
7 REFERENCE
LIST OF FIGURES
1. PIXELS
2. ORIGINAL IMAGE
3. SEGMENTED IMAGE
5. NOISE REMOVAL
1. INTRODUCTION
If you want to measure objects that are represented by simple shapes like circles, ellipses,
rectangles, or lines, and you have approximate knowledge about their positions,
orientations, and dimensions, you can use 2D metrology to determine the exact shape
parameters. In particular, the values of the initial shape parameters are refined by a
measurement that is based on the exact location of edges within so-called measure regions.
1.1 Measurement tasks

There is a broad range of different 2D measurement tasks. Measuring in images consists of the
extraction of specific features of objects. 2D features that are often extracted comprise:
1. the area of an object, i.e., the number of pixels representing the object;
2. the dimensions of an object, i.e., its diameter, width, height, or the distance between
objects or parts of objects;
3. the number of objects.
To extract the features, several tools are available. Which tool to choose depends on the goal
of the measuring task, the required accuracy, and the way the object is represented in the image.
Preprocessing is recommended if the conditions during image acquisition are not ideal,
e.g., the image is noisy or cluttered, or the object is disturbed or overlapped by objects of small
extent, so that small spots or thin lines prevent the actual object of interest from being
described by a homogeneous region. Often-applied preprocessing steps comprise the
elimination of noise using a mean or binomial filter and the suppression of small spots
or thin lines with a median filter. Smoothing of the image can be realized with a smoothing
function; if you want to smooth the image while preserving edges, you can apply anisotropic
diffusion instead. For regions, holes can be filled up using a fill operation or a morphological
operator. Morphological operators modify regions to suppress small areas, regions of a given
orientation, or regions that are close to other regions. With an inhomogeneous
background, a shading correction is suitable to compensate for the influence of the background:
a reference image of the background without the object to measure is taken and
subtracted from the images containing the object. A sketch of these steps follows below.
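As a hedged illustration of these steps in MATLAB, the sketch below applies median filtering, smoothing, hole filling, morphological opening, and a background-subtraction shading correction; both file names and all parameter values are assumed placeholders, not part of the original method.

img = imread('object.png');                 % placeholder file name
gray = im2gray(img);                        % work on a single intensity plane
smoothed = imgaussfilt(gray, 2);            % suppress noise by smoothing
despeckled = medfilt2(smoothed);            % remove small spots and thin lines
bw = imbinarize(despeckled);                % binary region for morphological cleanup
filled = imfill(bw, 'holes');               % fill up holes in the region
opened = imopen(filled, strel('disk', 5));  % suppress small areas
% Shading correction: subtract a reference image of the background alone.
bg = im2gray(imread('background.png'));     % assumed pre-captured background shot
corrected = imsubtract(gray, bg);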
The most common example, which you have probably noticed in crime scene photos, is when an
object of known size, such as a pencil or, even better, a ruler, is placed next to the object of
interest. From this it is very easy to estimate the size of the object of interest.
As long as you know the focal length and the object distance, both of which some lenses
report to the camera, you can calculate the real size of an item in the image. The focal length
gives you the lens field of view in terms of degrees.
For example, a 100 mm lens has a vertical field of view of 14 degrees on a full-frame camera.
Something that fills half your frame vertically therefore spans 7 degrees of the field of view.
If you know the distance, you can now build a right-angled triangle with internal angles of 7, 90,
and 83 degrees (since the internal angles of a triangle add up to 180 degrees).
At this point you have enough information to use trigonometry (you need two angles
and a length, or two lengths and an angle) to calculate the remaining sides of the triangle and
hence the size of the object.
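To make the arithmetic concrete, here is a small sketch of that calculation; the distance is a placeholder, and it assumes the object's 7-degree angular extent forms one angle of the right triangle described above.

distance = 5;              % camera-to-object distance in metres (placeholder)
fov = 14;                  % vertical field of view of a 100 mm lens on full frame, degrees
angularExtent = fov / 2;   % object fills half the frame vertically -> 7 degrees
% Right-angled triangle with angles 7, 90, and 83 degrees:
objectSize = distance * tand(angularExtent);
fprintf('Estimated object size: %.2f m\n', objectSize);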
Another method would be to use some clever mathematics based on whatever information
you can gather, such as the focal length of a camera or the angle of your camera's view
frustum.
1.2 Literature survey

Rajagopalan and Chaudhuri [2] divided the image into many sub-images and superimposed
the sub-images by using a block shift-variant blur model.
In "Algorithms for Image Thresholding", Mohmad A. El-Sayed [5] describes various
thresholding algorithms, such as spatial thresholding and improved adaptive thresholding.
R. Thilepa and M. Thanikachalam (2010) [7] describe an image processing technique using
MATLAB that identifies faults present in fabrics. An image is taken first; then noise filtering,
histogram, and thresholding techniques are applied to it to obtain the output. Specifically, the
color image of the fabric fault is input to the MATLAB image processing system, the color
image is converted to a gray image, noise is removed and filtered from the image, the
noise-removed output is converted to a binary image, the histogram output is obtained, and
finally the thresholding technique is applied.
Ching Yee Yong et al. (2012) [8] note that MATLAB is among the most widely used software
packages in image processing. Their paper first introduces a general view of the
visualization tools in medical image processing, then states the objectives of the study, and
then discusses the background of the studies, the literature review, and the study
implementation. The computer environment, developmental tools, and the processing and
analysis of various medical images are discussed next, followed by the results, conclusions,
future developments, and possible enhancements and improvements to the study.
Anita Chaudhary and Sonit Sukhraj Singh (2012) [10] describe three stages in their study: a
preprocessing stage, a feature extraction stage, and a lung cancer cell identification stage. The
watershed segmentation method is used to separate a lung in a CT image, after which a small
scanning window is applied to check whether any pixel is part of a disease lump. Most of the
lumps can be detected if the parameters are carefully selected; the main purpose of the study
is to computerize this selection process.
Another study works on the physical separation and nutrient content of seeds using
different methods such as erosion and dilation, the watershed model, and the line-draw method.
Dr. N. Senthilkumaran (2012) [13] presents the theory of edge detection for dental X-ray
image segmentation using a neural network approach. Neural networks have been applied to
edge detection because of their adaptive learning ability and nonlinear mapping ability; once
fully trained, they can detect edges and serve as nonlinear filters. Senthilkumaran also studies
an edge detection method for dental X-ray image segmentation based on a genetic algorithm
approach. It first selects a random point, which divides the 2D array into four parts, and then
exchanges the genetic material in two of the four parts. The mutation operation is used, just
like the traditional one: random genes are selected and their bits are toggled.
2. IMAGE PROCESSING AND MEASURING IN IMAGE
2.1.1 Pixel
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the
smallest addressable element in an all-points-addressable display device; it is thus the smallest
controllable element of a picture represented on the screen.
Fig. 1: Pixels
Each pixel is a sample of an original image; more samples typically provide more accurate
representations of the original. The intensity of each pixel is variable. In color imaging
systems, a color is typically represented by three or four component intensities such as red,
green, and blue, or cyan, magenta, yellow, and black.
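For instance, MATLAB stores an RGB image as an M-by-N-by-3 array, so the component intensities of a single pixel can be inspected directly; the file name and coordinates below are placeholders.

img = imread('object.png');      % placeholder file name
px = squeeze(img(100, 200, :));  % red, green, and blue intensities of one pixel
fprintf('R=%d G=%d B=%d\n', px(1), px(2), px(3));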
2.2 Digital Image Processing
Digital image processing is the use of computer algorithms to perform image processing on
digital images. As a subcategory or field of digital signal processing, digital image processing
has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up of
noise and signal distortion during processing. Since images are defined over two dimensions
(perhaps more), digital image processing may be modeled in the form of multidimensional
systems. The generation and development of digital image processing are mainly affected by
three factors: first, the development of computers; second, the development of mathematics
(especially the creation and improvement of discrete mathematics theory); third, the demand
for a wide range of applications in environment, agriculture, military, industry and medical
science has increased. Digital image processing allows the use of much more complex
algorithms, and hence can offer both more sophisticated performance at simple tasks and the
implementation of methods which would be impossible by analog means.
Image editing encompasses the processes of altering images, whether they are digital
photographs, traditional photo-chemical photographs, or illustrations. Traditional analog
image editing is known as photo retouching, using tools such as an airbrush to modify
photographs, or editing illustrations with any traditional art medium. Graphic software
programs, which can be broadly grouped into vector graphics editors, raster graphics editors,
and 3D modelers, are the primary tools with which a user may manipulate, enhance, and
transform images. Many image editing programs are also used to render or create computer
art from scratch.
Image restoration is the operation of taking a corrupt/noisy image and estimating the clean,
original image. Corruption may come in many forms, such as motion blur, noise, and camera
mis-focus. Image restoration is performed by reversing the process that blurred the image; as
such, it is performed by imaging a point source and using the point source image, which is
called the point spread function (PSF), to restore the image information lost to the blurring
process.
Artificial neural networks (ANN) or connectionist systems are computing systems that are
inspired by, but not identical to, biological neural networks that constitute animal brains.
Such systems "learn" to perform tasks by considering examples, generally without being
programmed with task-specific rules. For example, in image recognition, they might learn to
identify images that contain cats by analyzing example images that have been manually
labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this
without any prior knowledge of cats, for example, that they have fur, tails, whiskers and cat-
like faces. Instead, they automatically generate identifying characteristics from the examples
that they process.
Pattern recognition is the automated recognition of patterns and regularities in data. Pattern
recognition is closely related to artificial intelligence and machine learning, together with
applications such as data mining and knowledge discovery in databases (KDD), and is often
used interchangeably with these terms. However, these are distinguished: machine learning is
one approach to pattern recognition, while other approaches include hand-crafted (not
learned) rules or heuristics; and pattern recognition is one approach to artificial intelligence,
while other approaches include symbolic artificial intelligence.
Using images like X-ray images, we are able to detect the length of tumors and cracks and
also to find the dimensions of objects.
2.2.5 Filtering
Digital filters are used to blur and sharpen digital images. Filtering can be performed by
convolution with specifically designed kernels (filter arrays) in the spatial domain, or by
masking specific frequency regions in the frequency (Fourier) domain. A sketch of both
approaches follows.
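As a hedged sketch, the following blurs an image by spatial convolution with a Gaussian kernel and, alternatively, by masking high frequencies in the Fourier domain; the file name, kernel size, and cutoff radius are illustrative assumptions.

gray = im2gray(imread('object.png'));   % placeholder file name

% Spatial domain: convolution with a specifically designed kernel.
kernel = fspecial('gaussian', 7, 1.5);  % 7x7 Gaussian blur kernel
blurredSpatial = imfilter(gray, kernel, 'replicate');

% Frequency domain: keep only low frequencies (illustrative cutoff radius).
F = fftshift(fft2(double(gray)));
[rows, cols] = size(gray);
[X, Y] = meshgrid(1:cols, 1:rows);
mask = hypot(X - cols/2, Y - rows/2) < 30;
blurredFreq = real(ifft2(ifftshift(F .* mask)));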
2.3 Non-contact measurement
(i) Spatial measurements: measurements of distance, area, and volume. These involve the
first two dimensions of the image, its width and height.
(ii) Density measurements: measurements involving the third dimension, the pixel values.
Pixel values can represent temperature, elevation, salinity, population density, or virtually
any phenomenon you can quantify.
Before you can make meaningful measurements, you need to calibrate the image — that is,
"tell" the software what a pixel represents in real-world terms of size or distance (spatial
calibration), in terms of what the pixel values mean (density calibration), or both. In this
section, you will learn how to spatially calibrate digital images.
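A minimal spatial-calibration sketch, assuming a reference object of known width is visible in the image; all the values below are placeholders.

knownWidthMM = 50;             % real width of a reference object, in mm
widthInPixels = 400;           % its measured width in the image, in pixels
mmPerPixel = knownWidthMM / widthInPixels;
measuredPixels = 128;          % any other length measured in the image
realLengthMM = measuredPixels * mmPerPixel   % the same length in millimetres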
2.4 Image noise

Image noise is random variation of brightness or color information in images, and is usually
an aspect of electronic noise. It can be produced by the sensor and circuitry of a scanner or
digital camera. Image noise can also originate in film grain and in the unavoidable shot noise
of an ideal photon detector. Image noise is an undesirable by-product of image capture that
obscures the desired information.
2.4.1 Holes
Consider holes in a binary image where the boundary of the required object is dark and
distinct from its white background. We use simple image thresholding to separate the
boundary from the background: pixels with intensities above a certain value (the threshold)
are the background, and the rest are the foreground. Black represents the background and
white represents the foreground. Unfortunately, even though the boundary has been nicely
extracted (it is solid white), the interior of the object has intensities similar to the
background, leaving holes that must be filled.
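A short sketch of this situation, assuming a dark object boundary on a light background; imfill closes the interior once the boundary has been thresholded (the file name is a placeholder).

gray = im2gray(imread('object.png'));  % placeholder file name
bw = ~imbinarize(gray);                % dark boundary becomes white foreground
filled = imfill(bw, 'holes');          % fill the interior enclosed by the boundary
imshowpair(bw, filled, 'montage');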
2.4.2 BLOB
A Binary Large Object (BLOB) is a collection of binary data stored as a single entity in a
database management system. Blobs are typically images, audio, or other multimedia objects,
though sometimes binary executable code is stored as a blob. A blob is a data type that can
store binary data, which is different from most other data types used in databases, such as
integers, floating-point numbers, characters, and strings, which store letters and numbers.
Since blobs can store binary data, they can be used to store images or other multimedia files.
For example, a photo album could be stored in a database using a blob data type for the
images and a string data type for the captions.
Because blobs are used to store objects such as images, audio files, and video clips, they
often require significantly more space than other data types. The amount of data a blob can
store varies depending on the database type, but some databases allow blob sizes of several
gigabytes.
3. SOFTWARE USED
MATLAB was first adopted by researchers and practitioners in control engineering, the
specialty of its co-founder Jack Little, but quickly spread to many other domains. It is now
also used in education, in particular for teaching linear algebra and numerical analysis, and is
popular amongst scientists involved in image processing.
Image Processing Toolbox apps let you automate common image processing workflows. You
can interactively segment image data, compare image registration techniques, and batch-
process large datasets. Visualization functions and apps let you explore images, 3D volumes,
and videos; adjust contrast; create histograms; and manipulate regions of interest.
Image analysis is the process of extracting meaningful information from images such as
finding shapes, counting objects, identifying colors, or measuring object properties. The
toolbox provides a comprehensive suite of reference-standard algorithms and visualization
functions for image analysis tasks such as statistical analysis and property measurement.
The Color Thresholder app lets you threshold color images by manipulating the color
components of these images, based on different color spaces. Using this app, you can create a
segmentation mask for a color image.
For example, you can segment an image based on regions with similar color, displaying the
image in different color spaces to differentiate objects in the image. You can perform color
thresholding on an image acquired from a live USB webcam, and you can use the point cloud
control to segment an image by selecting a range of colors belonging to the object you want
to isolate. A hand-written sketch of this kind of color segmentation follows.
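The Color Thresholder app can export a segmentation function automatically; an equivalent hand-written sketch in the HSV color space might look like the following, where the file name and the hue and saturation ranges are assumed example values for a blue object (the elementwise mask multiplication relies on implicit expansion, MATLAB R2016b or later).

img = imread('object.png');            % placeholder file name
hsv = rgb2hsv(img);
% Keep pixels whose hue falls in an assumed blue range with enough saturation.
mask = hsv(:,:,1) > 0.55 & hsv(:,:,1) < 0.70 & hsv(:,:,2) > 0.3;
segmented = img .* uint8(mask);        % zero out everything outside the mask
imshow(segmented);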
4. SYSTEM DEVELOPMENT
4.1 ALGORITHM:
1. Read the image.
2. Segmentation.
3. Thresholding.
4. Remove noise.
5. Measure the object.
4.1.2 Segmentation
In image processing, image segmentation is the process of partitioning a digital image into
multiple segments (sets of pixels, also known as image objects). The goal of segmentation is
to simplify and/or change the representation of an image into something that is more
meaningful and easier to analyze. Image segmentation is typically used to locate objects
and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the
process of assigning a label to every pixel in an image such that pixels with the same label
share certain characteristics.
The result of image segmentation is a set of segments that collectively cover the entire image,
or a set of contours extracted from the image (see edge detection). Each of the pixels in a
region is similar with respect to some characteristic or computed property, such
as color, intensity, or texture. Adjacent regions are significantly different with respect to the
same characteristic(s). When applied to a stack of images, typical in medical imaging, the
resulting contours after image segmentation can be used to create 3D reconstructions with the
help of interpolation algorithms like Marching cubes.
4.1.3 Thresholding
The simplest method of image segmentation is called the thresholding method. This method
is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image.
The key of this method is to select the threshold value (or values when multiple-levels are
selected). Several popular methods are used in industry including the maximum entropy
method, balanced histogram thresholding, Otsu's method (maximum variance), and k-means
clustering.
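As a minimal sketch of the clip-level approach, the following uses Otsu's method as implemented by MATLAB's graythresh; the file name is a placeholder.

gray = im2gray(imread('object.png'));  % placeholder file name
level = graythresh(gray);              % Otsu: maximizes between-class variance
bw = imbinarize(gray, level);          % clip-level turns grayscale into binary
imshowpair(gray, bw, 'montage');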
Recently, methods have been developed for thresholding computed tomography (CT) images.
The key idea is that, unlike Otsu's method, the thresholds are derived from the radiographs
instead of the (reconstructed) image.
5. MODELLING

Here we implement the algorithm as MATLAB code that determines the diameter of an
object within an image.

5.1 Read the image

Code:
clear;
clc;
obj = imread('filename.jpg');   % replace with the image file to be measured
imshow(obj);
5.2 Segment image based on intensities

We divide the image into its respective RGB intensity planes.

Code:
red = obj(:, :, 1);
green = obj(:, :, 2);
blue = obj(:, :, 3);
figure(1)
subplot(2,2,1);
imshow(obj);
title('Original image');
subplot(2,2,2);
imshow(red);
title('Red plane');
subplot(2,2,3);
imshow(green);
title('Green plane');
subplot(2,2,4);
imshow(blue);
title('Blue plane');
5.3 Thresholding

The blue plane is the best choice for image thresholding because it provides the most
contrast between the desired object (foreground) and the background. Image thresholding
takes an intensity image and converts it into a binary image based on the desired level (see
the code below): a value between 0 and 1 determines which pixels, based on their value, will
be set to 1 (white) or 0 (black). To choose the value best suited for your application,
right-click on the value, select "Increment Value and Run Section" at the top of the menu,
set the increment value to 0.01, and choose the best value at which to threshold. Figure 5
shows the result of the image thresholding at 0.37. You can see that the image (top right of
Figure 6) has been segmented between the object we desire to measure and the background.
Code:
figure(2);
level = 0.37;                 % threshold level chosen for the blue plane
bw2 = im2bw(blue, level);     % binary conversion (imbinarize in newer MATLAB releases)
subplot(2,2,1);
imshow(bw2);
title('Blue plane threshold');
5.4 Noise removal

Fig. 5: Image after noise removal
Code:
filled = imfill(bw2, 'holes');      % fill holes inside the object
subplot(2,2,2);
imshow(filled);
title('Holes filled');
noBorder = imclearborder(filled);   % remove blobs touching the image border
subplot(2,2,3);
imshow(noBorder);
title('Remove blobs on border');
se = strel('disk', 7);
opened = imopen(noBorder, se);      % remove small blobs
subplot(2,2,4);
imshow(opened);
title('Remove small blobs');
5.5 Measuring the diameter

Code:
stats = regionprops(opened, 'MajorAxisLength');   % assumes one object remains
diameter = stats.MajorAxisLength                  % pixels; no semicolon, so it prints
figure(3)
imshow(obj)
d = imdistline;                                   % draggable line for a manual check
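If a spatial calibration factor is known (see the calibration discussion in Section 2.3), the pixel diameter can be converted to real units. A minimal usage sketch, with an assumed placeholder calibration value:

mmPerPixel = 0.125;   % placeholder: mm represented by one pixel, from calibration
fprintf('Diameter: %.1f px (%.2f mm)\n', diameter, diameter * mmPerPixel);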
6. RESULT

The diameter is now displayed in pixels in the Command Window. This was verified by
using the imdistline function, which draws a draggable measurement line over the figure. As
you can see between the two figures, the value calculated by the code was very close to the
manual measurement. We learned how to obtain accurate measurements within an image
using MATLAB and its Image Processing Toolbox. This gave us a better understanding of
the principles of the system and practice in using the different measuring syntax for a variety
of objects. The main lesson we gained from this project is that, when it comes to
measurements, it is best to make them as precise as possible, since this is an essential skill
that will be required more often as we progress in the field of engineering.
7. REFERENCES

[2] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Second Edition,
Prentice-Hall, 2001. ISBN 0-20-118075-8.
[3] Bernd Jähne, Digital Image Processing, Fourth Edition, Springer-Verlag, 1997. ISBN
3-540-62724-3.
[4] Anil K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, 1989. ISBN
0-13-336165-9.