Colour Image Segmentation Using FPGA


Chapter 1

INTRODUCTION

In image research and applications, people are usually interested only in certain parts of an image. These parts are often called the target or foreground (the remainder is the background), and they generally correspond to regions of the image with specific, distinctive properties. To identify and analyse an object, these regions must be extracted and separated; on that basis the target can be put to further use. Image segmentation is the technique and process of dividing an image into regions with distinct features and extracting the target of interest. The features may be pixel grayscale, colour, texture, and so on. A pre-defined target may correspond to a single region or to multiple regions. To illustrate where segmentation sits within image processing, the concept of "image engineering" has been introduced; it brings the theories, methods, algorithms, tools, and equipment involved in image segmentation into one overall framework. Image engineering is a new subject for research and application in the image field, and its content is very rich. According to the degree of abstraction and the research methods used, it can be divided into three levels: image processing, image analysis, and image understanding.

Figure 1 - Hierarchical Needs

Dept. of ECE, VJCET Page 1



Image processing emphasises transformations between images and improving an image's visual appearance. Image analysis mainly monitors and measures the targets of interest in an image in order to obtain objective information about them and build up a description of the image. The key point of image understanding is the further study of the nature of each target and the relationships between them, obtaining an explanation of the scene behind the original image and, as a result, guidance for planning actions.

Image processing, image analysis, and image understanding operate at different levels; refer to Figure 1. Image processing is a relatively low-level operation, working mainly at the pixel level. Image analysis occupies the middle level, focusing on measurement, expression, and description of the target. Image understanding is mainly a high-level operation, essentially concerned with reasoning over the symbolic data abstracted from the description.

Image segmentation is the key step from image processing to image analysis, and it occupies an important place. On the one hand, it is the basis of target expression and strongly affects feature measurement. On the other hand, segmentation, the target expression based on it, and the feature extraction and parameter measurement that convert the original image into a more abstract and compact form together make high-level image analysis and understanding possible.

In practice, image segmentation is applied very widely; it appears in almost every area related to image processing and involves every type of image. In these applications, segmentation is usually used for image analysis, recognition, compression coding, and so on.

Dept. of ECE, VJCET Page 2


Colour Image Segmentation Using FPGA

Chapter 2

THE STUDY OF COLOR IMAGE SEGMENTATION

Human eyes can distinguish thousands of colours but only about 20 levels of grayscale, so a target can be found easily and accurately in a colour image while it is difficult to find in a grayscale image. The reason is that colour provides more information than grayscale; colour is very useful, and often necessary, for pattern recognition and machine vision. At present there are fewer approaches designed specifically for colour image segmentation than for grayscale images; most proposed colour segmentation methods combine existing grayscale segmentation methods with different colour spaces. Commonly used colour image segmentation methods include histogram thresholding, feature-space clustering, region-based approaches, edge-detection-based methods, fuzzy methods, artificial neural network approaches, physical-model-based methods, and so on.

The basic idea of region growing is to collect pixels with similar properties into a region. First, a seed pixel is found as a starting point for each region to be segmented. Then the pixels around the seed that have the same or similar properties (judged by a pre-determined growing or similarity criterion) are merged into the seed's region. Each newly added pixel acts as a new seed, and the process continues until no more pixels satisfying the condition can be included; the region has then grown. In practical applications of this method, three questions must be addressed: first, choose or determine a group of seed pixels that correctly represent the required region; second, fix the formula that decides which adjacent pixels are included during growth; third, make rules or conditions to stop the growth process. The advantage of the region growing algorithm is that it is easy to implement and compute. As with thresholding, region growing is rarely used alone; it is usually combined with other segmentation methods. The practical method in this work combines the watershed algorithm and region growing for colour image segmentation. The disadvantages of region growing are: first, it needs human interaction to obtain the seed points, so the user must plant a seed in every region to be extracted; second, the growth patterns are sensitive to noise, so under local effects the extracted region may contain holes or may link separate regions. This work selects seed pixels automatically according to fixed rules, which effectively solves the first question, and carries out region growing on the basis of the watershed segmentation algorithm, which effectively solves the second question. The domain decomposition technique repeatedly splits a seed region into four rectangular regions until the interior of every region is similar. Region merging is often combined with region growing and domain decomposition in order to merge similar sub-regions into domains as large as possible. The disadvantage of domain decomposition is that it may destroy region borders.

2.1 ALGORITHM BASICS


The basic idea of the region growing method is to collect pixels with similar properties into a region. The steps are as follows. First, find a seed pixel as a starting point for each region to be segmented. Then merge the pixels around the seed that have the same or similar properties (judged by a pre-determined growing or similarity criterion) into the seed's region. Each newly added pixel acts as a new seed, and the process continues until no more pixels satisfying the condition can be included. In the practical application of this method, three questions must be addressed:

a) Choose or determine a group of seed pixels that correctly represent the required region;

b) Fix the formula that decides which adjacent pixels are included during growth;

c) Make rules or conditions to stop the growth process.

The seeded region growing algorithm was proposed by Adams and Bischof; Mehnert and Jackway further described the dependency relationships between pixels in seed growth:

i) First-order dependence occurs when a number of pixels have the same difference ratio to their vicinity.

ii) Second-order dependence occurs when a single pixel has the same difference ratio to its vicinity.
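The growth loop behind these three questions can be sketched in C (a minimal, single-seed illustration on a hypothetical 8x8 grayscale image, using a fixed intensity-difference threshold in place of the hue and saturation criteria used later in this work):

```c
#include <stdlib.h>

#define W 8
#define H 8

/* Grow one region from seed (sx, sy): a pixel joins the region when the
 * absolute difference between its gray value and the seed's is <= thr.
 * 4-connected neighbourhood; returns the number of pixels labelled. */
int region_grow(unsigned char img[H][W], int sx, int sy,
                int thr, unsigned char label[H][W])
{
    int stack[W * H][2], top = 0, count = 0;
    int seed = img[sy][sx];
    static const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            label[y][x] = 0;

    stack[top][0] = sx; stack[top][1] = sy; top++;
    label[sy][sx] = 1;                    /* mark the seed itself */

    while (top > 0) {                     /* grow until nothing qualifies */
        top--;
        int x = stack[top][0], y = stack[top][1];
        count++;
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (label[ny][nx]) continue;  /* already in the region */
            if (abs((int)img[ny][nx] - seed) <= thr) {
                label[ny][nx] = 1;
                stack[top][0] = nx; stack[top][1] = ny; top++;
            }
        }
    }
    return count;
}

/* Demo: a 4x4 bright square (value 200) on a dark background; growing
 * from its corner with thr = 10 labels exactly those 16 pixels. */
int region_grow_demo(void)
{
    unsigned char img[H][W] = { { 0 } }, label[H][W];
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            img[y][x] = 200;
    return region_grow(img, 0, 0, 10, label);
}
```

All three questions appear in the sketch: the seed is given explicitly, the inclusion formula is the absolute-difference test, and growth stops when no unlabelled neighbour satisfies it.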

Frank and Shouxian Cheng applied an automatic seed selection method: they selected seeds that represent the regions to be segmented based on certain similarity criteria, and proposed a strategy to resolve the two orders of pixel dependence above. The method in this paper combines the watershed algorithm with Frank and Shouxian Cheng's approach and proposes a new seed region growing method. The choice of growth criterion depends not only on the specific problem itself but also on the type of image data in practice. For example, when the image is in colour, the result will suffer if a monochrome criterion is used. Therefore, in this paper seed selection and region growth are carried out according to hue and saturation in the colour image.

Seeded region growing (SRG) is an image segmentation method proposed by Adams and Bischof. It begins with a set of "seed" points and, as each region grows, attaches to every seed the adjacent pixels that have properties similar to it (such as grayscale or a specific range of colour) [5]. This article proposes a new region growing algorithm on the basis of the traditional seeded region growing algorithm: starting from the regions formed by the watershed algorithm, it automatically selects some regions as seed regions according to certain rules and then grows them.

Figure 2 – Steps in Algorithm

First, the watershed algorithm is used to produce an initial segmentation of the image. Second, according to certain rules, some of the resulting regions are automatically selected as seed regions. On this basis, region growing is carried out. Finally, the regions are merged.
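The final merging step can be illustrated with a small C sketch. The Region statistics and the mean-difference criterion below are illustrative assumptions, not the exact rule used in this work:

```c
/* Sketch of region merging: two neighbouring regions are merged when
 * their mean intensities differ by less than a threshold. The (sum,
 * count) statistics stand in for whatever the watershed step produces. */
typedef struct { long sum; int count; } Region;

/* Integer mean intensity of a region (enough for the sketch). */
static int region_mean(const Region *r) { return (int)(r->sum / r->count); }

/* Merge b into a if their means differ by less than thr; returns 1 on merge. */
int merge_if_similar(Region *a, Region *b, int thr)
{
    int d = region_mean(a) - region_mean(b);
    if (d < 0) d = -d;
    if (d < thr) {
        a->sum += b->sum;      /* absorb b's statistics into a */
        a->count += b->count;
        b->sum = 0; b->count = 0;
        return 1;
    }
    return 0;
}

/* Demo: means 100 and 104 differ by 4 < 10, so the regions merge;
 * the merged mean is (1000 + 520) / 15 = 101. */
int merge_demo(void)
{
    Region a = { 1000, 10 }, b = { 520, 5 };
    if (!merge_if_similar(&a, &b, 10)) return -1;
    return region_mean(&a);
}
```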


Chapter 3

BLOCK DIAGRAM

INPUT IMAGE -> FPGA (SPARTAN 3E) -> SEGMENTATION -> OBJECT RECOGNITION -> UART -> OUTPUT IMAGE

Figure 3 – Block Schematic

3.1 BLOCK DIAGRAM DESCRIPTION

The entire block diagram can be divided into two segments: raw data input and data processing.

INPUTTING IMAGE

The image inputted is either in .tif or .jpg format. The image is loaded using the
MATLAB interface. The inputted image is either of 64x64, 128x128 or 256x256 pixel size.
When the inputted image is of varying size, a resizing mechanism is done to obtain a standard
input image resolution. The inputted image is of RGB colour format. Thus there are 3 planes
of colour schemes – Red, Green & Blue. Here we split the 3 planes into separate planes and
the intensity of each and every plane is considered separately. The intensity variations in each
and every pixel are converted into grayscale. The grayscale values range from 0 to 255,
where 0 represents the Black & 255 represents the White. Each pixel value thus ranging
between 0 and 255 is stored as a header file. Thus the inputted images are converted into raw
bit streams.
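The per-pixel conversion described above can be sketched in C. The luminance weights are the ITU-R BT.601 coefficients (the same weights MATLAB's rgb2gray uses); the integer scaling by 1000 is an assumption to keep the sketch integer-only, as it would be on the FPGA:

```c
/* Convert one RGB pixel to an 8-bit grayscale value:
 * gray = 0.299*R + 0.587*G + 0.114*B, in integer arithmetic. */
unsigned char rgb_to_gray(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)((299u * r + 587u * g + 114u * b) / 1000u);
}
```

Applied to every pixel of the three split planes, this yields the 0 to 255 values that are stored in the header file.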


DATA PROCESSING

The raw bit stream generated in MATLAB is transferred to the Spartan 3E FPGA. The algorithm code, developed in Xilinx XPS/IDE, is loaded into the Spartan 3E over the serial bus. The entire data processing is done on the Spartan 3E: both the thresholding and the region growing algorithm are performed there. Once the region of interest has been marked out by these algorithms, object recognition is performed to determine the nature of the object of concern. Once the target object has been identified, the region of interest is sent back to the system over the UART serial bus interface.


HARDWARE


Chapter 4
SPARTAN 3 FPGA TRAINER KIT

4.1 Block Diagram

Figure 4 – Spartan EDK Trainer Kit Block Diagram


The FPGA used in the kit is the Spartan 3E, a Xilinx product with 500K system gates. The Spartan 3E on the trainer kit comes in a QFP package with a temperature range of 0 to 85 degrees Celsius. The FPGA is generally available in two speed grades, high and standard. The kit has a 50 MHz clock generator whose output is fed to the DCM of the FPGA. The trainer kit provides an external SRAM, of size 256K x 16, for loading programs and other functions. Other features such as a 7-segment display, DIP switches, LEDs, and an LCD display are also provided on the kit. There are points to connect VCC, with a regulator to regulate the voltage; a 5 V adapter is generally used to power the kit. JTAG and RS232 connectors are provided for programming the Spartan 3E. Each component is discussed below.


4.2 FPGA SPARTAN3E XC3S500E

4.2.1 Introduction

The Spartan-3E family of Field-Programmable Gate Arrays (FPGAs) is specifically designed to meet the needs of high-volume, cost-sensitive consumer electronics applications. The five-member family offers densities ranging from 100,000 to 1.6 million system gates. The Spartan-3E family builds on the success of the earlier Spartan-3 family by increasing the amount of logic per I/O, significantly reducing the cost per logic cell. New features improve system performance and reduce the cost of configuration. These Spartan-3E enhancements, combined with advanced 90 nm process technology, deliver more functionality and bandwidth per dollar than was previously possible, setting new standards in the programmable logic industry. Because of their exceptionally low cost, Spartan-3E FPGAs are ideally suited to a wide range of consumer electronics applications, including broadband access, home networking, display/projection, and digital television equipment. The Spartan-3E family is also a superior alternative to mask-programmed ASICs: FPGAs avoid the high initial cost, the lengthy development cycles, and the inherent inflexibility of conventional ASICs. In addition, FPGA programmability permits design upgrades in the field with no hardware replacement necessary, which is impossible with ASICs.

4.2.2 Features
I/O CAPABILITIES of Spartan 3E
The Spartan-3E FPGA SelectIO interface supports many popular single-ended and differential standards.

Spartan-3E FPGAs support single-ended standards such as 3.3V low-voltage TTL (LVTTL); low-voltage CMOS (LVCMOS) at 3.3V, 2.5V, 1.8V, 1.5V, or 1.2V; 3V PCI at 33 MHz (and in some devices 66 MHz); HSTL I and III at 1.8V, commonly used in memory applications; and SSTL I at 1.8V and 2.5V, also common in memory applications. Spartan-3E FPGAs also support most low-voltage differential I/O standards: LVDS (low-voltage differential signalling), Bus LVDS, mini-LVDS, RSDS, differential HSTL (1.8V, Types I and III), differential SSTL (2.5V and 1.8V, Type I), and 2.5V LVPECL inputs.


4.2.3 Architectural Overview

Figure 5 FPGA Architecture

Configurable Logic Blocks

The Configurable Logic Blocks (CLBs) constitute the main logic resource for implementing synchronous as well as combinatorial circuits. Each CLB contains four slices, and each slice contains two Look-Up Tables (LUTs) to implement logic and two dedicated storage elements that can be used as flip-flops or latches. The LUTs can also be used as 16x1 memories (RAM16) or as 16-bit shift registers, and additional multiplexers and carry logic simplify wide logic and arithmetic functions. Every CLB is identical, and the Spartan-3E family CLB structure is identical to that of the Spartan-3 family.


Figure 6 - CLBs

The XC3S500E has a total of 1,164 CLBs, arranged in 46 rows and 34 columns.

Input/ Output Blocks

The Input/Output Block (IOB) provides a programmable, unidirectional or bidirectional interface between a package pin and the FPGA's internal logic. There are three main signal paths within the IOB: the output path, the input path, and the 3-state path. Each path has its own pair of storage elements that can act as either registers or latches. The IOB is similar to that of the Spartan-3 family, with the following differences:

A. Input-only blocks are added

The Spartan XC3S500E has a total of 232 I/O pins, 56 of which are input-only. Dedicated Inputs are IOBs that can be used only as inputs; a pin name designates a Dedicated Input if it starts with IP, for example IP or IP_Lxxx_x. Dedicated Inputs retain the full input functionality of the IOB, with a single exception for differential inputs (IP_Lxxx_x): for differential Dedicated Inputs, the on-chip differential termination is not available.

B. Programmable input delays are added to all blocks


Each IOB has a programmable delay block that optionally delays the input signal. As shown in the figure below, the signal path has a coarse delay element that can be bypassed. The input signal then feeds a 6-tap delay line. The coarse and tap delays vary. All six taps are available via a multiplexer for use as an asynchronous input directly into the FPGA fabric; in this way the delay is programmable in 12 steps. Three of the six taps are also available via a multiplexer to the D inputs of the synchronous storage elements, so the delay inserted in the path to the storage element can be varied in six steps. The first, coarse delay element is common to both asynchronous and synchronous paths, and must be either used or not used for both paths.

Figure 7 - Programmable Delay Element

The delay values are set in silicon once, at configuration time; they cannot be modified during device operation. The primary use of the input delay element is to adjust the input delay path to ensure that there is no hold-time requirement when using the input flip-flops with a global clock.

C. DDR flip-flops can be shared between adjacent IOBs


Double-Data-Rate (DDR) transmission is the technique of synchronizing signals to both the rising and falling edges of the clock signal. In the Spartan-3E, these DDR flip-flops can be shared by two adjacent IOBs.


IOBs Organization

The Spartan-3E architecture organizes IOBs into four I/O banks, as shown in the figure below. Each bank maintains separate VCCO and VREF supplies. The separate supplies allow each bank to set VCCO independently; similarly, VREF can be set for each bank.

Figure 8 - IOB Banks

Supply Voltages for the IOBs

The IOBs are powered by three supplies:

1. The VCCO supplies, one for each of the FPGA’s I/O banks, power the output drivers. The
voltage on the VCCO pins determines the voltage swing of the output signal.

2. VCCINT is the main power supply for the FPGA’s internal logic.

3. VCCAUX is an auxiliary source of power, used primarily to optimize the performance of various FPGA functions such as I/O switching.

Block RAM

Spartan-3E devices incorporate 4 to 36 dedicated block RAMs, organized as dual-port, configurable 18 Kbit blocks. Block RAM synchronously stores large amounts of data, while distributed RAM is better suited for buffering small amounts of data anywhere along signal paths. The XC3S500E has 20 block RAMs, arranged in two columns, for a total of 368,640 addressable bits.


Internal Structure of the Block RAM

The block RAM has a dual port structure. The two identical data ports called A and B permit
independent access to the common block RAM, which has a maximum capacity of 18,432
bits. Each port has its own dedicated set of data, control, and clock lines for synchronous read
and write operations.

There are four basic data paths, as shown in Figure below:

1. Write to and read from Port A

2. Write to and read from Port B

3. Data transfer from Port A to Port B

4. Data transfer from Port B to Port A

Figure 9 Block Ram Data Path
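The dual-port behaviour above can be shown with a small C model (a software sketch only; the 16-bit word width is just one of the block RAM's configurable aspect ratios, chosen here for illustration):

```c
#include <stdint.h>

#define BRAM_BITS  18432                     /* 18 Kbit block capacity   */
#define WORD_BITS  16
#define BRAM_WORDS (BRAM_BITS / WORD_BITS)   /* 1152 16-bit words        */

typedef struct { uint16_t mem[BRAM_WORDS]; } BlockRam;

/* Ports A and B use the same storage; in hardware their independence
 * comes from separate data, control, and clock lines, modelled here
 * simply as two independent callers of these functions. */
void bram_write(BlockRam *r, unsigned addr, uint16_t data)
{
    if (addr < BRAM_WORDS) r->mem[addr] = data;
}

uint16_t bram_read(const BlockRam *r, unsigned addr)
{
    return (addr < BRAM_WORDS) ? r->mem[addr] : 0;
}

/* Demo of data path 3: "port A" writes a word, "port B" reads it back. */
int bram_demo(void)
{
    static BlockRam r;
    bram_write(&r, 7, 0xBEEF);
    return bram_read(&r, 7) == 0xBEEF;
}
```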

Multiplier Blocks

The Spartan-3E devices provide 4 to 36 dedicated multiplier blocks per device. The
multipliers are located together with the block RAM in one or two columns depending on
device density. The multiplier blocks primarily perform two’s complement numerical
multiplication but can also perform some less obvious applications, such as simple data


storage and barrel shifting. Logic slices also implement efficient small multipliers and
thereby supplement the dedicated multipliers.
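What a dedicated multiplier block computes can be sketched in C: an 18x18 two's complement multiply with a 36-bit result. The sign-extension helper is only an artefact of modelling 18-bit operands in 32-bit variables; the hardware handles this implicitly:

```c
#include <stdint.h>

/* Sign-extend an 18-bit two's complement value held in an int32_t. */
static int32_t sext18(int32_t v)
{
    return (v & 0x20000) ? (v | ~0x3FFFF) : (v & 0x3FFFF);
}

/* 18x18 -> 36-bit signed multiply, as the dedicated multiplier performs. */
int64_t mult18x18(int32_t a, int32_t b)
{
    return (int64_t)sext18(a) * (int64_t)sext18(b);
}
```

For example, 0x3FFFF is the 18-bit encoding of -1, so multiplying it by 2 yields -2.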

Digital Clock Manager Blocks

Digital Clock Manager (DCM) blocks provide self-calibrating, fully digital solutions for distributing, delaying, multiplying, dividing, and phase-shifting clock signals. The DCM supports three major functions:

A. Clock-skew Elimination:

Clock skew within a system occurs due to the different arrival times of a clock signal
at different points on the die, typically caused by the clock signal distribution network. Clock
skew increases setup and hold time requirements and increases clock-to-out times, all of
which are undesirable in high frequency applications. The DCM eliminates clock skew by
phase-aligning the output clock signal that it generates with the incoming clock signal. This
mechanism effectively cancels out the clock distribution delays.

B. Frequency Synthesis:

The DCM can generate a wide range of different output clock frequencies derived from
the incoming clock signal. This is accomplished by either multiplying or dividing the
frequency of the input clock signal by any of several different factors.
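This relationship, f_out = f_in x M / D, can be sketched in C. The multiplier and divisor ranges used below (M from 2 to 32, D from 1 to 32) are the limits commonly quoted for Spartan-3E DCM frequency synthesis; treat them as assumptions to be checked against the data sheet for a specific device:

```c
/* Synthesized DCM output frequency in Hz, or 0 for an invalid setting. */
unsigned long dcm_fx(unsigned long fin_hz, unsigned m, unsigned d)
{
    if (m < 2 || m > 32 || d < 1 || d > 32) return 0; /* assumed ranges */
    return fin_hz * m / d;
}
```

For instance, a 50 MHz input with M = 2 and D = 5 synthesizes a 20 MHz clock.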

C. Phase Shifting:

The DCM provides the ability to shift the phase of all of its output clock signals with respect to the input clock signal. It also has provision for fixed phase shifts of 90 and 180 degrees.


Figure 10 - DCM Functional Blocks and Associated Signals


Chapter 5

JTAG

JTAG is an acronym that stands for "Joint Test Action Group". The group was a consortium of vendors, with key members including TI and Intel, focused on problems found when testing electronic circuit boards. JTAG is the informal name often used for the standard that resulted from the group's work; formally, the standard is IEEE 1149.1 Boundary-Scan. The term JTAG describes test and debug interfaces based on the specifications brought about by this group. JTAG emulators leverage extended registers and boundary-scan instructions put on-chip by the processor manufacturer. These extra features allow the JTAG connector to be used to control a microprocessor: run, stop, single-step, and read/write memory and registers. JTAG Boundary-Scan test tools allow hardware-level debugging, programming, and testing of circuit boards.

The main problem the JTAG group set out to solve was that traditional In-Circuit Test (ICT) was no longer as effective as it once was for board test. This change was due to the rise of surface-mount devices such as Ball Grid Array (BGA) packages; the ever-decreasing size of modern electronic circuits and the rise of multi-layer printed circuit boards were also key drivers for JTAG. These new devices have their pins (called balls) on the bottom of the chip, so once soldered to a circuit board the pins cannot be accessed, being covered by the chip itself. As many modern ICs have hundreds of pins, it quickly became impractical to add test points for all of them. Instead, the pins that comprise the TAP interface control access to a long chain of I/O cells at each pin of a device. By clocking data patterns in and reading values out, it is possible to set and determine the state of each pin. By extension, since other non-JTAG devices may be connected to these pins, those devices can often be tested as well.
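The shifting described above can be modelled in C as one long shift register between TDI and TDO (a toy model with an arbitrary 8-cell chain; a real chain contains one boundary cell per pin per device):

```c
#define CHAIN_LEN 8

typedef struct { unsigned char cell[CHAIN_LEN]; } ScanChain;

/* One TCK cycle: a bit enters from TDI, every cell passes its bit
 * along, and the last cell's old value appears on TDO (returned). */
int scan_clock(ScanChain *c, int tdi)
{
    int tdo = c->cell[CHAIN_LEN - 1];
    for (int i = CHAIN_LEN - 1; i > 0; i--)
        c->cell[i] = c->cell[i - 1];
    c->cell[0] = (unsigned char)(tdi & 1);
    return tdo;
}

/* Demo: clock 8 ones into an empty chain, then one more clock pushes
 * the first 1 out on TDO. */
int scan_demo(void)
{
    ScanChain c = { { 0 } };
    for (int i = 0; i < CHAIN_LEN; i++)
        scan_clock(&c, 1);
    return scan_clock(&c, 0);
}
```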

5.1 Standard JTAG Connector Signals

JTAG utilizes a standard set of signals to communicate with the IC or ICs under test.
These signals taken together are known as the TAP or Test Access Port. This standard


configuration is typically used for FPGA (such as Xilinx) JTAG programming adapters. The
various signals are:

1. TDI (Test Data In)

This is the data from the JTAG tool into the device under test (DUT).

2. TDO (Test Data Out)

This is the data out of the last IC in the chain on the DUT, back to the test tool.

3. TCK (Test Clock)

This is the JTAG system clock from the tool into the device.

4. TMS (Test Mode Select)

This signal from the tool to the device manipulates the internal state machine that controls Boundary-Scan operation.

5. TRST (Test Reset)

This is an optional signal, often used with JTAG emulation or when multiple chains must be tied together.

5.1.1 Boundary Scan

Besides downloading data to the chip, an important function JTAG can perform is boundary scan. It can be used in both chip-level and board-level testing.

A. Chip Level Boundary Scan Testing

Boundary Scan allows the following types of chip-level testing: presence of the device (is the device on the board, did it get soldered on); orientation of the device (is it oriented correctly, rotated, shifted, or the wrong package); whether it is bonded to the board (is it soldered properly, are there issues with the solder joint, is the internal pin-to-amplifier interconnect damaged); and reading the device's ID register (to get chip revision information).

Testing at the board level adds inter-device and board-level tests, such as the ability to verify the presence and integrity of the entire scan chain


and each device on it; device interconnect tests; and open, short, and stuck-at (0/1) failure detection. When bringing up and debugging new hardware, Boundary Scan comes to the rescue in several important ways:

B. Testing Partially Populated hardware

When you get your initial boards, not all devices may be fitted. Since only good power, ground, and at least one part on the JTAG chain are needed to begin testing, you should be able to identify the part on the chain and then test for opens and shorts on any board area touched by that device.

C. Initializing and Programming Devices

You may also be able to do initial device programming. For example, if the device on the chain is a microprocessor or DSP, you will most likely have access to RAM and FLASH memory via the address, data, and control buses. Boundary scan also allows you to identify, program, and test programmable devices such as FPGAs and CPLDs.

D. Finding Assembly Defects

Prototypes are often rushed through assembly in order to meet engineering deadlines. As a result, assembly and manufacturing problems will exist. Boundary Scan is perfect for testing for common problems such as unfitted or badly fitted devices, solder issues (such as cold joints), as well as open, short, stuck-at, and device functional failures.

E. Improving Debug Productivity

Before initial firmware or diagnostics have been written for your new hardware, Boundary Scan can be used to rule out bad hardware. You can then focus your debug efforts on the new release of firmware, knowing full well that the hardware is good.


Chapter 6

RS 232

In telecommunications, RS-232 is the traditional name for a series of standards for serial binary single-ended data and control signals connecting a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. In RS-232, user data is sent as a time series of bits. Both synchronous and asynchronous transmission are supported by the standard. In addition to the data circuits, the standard defines a number of control circuits used to manage the connection between the DTE and DCE. Each data or control circuit operates in only one direction, that is, signalling from a DTE to the attached DCE or the reverse. Since transmit data and receive data are separate circuits, the interface can operate in full duplex, supporting concurrent data flow in both directions. The standard does not define character framing within the data stream, or character encoding.

6.1 Voltage Levels

The RS-232 standard defines the voltage levels that correspond to logical one and logical zero for the data transmission and control signal lines. Valid signals are plus or minus 3 to 15 volts; the range within ±3 V of zero is not a valid RS-232 level. The standard specifies a maximum open-circuit voltage of 25 volts; signal levels of ±5 V, ±10 V, ±12 V, and ±15 V are all commonly seen, depending on the power supplies available within a device. RS-232 drivers and receivers must be able to withstand an indefinite short circuit to ground or to any voltage level up to ±25 volts.
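On a data line these ranges map to logic levels with negative voltage as logic 1 ("mark") and positive voltage as logic 0 ("space"). A small C sketch of that mapping (millivolt units are an assumption to keep it integer-only):

```c
/* Classify an RS-232 data-line voltage, given in millivolts.
 * Returns 1 (mark / logic 1), 0 (space / logic 0), or -1 if the
 * voltage is in the undefined band or out of range. */
int rs232_data_level(int mv)
{
    if (mv >= 3000 && mv <= 15000)   return 0; /* space = logic 0 */
    if (mv <= -3000 && mv >= -15000) return 1; /* mark  = logic 1 */
    return -1;                                 /* undefined region */
}
```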


6.2 RS 232 DB9 connector PINOUT

Pin number   Name

1            CD  - Carrier Detect
2            RXD - Receive Data
3            TXD - Transmit Data
4            DTR - Data Terminal Ready
5            GND - Signal Ground
6            DSR - Data Set Ready
7            RTS - Request To Send
8            CTS - Clear To Send
9            RI  - Ring Indicator
Shell        Shield

Table 1 – Pins of RS232


SOFTWARE


Chapter 7
SOFTWARE DEVELOPMENT TOOLS

The software part of our project involves two sections. They are:

7.1 MATLAB
The algorithm of the MATLAB program is:
I. The picture is given as input to MATLAB
II. The input image is resized to the required size
III. The input image is converted to grayscale
IV. The image is then converted to a bit stream

The output of MATLAB is the bit stream of the input image, which contains all the information about the pixel values. This is then given to the Spartan trainer kit using JTAG.

7.2 System C (Xilinx XPS)

The programming for image segmentation is done in System C (Xilinx XPS). The code combines several algorithms, such as region growing, a histogram approach, and edge detection. The flowchart of the program is given below.


7.3 FLOWCHART

The program flow: the input image is divided into different parts; a seed point is estimated; each seed is compared with its neighbouring pixels; if the difference is greater than the threshold, the pixel is not added and comparison continues; otherwise, the pixels are grouped together.

Figure 11 – Algorithm Flow Chart


7.4 ALGORITHM

I. The image is divided into segments for easy analysis.

II. The median value of each segment is calculated.

III. The edge of the image is detected by comparison with the default image, and all pixel values outside the image are made zero.

IV. Using the threshold values, pixels with similar values are grouped within the image.

V. Pixels above the threshold are set to 255 and those below to zero.
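The thresholding of steps IV and V can be sketched in C (255, the 8-bit maximum, is used as the "white" value; the 64x64 size matches the resized input described earlier):

```c
#define N 64

/* Force every pixel at or above the threshold to 255 (full white)
 * and every pixel below it to 0, in place. */
void apply_threshold(unsigned char img[N][N], unsigned char thr)
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            img[y][x] = (img[y][x] >= thr) ? 255 : 0;
}

/* Demo: with threshold 128, a pixel of 200 becomes 255 and 50 becomes 0. */
int threshold_demo(void)
{
    static unsigned char img[N][N];
    img[0][0] = 200;
    img[0][1] = 50;
    apply_threshold(img, 128);
    return img[0][0] == 255 && img[0][1] == 0;
}
```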

7.5 MATLAB CODE

function varargout = segment(varargin)

gui_Singleton = 1;

gui_State = struct('gui_Name', mfilename, ...

'gui_Singleton', gui_Singleton, ...

'gui_OpeningFcn', @segment_OpeningFcn, ...

'gui_OutputFcn', @segment_OutputFcn, ...

'gui_LayoutFcn', [] , ...

'gui_Callback', []);

if nargin && ischar(varargin{1})

gui_State.gui_Callback = str2func(varargin{1});

end

if nargout

[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});

else

gui_mainfcn(gui_State, varargin{:});

end


% End initialization code

% --- Executes just before segment is made visible.

function segment_OpeningFcn(hObject, eventdata, handles, varargin)

handles.output = hObject;

guidata(hObject, handles);

function varargout = segment_OutputFcn(hObject, eventdata, handles)

varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.

function pushbutton1_Callback(hObject, eventdata, handles)

[filename, pathname] = uigetfile('*.avi', 'Pick an video');

if isequal(filename,0) || isequal(pathname,0)

warndlg('User pressed cancel')

else

a=aviread(filename);

axes(handles.axes1);

movie(a);

handles.filename=filename;

guidata(hObject, handles);

end

% --- Executes on button press in pushbutton2.

function pushbutton2_Callback(hObject, eventdata, handles)

filename=handles.filename;

str1='frame';

str2='.bmp';


%%%%% frame separation %%%%%
% q=2; % quantization value

file=aviinfo(filename); % get information about the video file

frm_cnt=file.NumFrames; % number of frames in the video file

handles.frm_cnt=frm_cnt;

h = waitbar(0,'Please wait...');

for i=1:frm_cnt;

frm(i)=aviread(filename,i); % read the Video file

frm_name=frame2im(frm(i)); % Convert Frame to image file

filename1=strcat(num2str(i),str2);

imwrite(frm_name,filename1); % Write image file

waitbar(i/frm_cnt,h)

end

close(h);

guidata(hObject, handles);

warndlg('process completed');

% --- Executes on button press in pushbutton3.

function pushbutton3_Callback(hObject, eventdata, handles)

[filename, pathname] = uigetfile('*.*', 'Pick an Image');

if isequal(filename,0) || isequal(pathname,0)

warndlg('User pressed cancel')

else

filename=strcat(pathname,filename);

a=imread(filename);

b=imresize(a,[64 64]);

imshow(b);

handles.a=b;


% imwrite(a,'test.bmp');

% Update handles structure

guidata(hObject, handles);

end

% --- Executes on button press in pushbutton4.

function pushbutton4_Callback(hObject, eventdata, handles)

res= handles.a;

[r c p]=size(res);

if (p==3)

res=rgb2gray(res);

imwrite(res,'test.bmp');

else

imwrite(res,'test.bmp');

end

[r c]=size(res);

res=double(res);

as='unsigned char Input[64][64]=';

fid=fopen('Image2.h','wt');

fprintf(fid,'%c',as);

fprintf(fid,'\n%c\n','{');

% as=8;

for i=1:r; % write each row of the image

te=res(i,:);

fprintf(fid,'%c','{');

fprintf(fid,'%d,',te);

fprintf(fid,'%c','}');

fprintf(fid,'%c\n',',');


end;

fprintf(fid,'%c %c','}',';');

% fprintf(fid,'%c',';');

fclose(fid);

helpdlg('Files created Successfully');

% --- Executes on button press in pushbutton5.

function pushbutton5_Callback(hObject, eventdata, handles)

delete('image3.h');

delete('test.bmp');

warndlg('Files Deleted Successfully');

% --- Executes on button press in pushbutton6.

function pushbutton6_Callback(hObject, eventdata, handles)

close all;

exit;

7.6 SYSTEM C CODE

#include <stdio.h>
#include <math.h>
#include "Image4.h"
#include "Image3.h"

int INPUT1[64][64];
int diff1[64][64];
int INPUT2[64][64];
int SEG1[64][64];
int Sliding_windowr[3][3];
int FIRSTFILTER[64][64];
int EDGEIMAGE[64][64];
int n=8,t;
int Cen[8];
int Value_Check,count;
int SR1,SR2,SR3,SR4,SR5,SR6,SR7,SR8,SR9;
int TH1=255;
int TH2=0;
int CHECK1;


int CHECK2;
int CHECK3;
int CHECK4;
int CHECK5;
int CHECK6;
int CHECK7;
int CHECK8;

float bbr;
float median1();

float median1(int SR1,int SR2,int SR3,int SR4,int SR5,int SR6,int SR7,int SR8)
{
int i,j;
float median;
int arr[8];
arr[0]=SR1;
arr[1]=SR2;
arr[2]=SR3;
arr[3]=SR4;
arr[4]=SR5;
arr[5]=SR6;
arr[6]=SR7;
arr[7]=SR8;

/* Bubble sort the eight neighbour values into descending order. */
for (i=0;i<=n-2;i++)
{
for(j=0;j<=n-2-i;j++)
{
if (arr[j]<=arr[j+1])
{
t=arr[j];
arr[j]=arr[j+1];
arr[j+1]=t;
}
}//for
}// for

if (n%2==0)
{
/* Even count: average of the two middle elements (0-based indices n/2-1 and n/2). */
median=(arr[n/2-1]+arr[n/2])/2.0;
}


else
{
median=arr[n/2]; /* odd count: middle element (0-based) */
}
return (median);

}//end of median

int main()
{
int i,j,count,k,l;

/* Copy the two input frames from the generated header files. */
for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
INPUT1[i][j]=InputImage3[i][j];
// printf("%d \n",INPUT1[i][j]);
}
}

for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
INPUT2[i][j]=InputImage4[i][j];
printf("%d \n",INPUT2[i][j]);
}
}

/* Pixel-wise difference between the two frames. */
for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
diff1[i][j]= (INPUT1[i][j]- INPUT2[i][j]);
printf("%d\n",diff1[i][j]);
}
}


for(i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
Value_Check=diff1[i][j];

if (Value_Check>20)
{
SEG1[i][j]=255;

}
else
{
SEG1[i][j]=0;
}

printf("%d \n",SEG1[i][j]);
}
}

/* 3x3 median filtering of the thresholded image. The image is 64x64, so the
   valid interior indices are 1..62 and border pixels are left unfiltered.
   (The original boundary tests against 127 assumed a 128x128 image.) */
for(i=1;i<63;i++)
{
for (j=1;j<63;j++)
{
Sliding_windowr[0][0]=SEG1[i-1][j-1];
Sliding_windowr[0][1]=SEG1[i-1][j];
Sliding_windowr[0][2]=SEG1[i-1][j+1];
Sliding_windowr[1][0]=SEG1[i][j-1];
Sliding_windowr[1][1]=SEG1[i][j];
Sliding_windowr[1][2]=SEG1[i][j+1];
Sliding_windowr[2][0]=SEG1[i+1][j-1];
Sliding_windowr[2][1]=SEG1[i+1][j];
Sliding_windowr[2][2]=SEG1[i+1][j+1];

/* The eight neighbours (centre pixel excluded) go to the median. */
SR1=Sliding_windowr[0][0];
SR2=Sliding_windowr[0][1];
SR3=Sliding_windowr[0][2];
SR4=Sliding_windowr[1][0];
SR5=Sliding_windowr[1][2];
SR6=Sliding_windowr[2][0];
SR7=Sliding_windowr[2][1];
SR8=Sliding_windowr[2][2];

bbr=median1(SR1,SR2,SR3,SR4,SR5,SR6,SR7,SR8);
FIRSTFILTER[i][j]=bbr;
printf("%d\n",FIRSTFILTER[i][j]);
}
}//end of median filter

///////////////////////EDGE DETECTION

/* Start from an all-white edge image. */
for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
EDGEIMAGE[j][i]=255;
}
}

////////////////////////////DETECTING ONES/////////////////

/* A pixel whose in-bounds neighbours are all 255 (TH1) lies inside a white
   region, so it is cleared; only the boundary of the region is kept. This
   collapses the original corner/edge/interior special cases into one loop. */
for(i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
int all_white = 1;
for (k=i-1;k<=i+1;k++)
{
for (l=j-1;l<=j+1;l++)
{
if (k<0 || k>63 || l<0 || l>63)
continue; /* neighbour outside the image */
if (k==i && l==j)
continue; /* skip the centre pixel */
if (FIRSTFILTER[k][l]!=TH1)
all_white = 0;
}
}
if (all_white)
EDGEIMAGE[j][i]=0;
}
}


////////////////////////////////////DETECTING ZEROS

/* Likewise, a pixel whose in-bounds neighbours are all 0 (TH2) lies inside a
   black region and is cleared in the edge image. */
for(i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
int all_black = 1;
for (k=i-1;k<=i+1;k++)
{
for (l=j-1;l<=j+1;l++)
{
if (k<0 || k>63 || l<0 || l>63)
continue;
if (k==i && l==j)
continue;
if (FIRSTFILTER[k][l]!=TH2)
all_black = 0;
}
}
if (all_black)
EDGEIMAGE[j][i]=0;
}
}
//////////////////////////////////////
//printf("output");

/* Print the edge image. */
for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
printf("%d \n",EDGEIMAGE[i][j]);
}
}

/* Count the white pixels left after filtering. */
count=0;
for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
if (FIRSTFILTER[i][j]==255)
{
count=count+1;
}
}
}
// printf("%d \n",count);

return 0;
}


Chapter 8

APPLICATIONS

In computer vision, segmentation refers to the process of partitioning a digital
image into multiple segments (sets of pixels, also known as superpixels). The goal of
segmentation is to simplify and/or change the representation of an image into something that
is more meaningful and easier to analyze. Image segmentation is typically used to locate
objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is
the process of assigning a label to every pixel in an image such that pixels with the same label
share certain visual characteristics.

The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image (see edge detection). Each of the pixels
in a region is similar with respect to some characteristic or computed property, such
as color, intensity, or texture. Adjacent regions are significantly different with respect to the
same characteristic(s). When applied to a stack of images, typical in Medical imaging, the
resulting contours after image segmentation can be used to create 3D reconstructions with the
help of interpolation algorithms like Marching cubes.

8.1 AUTOMATIC SURVEILLANCE – Project Application

Surveillance is the monitoring of the behavior, activities, or other changing
information, usually of people and often in a surreptitious manner.

The word surveillance may be applied to observation from a distance by means of
electronic equipment (such as CCTV cameras), or interception of electronically transmitted
information (such as Internet traffic or phone calls). It may also refer to simple, relatively no-
or low-technology methods such as human intelligence agents and postal interception.

Surveillance is very useful to governments and law enforcement to maintain social
control, recognize and monitor threats, and prevent/investigate criminal activity. With the
advent of programs such as the Total Information Awareness program and ADVISE,
technologies such as high speed surveillance computers and biometrics software, and laws


such as the Communications Assistance For Law Enforcement Act, governments now
possess an unprecedented ability to monitor the activities of their subjects.

Surveillance cameras are video cameras used for the purpose of observing an area.
They are often connected to a recording device, IP network, and/or watched by a security
guard/law enforcement officer. Cameras and recording equipment used to be relatively
expensive and required human personnel to monitor camera footage. Now with cheaper
production techniques, it is simple and inexpensive enough to be used in home security
systems, and for everyday surveillance. Analysis of footage is made easier by automated
software that organizes digital video footage into a searchable database, and by automated
video analysis software (such as VIRAT and HumanID). The amount of footage is also
drastically reduced by motion sensors which only record when motion is detected.

Image segmentation can be effectively employed in surveillance applications for
detecting human presence by segmenting out the human figure. Figure 12 shows the frame
without a human and Figure 13 shows the frame with human intervention. The presence of a
human being in the surveillance area can thus be easily identified by image segmentation.

Figure 12 - Camera Image 1


Figure 13 - Camera Image 2

8.2 Other Fields of Applications

Some of the other practical applications of image segmentation are:

8.2.1 Medical Imaging

 Locate tumors and other pathologies
 Measure tissue volumes
 Computer-guided surgery
 Diagnosis
 Treatment planning
 Study of anatomical structure

Medical imaging is the technique and process used to create images of the human
body (or parts and function thereof) for clinical purposes (medical procedures seeking to
reveal, diagnose or examine disease) or medical science (including the study of
normal anatomy and physiology). Although imaging of removed organs and tissues can be


performed for medical reasons, such procedures are not usually referred to as medical
imaging, but rather are a part of pathology.

As a discipline and in its widest sense, it is part of biological imaging and
incorporates radiology (in the wider sense), nuclear medicine, investigative radiological
sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for
human pathological investigations).

Measurement and recording techniques which are not primarily designed to
produce images, such as electroencephalography (EEG), magnetoencephalography (MEG),
electrocardiography (EKG) and others, but which produce data susceptible to be represented
as maps (i.e. containing positional information), can be seen as forms of medical imaging.

Up until 2010, 5 billion medical imaging studies had been conducted
worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total
ionizing radiation exposure in the United States.

In the clinical context, "invisible light" medical imaging is generally equated
to radiology or "clinical imaging" and the medical practitioner responsible for interpreting
(and sometimes acquiring) the image is a radiologist. "Visible light" medical imaging
involves digital video or still pictures that can be seen without special equipment.
Dermatology and wound care are two modalities that utilize visible light imagery.
Diagnostic radiography designates the technical aspects of medical imaging and in particular
the acquisition of medical images. The radiographer or radiologic technologist is usually
responsible for acquiring medical images of diagnostic quality, although some radiological
interventions are performed by radiologists. While radiology is an evaluation of anatomy,
nuclear medicine provides functional assessment.

As a field of scientific investigation, medical imaging constitutes a sub-discipline
of biomedical engineering, medical physics or medicine depending on the context: Research
and development in the area of instrumentation, image acquisition (e.g. radiography),
modelling and quantification are usually the preserve of biomedical engineering, medical
physics and computer science; Research into the application and interpretation of medical
images is usually the preserve of radiology and the medical sub-discipline relevant to
medical condition or area of medical science (neuroscience, cardiology,
psychiatry, psychology, etc.) under investigation. Many of the techniques developed for
medical imaging also have scientific and industrial applications.


Medical imaging is often perceived to designate the set of techniques that
noninvasively produce images of the internal aspect of the body. In this restricted sense,
medical imaging can be seen as the solution of mathematical inverse problems. This means
that cause (the properties of living tissue) is inferred from effect (the observed signal). In the
case of ultrasonography the probe consists of ultrasonic pressure waves and echoes inside the
tissue show the internal structure. In the case of projection radiography, the probe is X-ray
radiation which is absorbed at different rates in different tissue types such as bone,
muscle and fat.

The term non-invasive reflects the fact that these medical imaging
modalities do not penetrate the skin physically. But on the electromagnetic and radiation
level, they are quite invasive. From the high energy photons in X-Ray Computed
Tomography, to the 2+ Tesla coils of an MRI device, these modalities alter the physical and
chemical environment of the body in order to obtain data.

Figure 14 - A brain MRI Representation

A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic
resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to
polarize and excite hydrogen nuclei (single proton) in water molecules in human tissue,
producing a detectable signal which is spatially encoded, resulting in images of the body. The
MRI machine emits an RF (radio frequency) pulse that specifically affects only hydrogen.
The system sends the pulse to the area of the body to be examined. The pulse makes the
protons in that area absorb the energy needed to make them spin in a different direction. This

Dept. of ECE, VJCET Page 57


Colour Image Segmentation Using FPGA

is the “resonance” part of MRI. The RF pulse makes them (only the one or two extra
unmatched protons per million) spin at a specific frequency, in a specific direction. The
particular frequency of resonance is called the Larmor frequency and is calculated based on
the particular tissue being imaged and the strength of the main magnetic field. MRI uses
three electromagnetic fields: a very strong (on the order of units of Teslas) static magnetic
field to polarize the hydrogen nuclei, called the static field; a weaker time-varying (on the
order of 1 kHz) field(s) for spatial encoding, called the gradient field(s); and a weak
radio (RF) field for manipulation of the hydrogen nuclei to produce measurable signals,
collected through an RF antenna.

Like CT, MRI traditionally creates a two dimensional image of a thin "slice" of the
body and is therefore considered a tomographic imaging technique. Modern MRI instruments
are capable of producing images in the form of 3D blocks, which may be considered a
generalization of the single-slice, tomographic, concept. Unlike CT, MRI does not involve
the use of ionizing radiation and is therefore not associated with the same health hazards. For
example, because MRI has only been in use since the early 1980s, there are no known
long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety'
in MRI) and therefore there is no limit to the number of scans to which an individual can be
subjected, in contrast with X-ray and CT. However, there are well-identified health risks
associated with tissue heating from exposure to the RF field and the presence of implanted
devices in the body, such as pace makers. These risks are strictly controlled as part of the
design of the instrument and the scanning protocols used.

Because CT and MRI are sensitive to different tissue properties, the appearance
of the images obtained with the two techniques differs markedly. In CT, X-rays must be
blocked by some form of dense tissue to create an image, so the image quality when looking
at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the
proton of the hydrogen atom remains the most widely used, especially in the clinical setting,
because it is so ubiquitous and returns a large signal. This nucleus, present in water
molecules, allows the excellent soft-tissue contrast achievable with MRI.

8.2.2 Face Recognition

A facial recognition system is a computer application for automatically identifying or
verifying a person from a digital image or a video frame from a video source. One of the
ways to do this is by comparing selected facial features from the image and a facial database.


It is typically used in security systems and can be compared to other biometrics such
as fingerprint or eye iris recognition systems.

Some facial recognition algorithms identify faces by extracting landmarks, or
features, from an image of the subject's face. For example, an algorithm may analyze the
relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are
then used to search for other images with matching features. Other algorithms normalize a
gallery of face images and then compress the face data, only saving the data in the image that
is useful for face detection. A probe image is then compared with the face data. One of the
earliest successful systems is based on template matching techniques applied to a set of
salient facial features, providing a sort of compressed face representation.

8.2.3 Fingerprint Recognition

Fingerprint recognition or fingerprint authentication refers to the automated method of
verifying a match between two human fingerprints. Fingerprints are one of many forms
of biometrics used to identify individuals and verify their identity. There are two major
classes of algorithms (minutia and pattern) and four sensor designs (optical, ultrasonic,
passive capacitance, and active capacitance).

Optical fingerprint imaging involves capturing a digital image of the print
using visible light. This type of sensor is, in essence, a specialized digital camera. The top
layer of the sensor, where the finger is placed, is known as the touch surface. Beneath this
layer is a light-emitting phosphor layer which illuminates the surface of the finger. The light
reflected from the finger passes through the phosphor layer to an array of solid state pixels
(a charge-coupled device) which captures a visual image of the fingerprint. A scratched or
dirty touch surface can cause a bad image of the fingerprint. A disadvantage of this type of
sensor is the fact that the imaging capabilities are affected by the quality of skin on the finger.
For instance, a dirty or marked finger is difficult to image properly. Also, it is possible for an
individual to erode the outer layer of skin on the fingertips to the point where the fingerprint
is no longer visible. It can also be easily fooled by an image of a fingerprint if not coupled
with a "live finger" detector. However, unlike capacitive sensors, this sensor technology is
not susceptible to electrostatic discharge damage. 


Pattern based algorithms compare the basic fingerprint patterns (arch, whorl, and
loop) between a previously stored template and a candidate fingerprint. This requires that the
images be aligned in the same orientation. To do this, the algorithm finds a central point in
the fingerprint image and centers on that. In a pattern-based algorithm, the template contains
the type, size, and orientation of patterns within the aligned fingerprint image. The candidate
fingerprint image is graphically compared with the template to determine the degree to which
they match.

8.2.4 Machine Vision

Machine vision (MV) is a branch of engineering that uses computer vision in the
context of manufacturing. While the scope of MV is broad and a comprehensive definition is
difficult to distil, a generally accepted definition of machine vision is "... the analysis of
images to extract data for controlling a process or activity." Put another way, MV processes
are targeted at "recognizing the actual objects in an image and assigning properties to those
objects, understanding what they mean." The main categories into which MV applications
fall are quality assurance, sorting, material handling, robot guidance, and calibration.

As of 2006, there was little standardization in the processes used in MV. Nonetheless,
the first step in the MV process is acquisition of an image, typically using cameras, lenses,
and lighting that has been designed to provide the differentiation required by subsequent
processing. MV software packages then employ various digital image processing techniques
to allow the hardware to recognize what it is looking at.

Techniques used in MV include: thresholding (converting an image with gray tones to black
and white), segmentation, blob extraction, pattern recognition, barcode reading, optical
character recognition, gauging (measuring object dimensions), edge detection, and template
matching (finding, matching, and/or counting specific patterns).


Chapter 9

MERITS AND DEMERITS

9.1 ADVANTAGES

1. Higher accuracy due to the combination of different algorithms.

2. Since the hardware is an FPGA, the software part can be updated with ease as
technology advances.

3. Low cost, as a Spartan FPGA has been used.

9.2 LIMITATIONS

1. Higher processing time.


2. To implement image processing in all 3 colour planes, a higher-end FPGA family
such as the Virtex series is needed.
3. As the image size increases, the programming becomes more complex and lengthy.


Chapter 10

SCOPE

10.1 FUTURE EXPANSION

1. Implementation of the algorithm in 3 planes.


2. To develop a real-time surveillance system.

10.2 CONCLUSION

The colour image segmentation algorithm was implemented successfully on the FPGA
Spartan 3E kit. The project was implemented section by section, and the desired output of
each section was verified.

We are happy to present our project, colour image segmentation using FPGA,
completed successfully. With the successful completion of our project, we were able to
broaden the horizon of our knowledge.


Chapter 11
BIBLIOGRAPHY

 [1] Jun Tung “Colour Image Segmentation based on Region Growing Approach”
[Xi'an Shiyou University]

 [2] Yining Deng, B. S. Manjunath and Hyundoo Shin “Colour Image Segmentation”
[University of California]

 [3] Olivier Faugeras “Image Segmentation – A Perspective”

 [4] R. Nicole, “Study on the MATLAB segmentation image segmentation,” J. Name
Stand. Abbrev., in press.

 [5] Bergmann, E.V., B.K. Walker, and B.K. Levy, The Research on Stereo Matching
Algorithm based on Region-growth. Journal of Guidance, Control, and Dynamics,
1987. 10(5): p. 483-491.

 [6] Bergmann, E. and J. Dzielski, computer vision and image understanding
Dynamics, 1990. 13(1): p. 99-103.

 [7] Tanygin, S. image dense stereo matching by technique of region growing,. Journal
of Guidance, Control, and Dynamics, 1997.20(4): p. 625-632.

 [8] Lee, A.Y. and J.A. Wertz, Harris operator is used to improve the exact position of
point feature, 2002. 39(1): p. 153-155

 [9] Palimaka, J. and B.V. Burlton, fast computation of matching value for binocular
stereo vision. 1992: Hilton Head Island. p. 21-26.

 [10] Peck, M.A. Cell Image Segmentation of Gastric Cancer Based on
Region-Growing and Watersheds. 2002. Quebec City, Que.: Univelt Inc.

 www.xilinx.com
 www.edaboard.com
 www.cs.toronto.edu
 www.mathworks.com
