Machine Vision


Memorial University of Newfoundland

Faculty of Engineering & Applied Science

Engineering 7854
Industrial Machine Vision

INTRODUCTION TO MACHINE VISION


Prof. Nick Krouglicof

LECTURE OUTLINE
Elements of a machine vision system
Lens-camera model; 2D versus 3D machine vision
Image segmentation as pixel classification:
  Thresholding
  Connected component labeling
  Chain and crack coding for boundary representations
  Contour tracking / border following
Object recognition:
  Blob analysis, generalized moments, compactness
  Evaluation of form parameters from chain and crack codes
Industrial application
10/19/2013 Introduction to Machine Vision

MACHINE VISION SYSTEM


LENS-CAMERA MODEL


HOW CAN WE RECOVER THE DEPTH INFORMATION?


Stereoscopic approach: identify the same point in two different views of the object and apply triangulation.
Employ structured lighting.
If the form (i.e., size) of the object is known, its position and orientation can be determined from a single perspective view.
Employ an additional range sensor (ultrasonic or optical).

3D MACHINE VISION SYSTEM


XY Table

Laser Projector

Digital Camera

Field of View

Plane of Laser Light


P(x,y,z)

Granite Surface Plate


2D MACHINE VISION SYSTEMS


2D machine vision deals with image analysis. The goal of this analysis is to generate a high-level description of the input image or scene that can be used, for example, to:
  Identify objects in the image (e.g., character recognition)
  Determine the position and orientation of objects in the image (e.g., robot assembly)
  Inspect the objects in the image (e.g., PCB inspection)

In all of these examples, the description refers to specific objects or regions in the image. To generate the description, it is first necessary to segment the image into these regions.

IMAGE SEGMENTATION
How many objects are there in the image below? Assuming the answer is 4, what exactly defines an object?

8 BIT GRAYSCALE IMAGE


[Figure: zoomed 20 x 20 block of 8-bit gray values; a dark streak (values roughly 47-180) crosses a bright background (values roughly 188-197).]

8 BIT GRAYSCALE IMAGE

[Figure: the same gray levels rendered as a surface plot; vertical axis 0-200.]

GRAY LEVEL THRESHOLDING


Many images consist of two regions that occupy different gray-level ranges; such images are characterized by a bimodal image histogram. An image histogram is a function h defined on the set of gray levels in a given image: h(k) is the number of pixels in the image having intensity k.

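As a minimal sketch of these two operations on an 8-bit image stored as a flat array (function names are mine, not the lecture's):

```c
#include <string.h>

/* histogram() fills h so that h[k] is the number of pixels with gray
 * level k; threshold() produces a binary image, marking pixels at or
 * above the threshold t as 1 and the rest as 0. */
void histogram(const unsigned char *img, int n, int h[256])
{
    memset(h, 0, 256 * sizeof(int));
    for (int i = 0; i < n; i++)
        h[img[i]]++;
}

void threshold(const unsigned char *img, unsigned char *bin, int n, int t)
{
    for (int i = 0; i < n; i++)
        bin[i] = (img[i] >= t) ? 1 : 0;
}
```

For a bimodal histogram, t would be chosen in the valley between the two modes.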

GRAY LEVEL THRESHOLDING (DEMO)


BINARY IMAGE


IMAGE SEGMENTATION CONNECTED COMPONENT LABELING


Segmentation can be viewed as a process of pixel classification: the image is segmented into objects or regions by assigning individual pixels to classes. Connected component labeling assigns a pixel to a class by checking whether an adjoining (i.e., neighboring) pixel already belongs to that class. There are two standard definitions of pixel connectivity: 4-neighbor connectivity and 8-neighbor connectivity.

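A sketch of the two-pass labeling scheme described above, using a union-find table to resolve label equivalences (a common implementation choice; the lecture does not prescribe one, and all names here are mine):

```c
/* Two-pass connected component labeling, 4-neighbor connectivity.
 * img is row-major, 1 = object, 0 = background; component labels are
 * written into lab; the return value is the number of components. */
#define MAXLAB 10000

static int parent[MAXLAB];

static int find_root(int x)
{
    while (parent[x] != x)
        x = parent[x] = parent[parent[x]];   /* path halving */
    return x;
}

int label4(const unsigned char *img, int *lab, int w, int h)
{
    int next = 1, count = 0;
    parent[0] = 0;                           /* background maps to itself */
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            if (!img[i]) { lab[i] = 0; continue; }
            int up   = (y > 0) ? lab[i - w] : 0;
            int left = (x > 0) ? lab[i - 1] : 0;
            if (!up && !left) {              /* new provisional label */
                lab[i] = next; parent[next] = next; next++;
            } else if (up && left) {         /* record the equivalence */
                lab[i] = up;
                parent[find_root(left)] = find_root(up);
            } else {
                lab[i] = up ? up : left;
            }
        }
    for (int l = 1; l < next; l++)           /* count equivalence classes */
        if (find_root(l) == l) count++;
    for (int i = 0; i < w * h; i++)          /* second pass: final labels */
        lab[i] = find_root(lab[i]);
    return count;
}
```

The union-find table plays the role of the table of equivalences shown later: provisional labels accumulate during the first pass, and the second pass replaces each by its class representative.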

IMAGE SEGMENTATION CONNECTED COMPONENT LABELING


4-neighbor connectivity: the pixel P(i,j) and its four edge neighbors.
8-neighbor connectivity: the pixel P(i,j) and all eight surrounding neighbors.


CONNECTED COMPONENT LABELING: FIRST PASS


[Figure: first-pass result. One blob receives the provisional labels B and C, and the scan records the equivalence B = C; a separate blob is labeled A.]


CONNECTED COMPONENT LABELING: SECOND PASS


[Figure: second-pass result. Every C has been replaced by B, leaving two objects: A and B.]


CONNECTED COMPONENT LABELING: EXAMPLE (DEMO)


CONNECTED COMPONENT LABELING: TABLE OF EQUIVALENCES


2 = 5     16 = 27   16 = 50   50 = 81     112 = 127
5 = 9     5 = 28    5 = 39    50 = 86     112 = 134
2 = 5     27 = 34   34 = 51   50 = 86     112 = 137
5 = 10    16 = 37   5 = 39    5 = 87      112 = 138
5 = 10    5 = 39    34 = 46   111 = 112
5 = 10    5 = 39    5 = 66    112 = 113
5 = 12    40 = 41   34 = 72   112 = 119
5 = 16    5 = 39    34 = 72   112 = 120
5 = 18    34 = 46   50 = 76   112 = 120
16 = 23   34 = 46   50 = 81   112 = 122


CONNECTED COMPONENT LABELING: TABLE OF EQUIVALENCES


2 = 5     2 = 37    2 = 86      111 = 138
2 = 9     2 = 39    2 = 87
2 = 10    40 = 41   111 = 112
2 = 12    2 = 46    111 = 113
2 = 16    2 = 50    111 = 119
2 = 18    2 = 51    111 = 120
2 = 23    2 = 66    111 = 122
2 = 27    2 = 72    111 = 127
2 = 28    2 = 76    111 = 134
2 = 34    2 = 81    111 = 137


IS THERE A MORE COMPUTATIONALLY EFFICIENT TECHNIQUE FOR SEGMENTING THE OBJECTS IN THE IMAGE?
Contour tracking (border following) identifies the pixels that fall on the boundaries of the objects, i.e., pixels that have a neighbor belonging to the background class or region. There are two standard code definitions used to represent boundaries: codes based on 4-connectivity (crack code) and codes based on 8-connectivity (chain code).


BOUNDARY REPRESENTATIONS: 4-CONNECTIVITY (CRACK CODE)


[Figure: the four crack-code directions, numbered 0-3, and an example boundary.]

CRACK CODE:
10111211222322333300103300


BOUNDARY REPRESENTATIONS: 8-CONNECTIVITY (CHAIN CODE)


[Figure: the eight chain-code directions, numbered 0-7, and an example boundary.]

CHAIN CODE:
12232445466601760


CONTOUR TRACKING ALGORITHM FOR GENERATING CRACK CODE


Identify a pixel P that belongs to the class object and a neighboring pixel (4-neighbor connectivity) Q that belongs to the class background. Depending on the position of Q relative to P, identify pixels U and V as follows:

CODE 0:   CODE 1:   CODE 2:   CODE 3:
 V Q       Q P       P U       U V
 U P       V U       Q V       P Q


CONTOUR TRACKING ALGORITHM


Assume that a pixel has a value of 1 if it belongs to the class object and 0 if it belongs to the class background. Pixels U and V determine the next move (i.e., the next element of crack code) as summarized in the following truth table:

U   V   TURN    NEXT CODE*
X   1   RIGHT   CODE - 1
1   0   NONE    CODE
0   0   LEFT    CODE + 1

*Implemented as a modulo-4 counter.

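The truth table transcribes directly into code; a minimal sketch (the function name is mine, not the lecture's):

```c
/* Given the values of pixels U and V (1 = object, 0 = background) and
 * the current crack-code direction, return the next direction.  The
 * CODE - 1 / CODE + 1 updates wrap modulo 4, as the slide requires. */
int next_code(int u, int v, int code)
{
    if (v)                      /* V = 1: turn right, CODE - 1 */
        return (code + 3) % 4;
    if (u)                      /* U = 1, V = 0: go straight   */
        return code;
    return (code + 1) % 4;      /* U = V = 0: turn left, CODE + 1 */
}
```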

CONTOUR TRACKING ALGORITHM


[Figure: worked example of the contour tracker stepping around an object boundary, applying the U/V truth table at each of the four codes.]


CONTOUR TRACKING ALGORITHM FOR GENERATING CRACK CODE


Software Demo!


CONTOUR TRACKING ALGORITHM FOR GENERATING CHAIN CODE


Identify a pixel P that belongs to the class object and a neighboring pixel (4-neighbor connectivity) R0 that belongs to the class background. Assume that a pixel has a value of 1 if it belongs to the class object and 0 if it belongs to the class background.

Assign the 8-connectivity neighbors of P to R0, R1, ..., R7 as follows:

R7 R6 R5
R0  P R4
R1 R2 R3

CONTOUR TRACKING ALGORITHM FOR GENERATING CHAIN CODE


[Figure: the neighbor numbering overlaid on successive boundary pixels of an example object.]

ALGORITHM:
  i = 0
  WHILE (Ri == 0) { i++ }
  Move P to Ri
  Set i = 6 for the next search
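The neighbor scan at the heart of this algorithm can be sketched as follows (the function name and calling convention are mine):

```c
/* One step of the chain-code follower: scan the eight neighbors
 * R[0..7] of the current boundary pixel, starting at index `start` and
 * wrapping modulo 8, and return the index of the first object neighbor.
 * That index is the next chain-code element; per the slide, the scan
 * for the following pixel restarts 6 positions back from it. */
int next_chain(const int R[8], int start)
{
    for (int k = 0; k < 8; k++) {
        int i = (start + k) % 8;
        if (R[i])
            return i;
    }
    return -1;                  /* isolated pixel: no object neighbor */
}
```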

OBJECT RECOGNITION BLOB ANALYSIS


Once the image has been segmented into classes representing the objects in the image, the next step is to generate a high-level description of the various objects. A comprehensive set of form parameters describing each object or region in an image is useful for object recognition. Ideally, the form parameters should be independent of the object's position and orientation as well as the distance between the camera and the object (i.e., the scale factor).


What are some examples of form parameters that would be useful in identifying the objects in the image below?


OBJECT RECOGNITION BLOB ANALYSIS


Examples of form parameters that are invariant with respect to position, orientation, and scale:
  Number of holes in the object
  Compactness or complexity: (perimeter)^2 / area
  Moment invariants
All of these parameters can be evaluated during contour following.

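A one-line sketch showing why (perimeter)^2 / area is scale-invariant: scaling the object by k multiplies the perimeter by k and the area by k^2, so the ratio is unchanged (the function name is mine):

```c
/* Compactness (complexity) of a region.  A disk gives the minimum
 * possible value, 4*pi; more convoluted outlines give larger values. */
double compactness(double perimeter, double area)
{
    return perimeter * perimeter / area;
}
```

A unit square gives 4^2 / 1 = 16; a square scaled by 2 gives 8^2 / 4 = 16, the same value.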

GENERALIZED MOMENTS
Shape features or form parameters provide a high level description of objects or regions in an image

Many shape features can be conveniently represented in terms of moments. The (p,q)th moment of a region R defined by the function f(x,y) is given by:

m_{pq} = \iint_R x^p y^q f(x,y)\,dx\,dy


GENERALIZED MOMENTS
In the case of a digital image of size n by m pixels, this equation simplifies to:

M_{ij} = \sum_{x=1}^{n} \sum_{y=1}^{m} x^i y^j f(x,y)

For binary images the function f(x,y) takes a value of 1 for pixels belonging to the class object and 0 for the class background.

GENERALIZED MOMENTS

M_{ij} = \sum_{x} \sum_{y} x^i y^j f(x,y)

[Figure: a 7-pixel object plotted on a grid, X axis 0-9 across, Y axis downward.]

i   j   M_ij
0   0   7     (area)
1   0   33
0   1   20
2   0   159   (moment of inertia)
0   2   64    (moment of inertia)
1   1   93

SOME USEFUL MOMENTS


The center of mass of a region can be defined in terms of generalized moments as follows:

\bar{X} = \frac{M_{10}}{M_{00}} \qquad \bar{Y} = \frac{M_{01}}{M_{00}}


SOME USEFUL MOMENTS


The moments of inertia relative to the center of mass can be determined by applying the general form of the parallel axis theorem:

M'_{02} = M_{02} - \frac{M_{01}^2}{M_{00}} \qquad
M'_{20} = M_{20} - \frac{M_{10}^2}{M_{00}} \qquad
M'_{11} = M_{11} - \frac{M_{10} M_{01}}{M_{00}}
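A sketch of the parallel-axis corrections, taking the raw moments as inputs (all names are mine):

```c
/* Central (center-of-mass-relative) second moments from the raw
 * moments, via the parallel axis theorem:
 *   mu20 = M20 - M10^2 / M00
 *   mu02 = M02 - M01^2 / M00
 *   mu11 = M11 - M10*M01 / M00 */
void central_moments(double m00, double m10, double m01,
                     double m20, double m02, double m11,
                     double *mu20, double *mu02, double *mu11)
{
    *mu20 = m20 - m10 * m10 / m00;
    *mu02 = m02 - m01 * m01 / m00;
    *mu11 = m11 - m10 * m01 / m00;
}
```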

SOME USEFUL MOMENTS


The principal axis of an object is the axis passing through the center of mass that yields the minimum moment of inertia. This axis forms an angle θ with respect to the X axis. The principal axis is useful in robotics for determining the orientation of randomly placed objects.

\tan 2\theta = \frac{2 M'_{11}}{M'_{20} - M'_{02}}

Example
X

Principal Axis

Center of Mass

Y

SOME (MORE) USEFUL MOMENTS


The minimum and maximum moments of inertia about an axis passing through the center of mass are given by:

I_{MIN} = \frac{M'_{02} + M'_{20}}{2} - \frac{\sqrt{(M'_{02} - M'_{20})^2 + 4 M'^2_{11}}}{2}

I_{MAX} = \frac{M'_{02} + M'_{20}}{2} + \frac{\sqrt{(M'_{02} - M'_{20})^2 + 4 M'^2_{11}}}{2}


SOME (MORE) USEFUL MOMENTS


The following moments are independent of position, orientation, and reflection. They can be used to identify the object in the image.

\phi_1 = M'_{20} + M'_{02}

\phi_2 = (M'_{20} - M'_{02})^2 + 4 M'^2_{11}

SOME (MORE) USEFUL MOMENTS


The following moments are normalized with respect to area. They are independent of position, orientation, reflection, and scale.

\phi_1' = \frac{\phi_1}{M_{00}^2} \qquad \phi_2' = \frac{\phi_2}{M_{00}^4}

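A sketch of the normalized invariants (all names are mine):

```c
/* Area-normalized moment invariants: phi1 divided by M00^2 and phi2
 * divided by M00^4, making both independent of scale as well as
 * position, orientation, and reflection. */
double phi1_norm(double mu20, double mu02, double m00)
{
    return (mu20 + mu02) / (m00 * m00);
}

double phi2_norm(double mu20, double mu02, double mu11, double m00)
{
    double phi2 = (mu20 - mu02) * (mu20 - mu02) + 4.0 * mu11 * mu11;
    return phi2 / (m00 * m00 * m00 * m00);
}
```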

EVALUATING MOMENTS DURING CONTOUR TRACKING


Generalized moments are computed by evaluating a double (i.e., surface) integral over a region of the image. The surface integral can be transformed into a line integral around the boundary of the region by applying Green's theorem. The line integral can easily be evaluated during contour tracking.

The process is analogous to using a planimeter to graphically evaluate the area of a geometric figure.

EVALUATING MOMENTS DIRECTLY FROM CRACK CODE DURING CONTOUR TRACKING


{
  switch (code[i]) {
  case 0:
    m00 = m00 - y;
    m01 = m01 - sum_y;
    m02 = m02 - sum_y2;
    x = x - 1;
    sum_x = sum_x - x;
    sum_x2 = sum_x2 - x*x;
    m11 = m11 - (x*sum_y);
    break;
  case 1:
    sum_y = sum_y + y;
    sum_y2 = sum_y2 + y*y;
    y = y + 1;
    m10 = m10 - sum_x;
    m20 = m20 - sum_x2;
    break;


EVALUATING MOMENTS DIRECTLY FROM CRACK CODE DURING CONTOUR TRACKING

  case 2:
    m00 = m00 + y;
    m01 = m01 + sum_y;
    m02 = m02 + sum_y2;
    m11 = m11 + (x*sum_y);
    sum_x = sum_x + x;
    sum_x2 = sum_x2 + x*x;
    x = x + 1;
    break;
  case 3:
    y = y - 1;
    sum_y = sum_y - y;
    sum_y2 = sum_y2 - y*y;
    m10 = m10 + sum_x;
    m20 = m20 + sum_x2;
    break;
  }
}


QUESTIONS

