Lecture 08 - Image Segmentation (Part 1)


Image segmentation
Dr. Muhammad Hanif
Classification vs segmentation
• Segmentation means to divide up the image into a patchwork of regions, each of which is "homogeneous", that is, the "same" in some sense: intensity, texture, colour, …
• Classification means to assign to each point in the image a class, where the classes are agreed in advance: grey matter (GM), white matter (WM), cerebrospinal fluid (CSF), air, … in the case of the brain.
• Note that the problems are inter-linked: a classifier implicitly segments an image, and a segmentation implies a classification.
Image segmentation
• The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application.
• The segmentation is based on measurements taken from the image, which might be grey level, colour, texture, depth or motion.
• Usually image segmentation is an initial and vital step in a series of processes aimed at overall image understanding.
Image segmentation
Let R represent the entire spatial region occupied by an image. Image segmentation is a process that partitions R into n sub-regions, R1, R2, …, Rn, such that

(a) ∪ᵢ₌₁ⁿ Rᵢ = R;
(b) Rᵢ is a connected region, for i = 1, 2, …, n;
(c) Rᵢ ∩ Rⱼ = ∅ for all i and j, i ≠ j;
(d) S(Rᵢ) = TRUE for i = 1, 2, …, n;
(e) S(Rᵢ ∪ Rⱼ) = FALSE for any adjacent regions Rᵢ and Rⱼ,

where S is a similarity criterion.
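The partition conditions above (every pixel is covered, the similarity criterion S holds within each region, and S fails when adjacent regions are merged) can be checked mechanically. Below is a minimal pure-Python sketch on a tiny hand-labelled image; the concrete choice of S ("all pixels in the region share one grey level") and the example arrays are illustrative assumptions, not part of the lecture.

```python
# Hypothetical check of the partition conditions on a tiny labelled image.
# 'labels' assigns each pixel of R to a sub-region R1..Rn; the similarity
# criterion S here is "all pixels in the region share one grey level".

image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
labels = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
]

def region_pixels(k):
    # Collect the grey levels of all pixels belonging to region k
    return [image[i][j]
            for i in range(len(image))
            for j in range(len(image[0]))
            if labels[i][j] == k]

def S(pixels):
    # Similarity criterion: every pixel has the same grey level
    return len(set(pixels)) == 1

regions = sorted({l for row in labels for l in row})

# Union of the regions covers R: every pixel carries some region label
assert all(l in regions for row in labels for l in row)
# S(Ri) = TRUE for each region taken on its own
assert all(S(region_pixels(k)) for k in regions)
# S(Ri ∪ Rj) = FALSE for the two adjacent regions
assert not S(region_pixels(1) + region_pixels(2))
print("partition conditions hold for this labelling")
```

Any real segmentation algorithm is, in effect, searching for a labelling that makes these assertions pass for its chosen S.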
Image segmentation: applications (figure slides)

Image segmentation: methodologies (figure slides)
Image segmentation
Segmentation algorithms for monochrome images are generally based on one of two basic properties of gray-scale values:

• Discontinuity
The approach is to partition an image based on abrupt changes in gray-scale levels. The principal areas of interest within this category are the detection of isolated points, lines, and edges in an image.

• Similarity
The principal approaches in this category are based on thresholding, region growing, and region splitting/merging.
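The simplest instance of the similarity-based family is global thresholding: pixels are grouped by whether their grey level exceeds a cut-off. The sketch below is a minimal illustration; the image values and the hand-picked threshold T = 100 are assumptions for demonstration only.

```python
# Minimal sketch of the similarity approach: a global threshold splits the
# image into two regions (labels 0 and 1) by grey level alone.

img = [[12, 15, 200],
       [14, 220, 210],
       [11, 13, 205]]

T = 100  # threshold chosen by hand for this toy image
segmented = [[1 if p > T else 0 for p in row] for row in img]

print(segmented)   # → [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

Region growing and splitting/merging refine this idea by also requiring spatial connectivity, rather than grouping by grey level alone.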
Monochrome image segmentation
• Detection of discontinuities:
There are three basic types of gray-level discontinuities: points, lines, and edges.
• The common way is to run a mask through the image: the detection is based on convolving the image with a spatial mask.
Point detection
• A point has been detected at the location p(i, j) on which the mask is centered if |R| > T, where T is a nonnegative threshold and R is the response obtained with the following mask:

-1 -1 -1
-1  8 -1
-1 -1 -1

• The idea is that the gray level of an isolated point will be quite different from the gray level of its neighbors.
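The rule above is easy to implement directly: slide the mask over the interior of the image, compute the response R at each position, and keep the locations where |R| exceeds T. A pure-Python sketch, with a toy 5×5 image and a hand-picked threshold as illustrative assumptions:

```python
# Sketch of point detection with the standard 3x3 mask (8 at the centre,
# -1 elsewhere); processes interior pixels only, for brevity.

MASK = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

def point_response(img, i, j):
    # R = sum of mask coefficients times the grey levels under the mask
    return sum(MASK[a][b] * img[i - 1 + a][j - 1 + b]
               for a in range(3) for b in range(3))

def detect_points(img, T):
    h, w = len(img), len(img[0])
    return [(i, j) for i in range(1, h - 1) for j in range(1, w - 1)
            if abs(point_response(img, i, j)) > T]

# Flat background with one isolated bright point at (2, 2)
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200

print(detect_points(img, T=500))   # → [(2, 2)]
```

Note the mask coefficients sum to zero, so the response over any perfectly flat area is 0; only an isolated deviation produces a large |R|.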
Point detection (figure slide)
Line detection
Line detection uses four 3×3 masks, one per direction (horizontal, +45°, vertical, −45°); for example, the horizontal mask is

-1 -1 -1
 2  2  2
-1 -1 -1

Note: the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than other possible directions.
Line detection
• If we are interested in detecting all lines in an image in the direction defined by a given mask, we simply run the mask through the image and threshold the absolute value of the result.
• The points that are left are the strongest responses, which, for lines one pixel thick, correspond closest to the direction defined by the mask.
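The run-and-threshold procedure above can be sketched in a few lines of pure Python. Here the horizontal mask (preferred direction weighted 2, other rows −1) is applied to a toy image containing a one-pixel-thick bright horizontal line; the image and threshold are assumptions for illustration.

```python
# Sketch of line detection: convolve-style response of a 3x3 directional
# mask, then threshold the absolute value, as described above.

H_MASK = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]

def mask_response(img, mask, i, j):
    # Sum of mask coefficients times the grey levels under the mask
    return sum(mask[a][b] * img[i - 1 + a][j - 1 + b]
               for a in range(3) for b in range(3))

def detect_lines(img, mask, T):
    h, w = len(img), len(img[0])
    return [(i, j) for i in range(1, h - 1) for j in range(1, w - 1)
            if abs(mask_response(img, mask, i, j)) > T]

# One-pixel-thick bright horizontal line on row 2
img = [[0] * 5 for _ in range(5)]
img[2] = [100] * 5

print(detect_lines(img, H_MASK, T=300))   # → [(2, 1), (2, 2), (2, 3)]
```

Swapping in the vertical or diagonal masks detects lines in the other three directions with exactly the same code.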
Edge detection
• It locates sharp changes in the intensity function.
• Edges are pixels where brightness changes abruptly.
• Edges are found by looking at neighboring pixels.
• A change of the image function can be described by a gradient that points in the direction of the largest growth of the image function.
• An edge is a property attached to an individual pixel and is calculated from the image function behavior in a neighborhood of the pixel.
• The magnitude of the first derivative detects the presence of an edge.
• The sign of the second derivative determines whether an edge pixel lies on the dark side or the light side.
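The last two bullets can be made concrete on a one-dimensional intensity profile, using simple finite differences as a stand-in for the derivatives; the step-edge profile below is an illustrative assumption.

```python
# Sketch on a 1-D intensity profile: the first-derivative magnitude peaks
# at the edge, and the second derivative changes sign across it.

profile = [10, 10, 10, 100, 100, 100]   # step edge between x = 2 and x = 3

# Forward differences as discrete first and second derivatives
first  = [profile[x + 1] - profile[x] for x in range(len(profile) - 1)]
second = [first[x + 1] - first[x] for x in range(len(first) - 1)]

print(first)    # → [0, 0, 90, 0, 0]   magnitude peaks at the edge
print(second)   # → [0, 90, -90, 0]    sign change across the edge
```

The positive-to-negative transition in the second derivative is the zero crossing exploited by second-derivative edge detectors, and its sign on either side tells dark side from light side.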
Edge detection (figure slides)
Using image gradient (figure slides)
Derivatives and noise (figure slides)
Edge detection using 2nd derivative (figure slides)
1st vs 2nd derivative (figure slides)
Edge detection (figure slide)
