Digital Image Processing: Segmentation-5
Instructor Name
Dr. Muhammad Sharif
What is Clustering?
Organizing data into classes such that there is
high intra-class similarity
low inter-class similarity
Finding the class labels and the number of classes
directly from the data (in contrast to classification).
More informally, finding natural groupings among
objects.
Clustering
Clustering is another very important unsupervised
learning technique used in image processing for the
purpose of segmentation.
It works by identifying the groups of pixels within
an image that have similarities.
In clustering, an image is divided into various
disjoint groups called clusters.
Clustering of objects is done on the basis of
attributes.
Building Clusters
1. Select a distance measure
2. Select a clustering algorithm
3. Define the distance between two clusters
4. Determine the number of clusters
5. Validate the analysis
Examples of Distances
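The distance formulas on the original slide are not recoverable from the text; as a representative sketch (the vectors x and y below are assumed feature vectors, not taken from the slide), two commonly used measures are the Euclidean and the Manhattan (city-block) distance:
d_E(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}
d_M(x, y) = \sum_{i=1}^{n} |x_i - y_i|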
Clustering Example
Image segmentation
Goal: Break up the image into meaningful or perceptually similar
regions
Clustering
Unsupervised learning
Requires data, but no labels
Approaches of Clustering
Clustering algorithms are unsupervised algorithms:
since the target is unknown, the result is not known
to the user in advance.
There are a few clustering methods that are
frequently used by researchers:
Partitional Clustering (K-Means)
Hierarchical Clustering
Hierarchical Clustering
These find successive clusters using previously
established clusters.
Agglomerative ("bottom-up"):
Agglomerative algorithms begin with each element as
a separate cluster and merge them into successively
larger clusters.
Divisive ("top-down"):
Divisive algorithms begin with the whole set and
proceed to divide it into successively smaller clusters.
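A minimal MATLAB sketch of agglomerative (bottom-up) clustering, assuming the Statistics and Machine Learning Toolbox; the data matrix X, the 'average' linkage, and the choice of 3 clusters are illustrative assumptions, not from the slides:
%Hypothetical 2-D data points (one observation per row)
X = [1 1; 1.5 2; 3 4; 5 7; 3.5 5; 4.5 5; 3.5 4.5];
%Build the merge tree bottom-up using average linkage
Z = linkage(X,'average');
%Cut the tree into 3 clusters
T = cluster(Z,'maxclust',3);
%Visualize the merge hierarchy
figure,dendrogram(Z),title('Agglomerative Clustering');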
Partitional Clustering
Partitional clustering is a primary clustering
technique.
It works by splitting the data into small parts based on
their resemblance to a central point and to their neighbors.
At the end, all the parts are combined to get the end
result.
K-means is one of the widely used algorithms that
fall under this category.
K-Means Clustering
K-means clustering is a widely used unsupervised
clustering algorithm, often used to segment objects
from the background.
It is simple to implement and also computationally
efficient.
It works by grouping the provided data into clusters
based on centroids.
The algorithm operates on unlabeled data whose
classes and labels are unknown; groups are identified
based on similarities, and the number of groups is
represented by K.
K-Means Clustering
The objective of K Means is to group similar data
points together and discover underlying patterns.
To achieve this objective, K-means looks for a
fixed number (k) of clusters in a dataset, where k is
defined at the start.
The K-means algorithm identifies k centroids, and
then allocates every data point to the nearest
centroid, while keeping the clusters as compact
(small) as possible.
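In other words, K-means minimizes the total within-cluster variation; writing \mu_j for the centroid of cluster C_j, the objective can be expressed as:
J = \sum_{j=1}^{k} \sum_{x_i \in C_j} \| x_i - \mu_j \|^2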
K-Means: Steps
Choose the number of clusters K.
Randomly select K points as the initial centroids.
Assign each data point to the closest centroid to
form K clusters.
Compute the new centroid of each cluster.
Reassign each data point to the nearest new centroid.
If any point was reassigned, repeat the previous two
steps; otherwise the model is ready.
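A minimal MATLAB sketch of these steps (the data points, K, the random seed, and the iteration limit are illustrative assumptions; pdist2 requires the Statistics and Machine Learning Toolbox, and the sketch assumes no cluster becomes empty):
%Hypothetical data points (one observation per row)
X = [1 1; 1.5 2; 3 4; 5 7; 3.5 5; 4.5 5; 3.5 4.5];
%Step 1: choose the number of clusters K
K = 2;
%Step 2: randomly select K points as the initial centroids
rng(1);
C = X(randperm(size(X,1),K),:);
for iter = 1:100
    %Step 3: assign each data point to the closest centroid
    [~,idx] = min(pdist2(X,C),[],2);
    %Step 4: recompute the centroid of each cluster as its mean
    Cnew = C;
    for j = 1:K
        Cnew(j,:) = mean(X(idx==j,:),1);
    end
    %Steps 5-6: stop when the centroids no longer change
    if isequal(Cnew,C), break; end
    C = Cnew;
end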
K-Means: Example 1
Suppose there is some data from 3 different tumor
cell types that can be plotted on a line, and we need
to group it into 3 clusters.
Example 1: Cont…
Step 1:
Select the total number of clusters (k) to identify in
the data. In this case, we will select (k=3). That is
to say we want to identify 3 clusters.
Step 2:
Randomly select 3 data points as initial clusters.
Example 1: Cont...
Step 3:
Measure the distance between the 1st point and each
of the three initial clusters.
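For instance (with hypothetical 1-D values, not the numbers shown on the slide), if the 1st point is at x = 4 and the three initial clusters are at 1, 6 and 9, the distances are:
%Hypothetical 1-D point and initial cluster positions
x = 4; c = [1 6 9];
d = abs(x - c)   %gives [3 2 5]; the smallest distance identifies the nearest cluster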
Example 1: Cont…
Step 4:
Assign the 1st point to the nearest cluster. In this
case the nearest cluster is the Blue cluster.
Example 1: Cont…
Find the distance of the next point from all the centroids.
Example 1: Cont…
Now we need to figure out which cluster the
3rd point belongs to.
Example 1: Cont…
Assign the point to the nearest cluster.
Example 1: Cont…
Now that all the points are in clusters, we move on
to the next step.
Step 5:
Calculate the mean of each cluster.
Example 1: Cont.
The clustering result remains the same as in the
previous iteration, so the K-means process is done.
Example 1: Cont…
Let’s repeat the same process using different initial
clusters.
Example 1: Cont.
The clusters using the newly calculated means
finally result in the following clusters. (The figure
shows the total variation within the clusters.)
This result is better than the previous one, as the
variation is more evenly spread among the clusters;
K-means repeats the process with different initial
clusters and keeps the result with the best total
variation within the clusters.
Selecting Ideal “k”
Now let's look at the same example using different
numbers of clusters (k).
(Figures show the clustering for K = 1 and K = 2.)
Selecting Ideal “k”: Cont.
(Figures show the clustering for K = 3 and K = 4.)
Plotting Variance
Each time we add a new cluster, the total variation
within the clusters becomes smaller than before.
However, the reduction is large up to k = 3 and much
smaller afterwards, so the variation plot suggests
k = 3 as the ideal number of clusters for this data.
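A MATLAB sketch of this idea, assuming the same kind of hypothetical data as before; kmeans and its third output sumd (the within-cluster sums of distances) are from the Statistics and Machine Learning Toolbox:
%Hypothetical data points (one observation per row)
X = [1 1; 1.5 2; 3 4; 5 7; 3.5 5; 4.5 5; 3.5 4.5];
wcv = zeros(1,4);
for k = 1:4
    %Third output sumd holds the within-cluster sums of distances
    [~,~,sumd] = kmeans(X,k);
    %Total variation within the clusters for this k
    wcv(k) = sum(sumd);
end
%Plot total within-cluster variation against k and look for the bend
figure,plot(1:4,wcv,'-o'),xlabel('k'),ylabel('Total variation within clusters');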
Advantages of K Means
Relatively simple to implement.
Very simple and intuitive.
Scales to large data sets.
Guarantees convergence.
Easily adapts to new examples.
Good classification if the number of samples is
large enough.
Disadvantages of K-Means
Choosing k may be tricky.
Test stage is computationally expensive.
No training stage, all the work is done during the test
stage.
Different initial partitions can result in different final
clusters (see the note after this list).
It does not work well with clusters of different size
and different density.
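As a practical note on the initialization issue mentioned above (this option belongs to MATLAB's Statistics and Machine Learning Toolbox and is not from the slides): kmeans can repeat the clustering with several random initial partitions and keep the solution with the lowest total variation within the clusters.
%Run 10 different random initializations and keep the best result
idx = kmeans(X,3,'Replicates',10);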
Applications of K-Means Clustering
Optical Character Recognition
Biometrics
Diagnostic Systems
Military Applications
Document Clustering
Identifying crime-prone areas
Customer Segmentation
Insurance Fraud Detection
Public Transport Data Analysis
Image Segmentation
K-Means: Code
%Read image into workspace
I = imread('water_scene.png');
%Show original image
figure,imshow(I),title('Original Image');
%Segment the image into three regions using k-means clustering
[L,Centers] = imsegkmeans(I,3);
%Label the clustered regions for easy visualization
B = labeloverlay(I,L);
%Show segmented & clustered image
figure,imshow(B),title('Labeled Image');
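To reproduce the comparison on the next slide, the same call can be repeated with different cluster counts; this loop is an illustrative sketch using the image I loaded above:
%Segment and display the image for several values of K
for k = [3 6 9]
    [L,Centers] = imsegkmeans(I,k);
    B = labeloverlay(I,L);
    figure,imshow(B),title(sprintf('K = %d',k));
end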
Code: Results
(Figures show the original image and the labeled results for K = 3, K = 6, and K = 9.)
Summary
Clustering
K Means Clustering
Next Lecture
Example of K-Means Clustering
Hierarchical Clustering
Slide Credits and References
Wilhelm Burger and Mark J. Burge, Digital Image Processing, Springer, 2008
University of Utah, CS 4640: Image Processing Basics, Spring 2012
Rutgers University, CS 334, Introduction to Imaging and Multimedia, Fall 2012
https://www.slideshare.net/VikasGupta24/image-segmentation-66118502
https://www.slideshare.net/tawosetimothy/image-segmentation-34430371?next_slideshow=1
https://www.ques10.com/p/34966/explain-image-seg
https://en.wikipedia.org/wiki/Image_segmentation
https://www.slideshare.net/guest49d49/segmentation-presentation
https://www.slideshare.net/kasunrangawijeweera/k-means-clustering-algorithm
https://webdocs.cs.ualberta.ca/~zaiane/courses/cmput695/F07/exercises/Exercises695Clus-solution.pdf
http://disp.ee.ntu.edu.tw/meeting/%E6%98%B1%E7%BF%94/Segmentation%20tutorial.pdf
https://www.youtube.com/watch?v=4b5d3muPQmA
https://www.analyticsvidhya.com/blog/2019/04/introduction-image-segmentation-techniques-python/
http://people.csail.mit.edu/dsontag/courses/ml12/slides/lecture14.pdf
THANK YOU