CS490D: Introduction To Data Mining: Prof. Chris Clifton


CS490D:

Introduction to Data Mining


Prof. Chris Clifton
February 22, 2004
Clustering
CS490D 2
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 3
What is Cluster Analysis?
Cluster: a collection of data objects
Similar to one another within the same cluster
Dissimilar to the objects in other clusters
Cluster analysis
Grouping a set of data objects into clusters
Clustering is unsupervised classification: no
predefined classes
Typical applications
As a stand-alone tool to get insight into data
distribution
As a preprocessing step for other algorithms
CS490D 4
General Applications of
Clustering
Pattern Recognition
Spatial Data Analysis
create thematic maps in GIS by clustering feature
spaces
detect spatial clusters and explain them in spatial
data mining
Image Processing
Economic Science (especially market research)
WWW
Document classification
Cluster Weblog data to discover groups of similar
access patterns
CS490D 5
Examples of Clustering
Applications
Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
Land use: Identification of areas of similar land use in an earth
observation database
Insurance: Identifying groups of motor insurance policy holders with
a high average claim cost
City-planning: Identifying groups of houses according to their house
type, value, and geographical location
Earthquake studies: Observed earthquake epicenters should be
clustered along continent faults
CS490D 6
What Is Good Clustering?
A good clustering method will produce high quality
clusters with
high intra-class similarity
low inter-class similarity
The quality of a clustering result depends on both the
similarity measure used by the method and its
implementation.
The quality of a clustering method is also measured by
its ability to discover some or all of the hidden patterns.
CS490D 7
Requirements of Clustering in
Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input
parameters
Able to deal with noise and outliers
Insensitive to order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability
CS490D 8
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 9
Data Structures
Data matrix (two modes): n objects described by p variables

$$\begin{bmatrix}
x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\
\vdots &        & \vdots &        & \vdots \\
x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\
\vdots &        & \vdots &        & \vdots \\
x_{n1} & \cdots & x_{nf} & \cdots & x_{np}
\end{bmatrix}$$

Dissimilarity matrix (one mode): n-by-n table of pairwise distances

$$\begin{bmatrix}
0      &        &        &        \\
d(2,1) & 0      &        &        \\
d(3,1) & d(3,2) & 0      &        \\
\vdots & \vdots & \vdots & \ddots \\
d(n,1) & d(n,2) & \cdots & \cdots & 0
\end{bmatrix}$$
CS490D 10
Measure the Quality of
Clustering
Dissimilarity/Similarity metric: Similarity is expressed in
terms of a distance function, which is typically metric:
d(i, j)
There is a separate quality function that measures the
goodness of a cluster.
The definitions of distance functions are usually very
different for interval-scaled, boolean, categorical, ordinal
and ratio variables.
Weights should be associated with different variables
based on applications and data semantics.
It is hard to define "similar enough" or "good enough";
the answer is typically highly subjective.
CS490D 11
Type of data in clustering
analysis
Interval-scaled variables:
Binary variables:
Nominal, ordinal, and ratio variables:
Variables of mixed types:
CS490D 12
Interval-valued variables
Standardize data
Calculate the mean absolute deviation:

$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$$

where $m_f = \frac{1}{n}(x_{1f} + x_{2f} + \cdots + x_{nf})$

Calculate the standardized measurement (z-score):

$$z_{if} = \frac{x_{if} - m_f}{s_f}$$

Using mean absolute deviation is more robust than using
standard deviation
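A minimal Python sketch of this standardization (the function name and sample values are illustrative, not from the slides):

```python
def standardize(values):
    """Standardize one variable f: z_if = (x_if - m_f) / s_f,
    where s_f is the mean absolute deviation."""
    n = len(values)
    m_f = sum(values) / n                        # mean of variable f
    s_f = sum(abs(x - m_f) for x in values) / n  # mean absolute deviation
    return [(x - m_f) / s_f for x in values]

# The outlier 10.0 is damped less harshly than it would be by a
# standard-deviation-based z-score, which is the robustness point above.
print(standardize([2.0, 4.0, 4.0, 10.0]))  # [-1.2, -0.4, -0.4, 2.0]
```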
CS490D 13
Similarity and Dissimilarity
Between Objects
Distances are normally used to measure the similarity or
dissimilarity between two data objects
Some popular ones include the Minkowski distance:

$$d(i,j) = \sqrt[q]{|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q}$$

where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects, and q is a positive integer

If q = 1, d is the Manhattan distance:

$$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$$
CS490D 14
Similarity and Dissimilarity
Between Objects (Cont.)
If q = 2, d is the Euclidean distance:

$$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$$

Properties:
$d(i,j) \ge 0$
$d(i,i) = 0$
$d(i,j) = d(j,i)$
$d(i,j) \le d(i,k) + d(k,j)$
Also, one can use weighted distance, parametric Pearson
product moment correlation, or other dissimilarity
measures
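The whole Minkowski family fits in one small function; a sketch (names and sample points are illustrative):

```python
def minkowski(i, j, q=2):
    """Minkowski distance between two p-dimensional points.
    q=1 gives the Manhattan distance, q=2 the Euclidean distance."""
    assert len(i) == len(j) and q >= 1
    return sum(abs(a - b) ** q for a, b in zip(i, j)) ** (1.0 / q)

p1, p2 = (1, 2, 3), (4, 6, 3)
print(minkowski(p1, p2, q=1))  # 7.0 (Manhattan)
print(minkowski(p1, p2, q=2))  # 5.0 (Euclidean)
```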
CS490D 15
Binary Variables
A contingency table for binary data (rows: object i; columns: object j):

        1      0      sum
1       a      b      a+b
0       c      d      c+d
sum     a+c    b+d    p

Simple matching coefficient (invariant, if the binary variable is
symmetric):

$$d(i,j) = \frac{b+c}{a+b+c+d}$$

Jaccard coefficient (noninvariant if the binary variable is
asymmetric):

$$d(i,j) = \frac{b+c}{a+b+c}$$
16
Dissimilarity between Binary
Variables
Example:

gender is a symmetric attribute
the remaining attributes are asymmetric binary
let the values Y and P be set to 1, and the value N be set to 0

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N

$$d(\text{jack}, \text{mary}) = \frac{0+1}{2+0+1} = 0.33$$

$$d(\text{jack}, \text{jim}) = \frac{1+1}{1+1+1} = 0.67$$

$$d(\text{jim}, \text{mary}) = \frac{1+2}{1+1+2} = 0.75$$
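The same numbers can be reproduced with a short Jaccard-dissimilarity sketch (the 0/1 encodings follow the Y/P → 1, N → 0 rule above; the helper name is an assumption):

```python
def asymmetric_binary_d(x, y):
    """Jaccard dissimilarity d = (b + c) / (a + b + c) over asymmetric
    binary attributes; 0-0 matches (d) are ignored as uninformative."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1..Test-4; gender omitted because it is symmetric
jack, mary, jim = [1,0,1,0,0,0], [1,0,1,0,1,0], [1,1,0,0,0,0]
print(round(asymmetric_binary_d(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_d(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_d(jim, mary), 2))   # 0.75
```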
CS490D 17
Nominal Variables
A generalization of the binary variable in that it can take
more than 2 states, e.g., red, yellow, blue, green
Method 1: Simple matching
m: # of matches, p: total # of variables

$$d(i,j) = \frac{p - m}{p}$$

Method 2: use a large number of binary variables
creating a new binary variable for each of the M nominal states
CS490D 18
Ordinal Variables
An ordinal variable can be discrete or continuous
Order is important, e.g., rank
Can be treated like interval-scaled
replace $x_{if}$ by its rank $r_{if}$
map the range of each variable onto [0, 1] by replacing the i-th object
in the f-th variable by

$$z_{if} = \frac{r_{if} - 1}{M_f - 1}, \qquad r_{if} \in \{1, \ldots, M_f\}$$

compute the dissimilarity using methods for interval-scaled
variables
CS490D 19
Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a
nonlinear scale, approximately at exponential scale,
such as $Ae^{Bt}$ or $Ae^{-Bt}$
Methods:
treat them like interval-scaled variables: not a good choice!
(why? the scale can be distorted)
apply logarithmic transformation: $y_{if} = \log(x_{if})$
treat them as continuous ordinal data and treat their rank as
interval-scaled
CS490D 20
Variables of Mixed
Types
A database may contain all six types of variables:
symmetric binary, asymmetric binary, nominal, ordinal, interval,
and ratio
One may use a weighted formula to combine their
effects:

$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)}\, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$

f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, or $d_{ij}^{(f)} = 1$ otherwise
f is interval-based: use the normalized distance
f is ordinal or ratio-scaled: compute ranks $r_{if}$ and $z_{if} = \frac{r_{if}-1}{M_f-1}$, and treat $z_{if}$ as interval-scaled
CS490D 21
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 22
Major Clustering Approaches
Partitioning algorithms: Construct various partitions and then
evaluate them by some criterion
Hierarchy algorithms: Create a hierarchical decomposition of the set
of data (or objects) using some criterion
Density-based: based on connectivity and density functions
Grid-based: based on a multiple-level granularity structure
Model-based: A model is hypothesized for each of the clusters and
the idea is to find the best fit of the data to the given model
CS490D:
Introduction to Data Mining
Prof. Chris Clifton
February 24, 2004
Clustering
CS490D 24
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 25
Partitioning Algorithms: Basic
Concept
Partitioning method: Construct a partition of a database
D of n objects into a set of k clusters
Given k, find a partition of k clusters that optimizes the
chosen partitioning criterion
Global optimal: exhaustively enumerate all partitions
Heuristic methods: k-means and k-medoids algorithms
k-means (MacQueen'67): Each cluster is represented by the
center of the cluster
k-medoids or PAM (Partition Around Medoids) (Kaufman &
Rousseeuw'87): Each cluster is represented by one of the
objects in the cluster
CS490D 26
The K-Means Clustering
Method
Given k, the k-means algorithm is implemented in four
steps:
Partition objects into k nonempty subsets
Compute seed points as the centroids of the clusters of the
current partition (the centroid is the center, i.e., mean point, of
the cluster)
Assign each object to the cluster with the nearest seed point
Go back to Step 2; stop when no more new assignments are made
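A compact Python sketch of these four steps, assuming 2-D points and squared Euclidean distance (all names and sample data are illustrative):

```python
import random

def kmeans(points, k, max_iters=100):
    """Plain k-means: pick k seeds, assign each point to the nearest
    centroid, recompute centroids, repeat until assignments settle."""
    centroids = random.sample(points, k)          # step 1: arbitrary seeds
    assignment = None
    for _ in range(max_iters):
        # step 3: assign each object to the cluster with the nearest seed
        new_assignment = [
            min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2 +
                                        (p[1] - centroids[c][1]) ** 2)
            for p in points
        ]
        if new_assignment == assignment:          # no new assignments: stop
            break
        assignment = new_assignment
        # step 2: recompute each centroid as the mean point of its cluster
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, assignment

pts = [(1, 1), (1.5, 2), (3, 4), (5, 7), (3.5, 5), (4.5, 5), (3.5, 4.5)]
print(kmeans(pts, k=2))
```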
CS490D 27
The K-Means Clustering
Method
[Figure: k-means iterations on a 2-D point set with K = 2. Arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign objects and update the means again, repeating until no reassignment occurs.]
CS490D 28
Comments on the K-Means
Method
Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters,
and t is # iterations. Normally, k, t << n.
Comparing: PAM: O(k(n-k)²), CLARA: O(ks² + k(n-k))
Comment: Often terminates at a local optimum. The global optimum may be
found using techniques such as: deterministic annealing and genetic
algorithms
Weakness
Applicable only when a mean is defined; then what about categorical data?
Need to specify k, the number of clusters, in advance
Unable to handle noisy data and outliers
Not suitable to discover clusters with non-convex shapes
CS490D 29
Variations of the K-Means
Method
A few variants of the k-means which differ in
Selection of the initial k means
Dissimilarity calculations
Strategies to calculate cluster means
Handling categorical data: k-modes (Huang'98)
Replacing means of clusters with modes
Using new dissimilarity measures to deal with categorical objects
Using a frequency-based method to update modes of clusters
A mixture of categorical and numerical data: k-prototype method
CS490D 30
What is the Problem with the
k-Means Method?
The k-means algorithm is sensitive to outliers, since an object with
an extremely large value may substantially distort the distribution
of the data.
K-Medoids: Instead of taking the mean value of the objects in a
cluster as a reference point, a medoid can be used: the most
centrally located object in a cluster.
CS490D 31
The K-Medoids Clustering
Method
Find representative objects, called medoids, in clusters
PAM (Partitioning Around Medoids, 1987)
starts from an initial set of medoids and iteratively replaces one of the
medoids by one of the non-medoids if it improves the total distance of
the resulting clustering
PAM works effectively for small data sets, but does not scale well for
large data sets
CLARA (Kaufmann & Rousseeuw, 1990)
CLARANS (Ng & Han, 1994): Randomized sampling
Focusing + spatial data structure (Ester et al., 1995)
CS490D 32
Typical k-medoids algorithm (PAM)

[Figure: PAM on a 2-D point set with K = 2, showing total costs of 20 and 26. Arbitrarily choose k objects as initial medoids; assign each remaining object to the nearest medoid; randomly select a non-medoid object O_random; compute the total cost of swapping; swap O and O_random if quality is improved; loop until no change.]
CS490D 33
PAM (Partitioning Around
Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built into S-PLUS
Use real objects to represent the clusters:
Select k representative objects arbitrarily
For each pair of a non-selected object h and a selected object i,
calculate the total swapping cost $TC_{ih}$
For each pair of i and h,
if $TC_{ih} < 0$, i is replaced by h
then assign each non-selected object to the most similar
representative object
Repeat steps 2-3 until there is no change
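A brute-force sketch of the PAM swap loop (Manhattan distance and the sample points are illustrative assumptions; a real implementation computes TC_ih incrementally instead of re-scoring every candidate swap):

```python
def total_cost(points, medoids, dist):
    """Sum of distances from each point to its nearest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k, dist=lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1])):
    """Greedy PAM loop: try every (medoid i, non-medoid h) swap and keep it
    whenever it lowers the total cost (TC_ih < 0); stop when no swap helps."""
    medoids = points[:k]                  # arbitrary initial medoids
    improved = True
    while improved:
        improved = False
        base = total_cost(points, medoids, dist)
        for i in list(medoids):
            for h in points:
                if h in medoids:
                    continue
                candidate = [h if m == i else m for m in medoids]
                cost = total_cost(points, candidate, dist)
                if cost < base:           # i.e. TC_ih < 0
                    medoids, base = candidate, cost
                    improved = True
    return medoids

print(pam([(1, 1), (2, 1), (2, 2), (8, 8), (8, 9), (25, 25)], k=2))
```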
PAM Clustering: total swapping cost $TC_{ih} = \sum_j C_{jih}$

[Figure: four 2-D cases for the contribution $C_{jih}$ of a point j when medoid i is swapped with non-medoid h (t is another current medoid):
j reassigned to t: $C_{jih} = d(j,t) - d(j,i)$
j reassigned to h: $C_{jih} = d(j,h) - d(j,i)$
j stays with t: $C_{jih} = 0$
j reassigned from t to h: $C_{jih} = d(j,h) - d(j,t)$]
CS490D 35
What is the problem with
PAM?
PAM is more robust than k-means in the presence of
noise and outliers, because a medoid is less influenced
by outliers or other extreme values than a mean
PAM works efficiently for small data sets but does not
scale well for large data sets:
O(k(n-k)²) per iteration, where n is # of data points and k is # of clusters
Sampling-based method:
CLARA (Clustering LARge Applications)
CS490D 36
CLARA (Clustering Large
Applications) (1990)
CLARA (Kaufmann and Rousseeuw in 1990)
Built in statistical analysis packages, such as S+
It draws multiple samples of the data set, applies PAM on each
sample, and gives the best clustering as the output
Strength: deals with larger data sets than PAM
Weakness:
Efficiency depends on the sample size
A good clustering based on samples will not necessarily represent a
good clustering of the whole data set if the sample is biased
CS490D 37
CLARANS (Randomized
CLARA) (1994)
CLARANS (A Clustering Algorithm based on Randomized Search)
(Ng and Han'94)
CLARANS draws sample of neighbors dynamically
The clustering process can be presented as searching a graph
where every node is a potential solution, that is, a set of k medoids
If the local optimum is found, CLARANS starts with new randomly
selected node in search for a new local optimum
It is more efficient and scalable than both PAM and CLARA
Focusing techniques and spatial access structures may further
improve its performance (Ester et al.'95)
CS490D 38
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 39
Hierarchical Clustering
Use distance matrix as clustering criteria. This
method does not require the number of clusters
k as an input, but needs a termination condition
[Figure: five objects a, b, c, d, e clustered over steps 0-4. Agglomerative (AGNES) merges bottom-up: {a, b} → ab, {d, e} → de, {c, de} → cde, {ab, cde} → abcde. Divisive (DIANA) runs the same steps in reverse, top-down.]
CS490D 40
AGNES (Agglomerative
Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Use the Single-Link method and the dissimilarity matrix.
Merge nodes that have the least dissimilarity
Go on in a non-descending fashion
Eventually all nodes belong to the same cluster
[Figure: three 2-D scatter plots showing AGNES progressively merging the closest points into larger clusters.]
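Assuming SciPy is available, single-link agglomerative clustering in the spirit of AGNES can be sketched as follows (the sample points are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Six 2-D points forming two visually separate groups
X = np.array([[1, 1], [1.5, 1], [2, 1.5], [8, 8], [8.5, 8], [9, 8.5]])

# Single-link (minimum distance) agglomerative clustering:
# repeatedly merge the two clusters with the least dissimilarity
Z = linkage(X, method='single')

# Cut the resulting dendrogram to obtain two clusters
print(fcluster(Z, t=2, criterion='maxclust'))  # e.g. [1 1 1 2 2 2]
```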
CS490D 41
Decompose data objects into several levels of nested
partitioning (tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the
dendrogram at the desired level, then each connected
component forms a cluster.
A Dendrogram Shows How the
Clusters are Merged Hierarchically
CS490D 42
DIANA (Divisive Analysis)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Inverse order of AGNES
Eventually each node forms a cluster on its own
[Figure: three 2-D scatter plots showing DIANA progressively splitting one cluster into smaller ones.]
CS490D:
Introduction to Data Mining
Prof. Chris Clifton
March 1, 2004
Clustering
CS490D 44
More on Hierarchical
Clustering Methods
Major weakness of agglomerative clustering methods
do not scale well: time complexity of at least O(n²), where n is
the total number of objects
can never undo what was done previously
Integration of hierarchical with distance-based clustering
BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
CURE (1998): selects well-scattered points from the cluster and
then shrinks them towards the center of the cluster by a specified
fraction
CHAMELEON (1999): hierarchical clustering using dynamic
modeling
CS490D 45
BIRCH (1996)
BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies,
by Zhang, Ramakrishnan, Livny (SIGMOD'96)
Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent clustering
structure of the data)
Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes
of the CF-tree
Scales linearly: finds a good clustering with a single scan and
improves the quality with a few additional scans
Weakness: handles only numeric data, and is sensitive to the order of
the data records.
CS490D 46
Clustering Feature: CF = (N, LS, SS)

N: number of data points
LS: linear sum of the N points, $\sum_{i=1}^{N} X_i$
SS: square sum of the N points, $\sum_{i=1}^{N} X_i^2$

Example: the five points (3,4), (2,6), (4,5), (4,7), (3,8) give the
Clustering Feature Vector CF = (5, (16,30), (54,190))
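The additivity of CFs is easy to see in code; a small sketch for 2-D points (the class name is illustrative) that reproduces the example above:

```python
class CF:
    """Clustering Feature CF = (N, LS, SS) for 2-D points; CFs are
    additive, which lets BIRCH maintain them incrementally in the tree."""
    def __init__(self):
        self.n, self.ls, self.ss = 0, (0.0, 0.0), (0.0, 0.0)

    def add_point(self, p):
        self.n += 1
        self.ls = (self.ls[0] + p[0], self.ls[1] + p[1])
        self.ss = (self.ss[0] + p[0] ** 2, self.ss[1] + p[1] ** 2)

    def centroid(self):
        return (self.ls[0] / self.n, self.ls[1] / self.n)

cf = CF()
for p in [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]:
    cf.add_point(p)
print(cf.n, cf.ls, cf.ss)   # 5 (16.0, 30.0) (54.0, 190.0)
print(cf.centroid())        # (3.2, 6.0)
```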
CS490D 47
CF-Tree in BIRCH
Clustering feature:
summary of the statistics for a given subcluster: the 0-th, 1st and
2nd moments of the subcluster from the statistical point of view.
registers crucial measurements for computing clusters and utilizes
storage efficiently
A CF tree is a height-balanced tree that stores the
clustering features for a hierarchical clustering
A nonleaf node in a tree has descendants or children
The nonleaf nodes store sums of the CFs of their children
A CF tree has two parameters
Branching factor: specifies the maximum number of children
Threshold: max diameter of sub-clusters stored at the leaf nodes
[Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6. The root and non-leaf nodes hold entries CF_1, CF_2, ..., each with a child pointer; leaf nodes hold CF entries and are chained by prev/next pointers.]
CS490D 49
CURE (Clustering Using
REpresentatives )
CURE: proposed by Guha, Rastogi & Shim, 1998
Stops the creation of a cluster hierarchy if a level consists of k
clusters
Uses multiple representative points to evaluate the distance
between clusters, adjusts well to arbitrary shaped clusters and
avoids single-link effect
CS490D 50
Drawbacks of Distance-
Based Method
Drawbacks of square-error based clustering method
Consider only one point as representative of a cluster
Good only for clusters of convex shape and similar size and density,
and only if k can be reasonably estimated
CS490D 51
Cure: The Algorithm
Draw a random sample s
Partition the sample into p partitions, each of size s/p
Partially cluster each partition into s/pq clusters
Eliminate outliers
by random sampling
if a cluster grows too slowly, eliminate it
Cluster the partial clusters
Label data on disk
CS490D 52
Data Partitioning and
Clustering
[Figure: a 2-D sample of s = 50 points partitioned into p = 2 partitions of s/p = 25 points each, then partially clustered into s/pq = 5 clusters per partition.]
CS490D 53
Cure: Shrinking
Representative Points
Shrink the multiple representative points towards the
gravity center by a fraction α
Multiple representatives capture the shape of the cluster
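A minimal sketch of the shrinking step (the fraction alpha and the sample representatives are illustrative):

```python
def shrink_representatives(reps, alpha=0.3):
    """Move each well-scattered representative point a fraction alpha of
    the way toward the cluster's gravity center, as CURE does before
    measuring inter-cluster distances."""
    cx = sum(x for x, _ in reps) / len(reps)   # gravity center of cluster
    cy = sum(y for _, y in reps) / len(reps)
    return [(x + alpha * (cx - x), y + alpha * (cy - y)) for x, y in reps]

reps = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(shrink_representatives(reps, alpha=0.3))
# [(1.5, 1.5), (8.5, 1.5), (8.5, 8.5), (1.5, 8.5)]
```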
CS490D 54
Clustering Categorical Data:
ROCK
ROCK: Robust Clustering using linKs,
by S. Guha, R. Rastogi, K. Shim (ICDE'99)
Use links to measure similarity/proximity
Not distance based
Computational complexity: $O(n^2 + n m_m m_a + n^2 \log n)$
Basic ideas:
Similarity function and neighbors:
Let $T_1 = \{1,2,3\}$, $T_2 = \{3,4,5\}$

$$Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$$

$$Sim(T_1, T_2) = \frac{|\{3\}|}{|\{1,2,3,4,5\}|} = \frac{1}{5} = 0.2$$
CS490D 55
Rock: Algorithm
Links: The number of common neighbours for
the two points.




Algorithm
Draw random sample
Cluster with links
Label data on disk
Example: given the point sets {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5},
{1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5},
the points {1,2,3} and {1,2,4} have 3 common neighbors:
link({1,2,3}, {1,2,4}) = 3
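A small sketch reproducing the similarity and link counts above (the neighbor threshold theta = 0.5 is an assumption for this example):

```python
def sim(t1, t2):
    """Jaccard similarity |T1 ∩ T2| / |T1 ∪ T2| between point sets."""
    return len(t1 & t2) / len(t1 | t2)

def links(points, a, b, theta=0.5):
    """Number of common neighbors of a and b, where p and q are
    neighbors iff sim(p, q) >= theta."""
    return sum(1 for p in points
               if p not in (a, b) and sim(a, p) >= theta and sim(b, p) >= theta)

pts = [frozenset(s) for s in
       [{1,2,3},{1,2,4},{1,2,5},{1,3,4},{1,3,5},
        {1,4,5},{2,3,4},{2,3,5},{2,4,5},{3,4,5}]]
print(sim(pts[0], pts[-1]))        # {1,2,3} vs {3,4,5}: 1/5 = 0.2
print(links(pts, pts[0], pts[1]))  # link({1,2,3}, {1,2,4}) = 3
```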
CS490D 56
CHAMELEON (Hierarchical
clustering using dynamic modeling)
CHAMELEON: by G. Karypis, E. H. Han, and V. Kumar ('99)
Measures the similarity based on a dynamic model
Two clusters are merged only if the interconnectivity and closeness
(proximity) between two clusters are high relative to the internal
interconnectivity of the clusters and closeness of items within the
clusters
CURE ignores information about the interconnectivity of the objects,
while ROCK ignores information about the closeness of two clusters
A two-phase algorithm
1. Use a graph partitioning algorithm: cluster objects into a large number
of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm: find the
genuine clusters by repeatedly combining these sub-clusters
CS490D 57
Overall Framework of CHAMELEON

[Figure: Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters]
CS490D 58
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 59
Density-Based Clustering
Methods
Clustering based on density (local cluster criterion), such
as density-connected points
Major features:
Discover clusters of arbitrary shape
Handle noise
One scan
Need density parameters as termination condition
Several interesting studies:
DBSCAN: Ester, et al. (KDD'96)
OPTICS: Ankerst, et al. (SIGMOD'99)
DENCLUE: Hinneburg & D. Keim (KDD'98)
CLIQUE: Agrawal, et al. (SIGMOD'98)
CS490D 60
Density Concepts
Core object (CO): an object with at least M objects within a
radius ε (its ε-neighborhood)
Directly density reachable (DDR): x is a CO and y is in x's
ε-neighborhood
Density reachable: there exists a chain of DDR objects
from x to y
Density-based cluster: a set of density-connected objects,
maximal w.r.t. reachability
CS490D 61
Density-Based Clustering:
Background
Two parameters:
Eps: Maximum radius of the neighbourhood
MinPts: Minimum number of points in an Eps-neighbourhood of
that point
$N_{Eps}(p) = \{q \in D \mid dist(p, q) \le Eps\}$
Directly density-reachable: A point p is directly density-reachable
from a point q w.r.t. Eps, MinPts if
1) p belongs to $N_{Eps}(q)$
2) core point condition: $|N_{Eps}(q)| \ge MinPts$

[Figure: point p lies inside the Eps = 1 cm neighborhood of core point q, with MinPts = 5.]
CS490D 62
Density-Based Clustering:
Background (II)
Density-reachable:
A point p is density-reachable from a point q w.r.t. Eps, MinPts
if there is a chain of points $p_1, \ldots, p_n$ with $p_1 = q$, $p_n = p$,
such that $p_{i+1}$ is directly density-reachable from $p_i$
Density-connected:
A point p is density-connected to a point q w.r.t. Eps, MinPts
if there is a point o such that both p and q are density-reachable
from o w.r.t. Eps and MinPts

[Figure: left, a chain q → p_1 → p illustrating density-reachability; right, p and q both density-reachable from o, hence density-connected.]
CS490D 63
DBSCAN: Density Based Spatial
Clustering of Applications with Noise
Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial
databases with noise
[Figure: with Eps = 1 cm and MinPts = 5, points are labeled Core (interior of a dense region), Border (edge of a cluster), or Outlier (noise).]
CS490D 64
DBSCAN: The Algorithm
Arbitrarily select a point p
Retrieve all points density-reachable from p wrt Eps and MinPts.
If p is a core point, a cluster is formed.
If p is a border point, no points are density-reachable from p and
DBSCAN visits the next point of the database.
Continue the process until all of the points have been processed.
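A self-contained Python sketch of this loop (parameter values and sample points are illustrative; -1 marks noise):

```python
def region_query(points, p, eps):
    """N_Eps(p): indices of all points within distance Eps of point p."""
    return [q for q in range(len(points))
            if sum((a - b) ** 2 for a, b in zip(points[p], points[q])) <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels, cluster = [None] * len(points), 0
    for p in range(len(points)):
        if labels[p] is not None:
            continue
        neighbors = region_query(points, p, eps)
        if len(neighbors) < min_pts:       # not a core point (may stay noise)
            labels[p] = -1
            continue
        cluster += 1
        labels[p] = cluster
        seeds = list(neighbors)
        while seeds:                       # expand via density-reachability
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster        # border point: reachable, not core
            if labels[q] is not None:
                continue
            labels[q] = cluster
            q_neighbors = region_query(points, q, eps)
            if len(q_neighbors) >= min_pts:
                seeds.extend(q_neighbors)  # q is a core point: keep expanding
    return labels

pts = [(1, 1), (1.2, 1.1), (0.9, 1), (1.1, 0.9),
       (5, 5), (5.1, 5), (5, 5.1), (4.9, 5.1), (9, 1)]
print(dbscan(pts, eps=0.5, min_pts=3))  # two clusters plus one noise point
```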
CS490D 65
OPTICS: A Cluster-Ordering
Method (1999)
OPTICS: Ordering Points To Identify the
Clustering Structure
Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99)
Produces a special order of the database w.r.t. its
density-based clustering structure
This cluster-ordering contains info equivalent to the
density-based clusterings corresponding to a broad
range of parameter settings
Good for both automatic and interactive cluster
analysis, including finding intrinsic clustering structure
Can be represented graphically or using visualization
techniques
66
OPTICS: Some Extensions from DBSCAN
Index-based: k = number of dimensions, N = 20, p = 75%, M = N(1-p) = 5
Complexity: O(kN²)
Core distance of an object o: the smallest distance ε' that makes o a core object
Reachability distance of p from o: max(core-distance(o), d(o, p))
Example (MinPts = 5, ε = 3 cm): r(p1, o) = 2.8 cm, r(p2, o) = 4 cm

[Figure: a reachability plot; reachability-distance (undefined for the first object) plotted against the cluster-order of the objects, with valleys corresponding to clusters.]
Density-Based Cluster analysis:
OPTICS & Its Applications
CS490D 69
DENCLUE: Using density
functions
DENsity-based CLUstEring by Hinneburg & Keim
(KDD'98)
Major features
Solid mathematical foundation
Good for data sets with large amounts of noise
Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
Significantly faster than existing algorithms (faster than DBSCAN
by a factor of up to 45)
But needs a large number of parameters
CS490D 70
Denclue: Technical Essence
Uses grid cells but only keeps information about grid cells that do
actually contain data points and manages these cells in a tree-based
access structure.
Influence function: describes the impact of a data point within its
neighborhood.
Overall density of the data space can be calculated as the sum of
the influence function of all data points.
Clusters can be determined mathematically by identifying density
attractors.
Density attractors are local maxima of the overall density function.
CS490D 71
Gradient: the steepness of a slope

Example (Gaussian influence function and the resulting density and gradient):

$$f_{Gaussian}(x, y) = e^{-\frac{d(x,y)^2}{2\sigma^2}}$$

$$f^D_{Gaussian}(x) = \sum_{i=1}^{N} e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$$

$$\nabla f^D_{Gaussian}(x, x_i) = \sum_{i=1}^{N} (x_i - x)\, e^{-\frac{d(x,x_i)^2}{2\sigma^2}}$$
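In 1-D, the density function and a gradient climb toward a density attractor can be sketched as follows (step size, sigma, and the data are illustrative assumptions):

```python
import math

def density(x, data, sigma=1.0):
    """Overall density at x: sum of Gaussian influences of all points."""
    return sum(math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in data)

def density_attractor(x, data, sigma=1.0, step=0.1, iters=200):
    """Follow the density gradient uphill until x settles at a local
    maximum (a density attractor) of the overall density function."""
    for _ in range(iters):
        grad = sum((xi - x) * math.exp(-((x - xi) ** 2) / (2 * sigma ** 2))
                   for xi in data)
        x += step * grad
    return x

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]          # two 1-D density bumps
print(round(density_attractor(0.0, data), 2))  # climbs to the bump near 1.0
print(round(density_attractor(6.0, data), 2))  # climbs to the bump near 5.1
```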
CS490D 72
Density Attractor
CS490D 73
Center-Defined and
Arbitrary
CS490D:
Introduction to Data Mining
Prof. Chris Clifton
March 5, 2004
Clustering
CS490D 75
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 76
Grid-Based Clustering
Method
Using multi-resolution grid data structure
Several interesting methods
STING (a STatistical INformation Grid approach) by Wang, Yang
and Muntz (1997)
WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB'98)
A multi-resolution clustering approach using wavelet method
CLIQUE: Agrawal, et al. (SIGMOD'98)
CS490D 77
STING: A Statistical
Information Grid Approach
Wang, Yang and Muntz (VLDB'97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to
different levels of resolution
CS490D 78
STING: A Statistical
Information Grid Approach (2)
Each cell at a high level is partitioned into a number of smaller
cells in the next lower level
Statistical info of each cell is calculated and stored beforehand
and is used to answer queries
Parameters of higher level cells can be easily calculated from
parameters of lower level cells
count, mean, s (standard deviation), min, max
type of distribution: normal, uniform, etc.
Use a top-down approach to answer spatial data queries
Start from a pre-selected layer, typically one with a small number of
cells
For each cell in the current level, compute the confidence interval
CS490D 79
STING: A Statistical
Information Grid Approach (3)
Remove the irrelevant cells from further consideration
When finished examining the current layer, proceed to the next
lower level
Repeat this process until the bottom layer is reached
Advantages:
Query-independent, easy to parallelize, incremental update
O(K), where K is the number of grid cells at the lowest level
Disadvantages:
All the cluster boundaries are either horizontal or vertical, and
no diagonal boundary is detected
CS490D 80
WaveCluster (1998)
Sheikholeslami, Chatterjee, and Zhang (VLDB'98)
A multi-resolution clustering approach which applies
wavelet transform to the feature space
A wavelet transform is a signal processing technique that
decomposes a signal into different frequency sub-bands.
Both grid-based and density-based
Input parameters:
# of grid cells for each dimension
the wavelet, and the # of applications of wavelet transform.
CS490D 82
WaveCluster (1998)
How to apply wavelet transform to find clusters:
Summarize the data by imposing a multidimensional
grid structure onto the data space
These multidimensional spatial data objects are
represented in an n-dimensional feature space
Apply wavelet transform on the feature space to find the
dense regions in the feature space
Apply wavelet transform multiple times, which results in
clusters at different scales, from fine to coarse
CS490D 83
Wavelet Transform
Decomposes a signal into different
frequency subbands. (can be applied to n-
dimensional signals)
Data are transformed to preserve relative
distance between objects at different
levels of resolution.
Allows natural clusters to become more
distinguishable
CS490D 84
What Is Wavelet (2)?
CS490D 85
Quantization
CS490D 86
Transformation
CS490D 87
WaveCluster (1998)
Why is wavelet transformation useful for clustering
Unsupervised clustering
It uses hat-shaped filters to emphasize regions where points
cluster, while simultaneously suppressing weaker information at
their boundaries
Effective removal of outliers
Multi-resolution
Cost efficiency
Major features:
Complexity O(N)
Detect arbitrary shaped clusters at different scales
Not sensitive to noise, not sensitive to input order
Only applicable to low dimensional data
CS490D 88
CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
Automatically identifies subspaces of a high dimensional data
space that allow better clustering than the original space
CLIQUE can be considered as both density-based and grid-based:
It partitions each dimension into the same number of equal-length
intervals
It partitions an m-dimensional data space into non-overlapping
rectangular units
A unit is dense if the fraction of total data points contained in the unit
exceeds the input model parameter
A cluster is a maximal set of connected dense units within a subspace
CS490D 89
CLIQUE: The Major Steps
Partition the data space and find the number of points
that lie inside each cell of the partition.
Identify the subspaces that contain clusters using the
Apriori principle
Identify clusters:
Determine dense units in all subspaces of interest
Determine connected dense units in all subspaces of interest
Generate minimal description for the clusters
Determine maximal regions that cover a cluster of connected
dense units for each cluster
Determination of minimal cover for each cluster
[Figure: CLIQUE example over attributes age (20-60), salary (in units of $10,000, 0-7), and vacation (weeks, 0-7). Dense units in the (age, salary) and (age, vacation) subspaces overlap around age 30-50, yielding a candidate cluster in the higher-dimensional subspace; density threshold t = 3.]
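A sketch of the counting-plus-Apriori idea on 2-D points (the bin range, threshold tau, and all names are illustrative assumptions):

```python
from collections import Counter

def dense_units(points, bins=10, lo=0.0, hi=100.0, tau=2):
    """1-D pass of CLIQUE: partition each dimension into equal-length
    intervals and keep units whose point count exceeds the threshold tau.
    Candidate 2-D units are then formed only from dense 1-D units
    (the Apriori principle)."""
    width = (hi - lo) / bins
    cell = lambda v: min(int((v - lo) / width), bins - 1)
    d1 = Counter((dim, cell(p[dim]))
                 for p in points for dim in range(len(points[0])))
    dense1 = {u for u, n in d1.items() if n > tau}
    # Apriori step: a 2-D unit can be dense only if both projections are
    d2 = Counter(((0, cell(p[0])), (1, cell(p[1]))) for p in points)
    dense2 = {u for u, n in d2.items()
              if n > tau and u[0] in dense1 and u[1] in dense1}
    return dense1, dense2

pts = [(30, 40), (32, 42), (35, 41), (33, 45), (80, 10)]
print(dense_units(pts, tau=2))
```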
CS490D 91
Strength and Weakness of
CLIQUE
Strength
It automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
It is insensitive to the order of records in input and
does not presume some canonical data distribution
It scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
Weakness
The accuracy of the clustering result may be
degraded at the expense of simplicity of the method
CS490D 92
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 93
Model-Based Clustering
Methods
Attempt to optimize the fit between the data and some
mathematical model
Statistical and AI approach
Conceptual clustering
A form of clustering in machine learning
Produces a classification scheme for a set of unlabeled objects
Finds characteristic description for each concept (class)
COBWEB (Fisher'87)
A popular and simple method of incremental conceptual learning
Creates a hierarchical clustering in the form of a classification tree
Each node refers to a concept and contains a probabilistic
description of that concept
CS490D 94
COBWEB Clustering
Method
A classification tree
CS490D 95
More on Statistical-Based
Clustering
Limitations of COBWEB
The assumption that the attributes are independent of each
other is often too strong because correlation may exist
Not suitable for clustering large database data: skewed tree
and expensive probability distributions
CLASSIT
an extension of COBWEB for incremental clustering of
continuous data
suffers similar problems as COBWEB
AutoClass (Cheeseman and Stutz, 1996)
Uses Bayesian statistical analysis to estimate the number of
clusters
Popular in industry
CS490D 96
Other Model-Based
Clustering Methods
Neural network approaches
Represent each cluster as an exemplar, acting as a
prototype of the cluster
New objects are distributed to the cluster whose
exemplar is the most similar according to some
distance measure
Competitive learning
Involves a hierarchical architecture of several units
(neurons)
Neurons compete in a winner-takes-all fashion for
the object currently being presented
CS490D 97
Model-Based Clustering
Methods
CS490D 98
Self-organizing feature
maps (SOMs)
Clustering is also performed by having several
units competing for the current object
The unit whose weight vector is closest to the
current object wins
The winner and its neighbors learn by having
their weights adjusted
SOMs are believed to resemble processing that
can occur in the brain
Useful for visualizing high-dimensional data in 2-
or 3-D space
CS490D 99
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 100
What Is Outlier Discovery?
What are outliers?
A set of objects that are considerably dissimilar from the
remainder of the data
Example: Sports: Michael Jordan, Wayne Gretzky, ...
Problem
Find top n outlier points
Applications:
Credit card fraud detection
Telecom fraud detection
Customer segmentation
Medical analysis
Outlier Discovery:
Statistical
Approaches
Assume a model of the underlying distribution that
generates the data set (e.g., normal distribution)
Use discordancy tests depending on
data distribution
distribution parameter (e.g., mean, variance)
number of expected outliers
Drawbacks
most tests are for a single attribute
In many cases, data distribution may not be known
CS490D 102
Outlier Discovery: Distance-
Based Approach
Introduced to counter the main limitations
imposed by statistical methods
We need multi-dimensional analysis without knowing
data distribution.
Distance-based outlier: A DB(p, D)-outlier is an
object O in a dataset T such that at least a
fraction p of the objects in T lies at a distance
greater than D from O
Algorithms for mining distance-based outliers
Index-based algorithm
Nested-loop algorithm
Cell-based algorithm
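The nested-loop algorithm is the simplest of the three; a sketch (the parameter names p_frac and d_min and the sample data are illustrative):

```python
def db_outliers(points, p_frac, d_min):
    """Nested-loop detection of DB(p, D)-outliers: O is an outlier if at
    least a fraction p_frac of the other objects lie farther than d_min."""
    outliers = []
    for o in points:
        far = sum(1 for q in points if q is not o and
                  sum((a - b) ** 2 for a, b in zip(o, q)) > d_min ** 2)
        if far >= p_frac * (len(points) - 1):
            outliers.append(o)
    return outliers

pts = [(1, 1), (1.1, 1.2), (0.9, 1.0), (1.2, 0.8), (10, 10)]
print(db_outliers(pts, p_frac=0.9, d_min=3.0))  # [(10, 10)]
```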
CS490D 103
Outlier Discovery:
Deviation-Based Approach
Identifies outliers by examining the main characteristics
of objects in a group
Objects that deviate from this description are
considered outliers
sequential exception technique
simulates the way in which humans can distinguish unusual
objects from among a series of supposedly like objects
OLAP data cube technique
uses data cubes to identify regions of anomalies in large
multidimensional data
CS490D 104
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Model-Based Clustering Methods
Outlier Analysis
Summary
CS490D 105
Problems and Challenges
Considerable progress has been made in scalable clustering
methods
Partitioning: k-means, k-medoids, CLARANS
Hierarchical: BIRCH, CURE
Density-based: DBSCAN, CLIQUE, OPTICS
Grid-based: STING, WaveCluster
Model-based: Autoclass, Denclue, Cobweb
Current clustering techniques do not address all the requirements
adequately
Constraint-based clustering analysis: Constraints exist in data space
(bridges and highways) or in user queries
CS490D 106
Constraint-Based Clustering
Analysis
Clustering analysis: fewer parameters but more user-desired
constraints, e.g., an ATM allocation problem
CS490D 107
Clustering With Obstacle
Objects
[Figure: clustering results taking obstacles into account vs. not taking obstacles into account.]
CS490D 108
Summary
Cluster analysis groups objects based on their similarity
and has wide applications
Measure of similarity can be computed for various types
of data
Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
There are still lots of research issues on cluster analysis,
such as constraint-based clustering
CS490D 109
References (1)
R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high
dimensional data for data mining applications. SIGMOD'98
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the
clustering structure. SIGMOD'99.
P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in
large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing
techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-
172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on
dynamical systems. In Proc. VLDB'98.
S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases.
SIGMOD'98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
CS490D 110
References (2)
L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis.
John Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering.
John Wiley and Sons, 1988.
P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets.
Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering
approach for very large spatial databases. VLDB'98.
W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to Spatial Data Mining.
VLDB'97.
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : an efficient data clustering method for very
large databases. SIGMOD'96.