Survey of Clustering Data Mining Techniques
Pavel Berkhin
Accrue Software, Inc.

Clustering is a division of data into groups of similar objects. Representing the data by fewer clusters necessarily loses certain fine details, but achieves simplification: it models data by its clusters. Data modeling puts clustering in a historical perspective rooted in mathematics, statistics, and numerical analysis. From a machine learning perspective clusters correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system represents a data concept. From a practical perspective clustering plays an outstanding role in data mining applications such as scientific data exploration, information retrieval and text mining, spatial database applications, Web analysis, CRM, marketing, medical diagnostics, computational biology, and many others. Clustering is the subject of active research in several fields such as statistics, pattern recognition, and machine learning. This survey focuses on clustering in data mining. Data mining adds to clustering the complications of very large datasets with very many attributes of different types. This imposes unique computational requirements on relevant clustering algorithms. A variety of algorithms have recently emerged that meet these requirements and were successfully applied to real-life data mining problems. They are the subject of this survey.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning – Concept learning; I.4.6 [Image Processing]: Segmentation; I.5.1 [Pattern Recognition]: Models; I.5.3 [Pattern Recognition]: Clustering

General Terms: Algorithms, Design

Additional Key Words and Phrases: Clustering, partitioning, data mining, unsupervised learning, descriptive learning, exploratory data analysis, hierarchical clustering, probabilistic clustering, k-means

Contents:
1. Introduction
1.1. Notations
1.2. Clustering Bibliography at a Glance
1.3. Classification of Clustering Algorithms
1.4. Plan of Further Presentation
1.5. Important Issues
Author's address: Pavel Berkhin, Accrue Software, 1045 Forest Knoll Dr., San Jose, CA, 95129; e-mail: [email protected]
2. Hierarchical Clustering
2.1. Linkage Metrics
2.2. Hierarchical Clusters of Arbitrary Shapes
2.3. Binary Divisive Partitioning
2.4. Other Developments
3. Partitioning Relocation Clustering
3.1. Probabilistic Clustering
3.2. K-Medoids Methods
3.3. K-Means Methods
4. Density-Based Partitioning
4.1. Density-Based Connectivity
4.2. Density Functions
5. Grid-Based Methods
6. Co-Occurrence of Categorical Data
7. Other Clustering Techniques
7.1. Constraint-Based Clustering
7.2. Relation to Supervised Learning
7.3. Gradient Descent and Artificial Neural Networks
7.4. Evolutionary Methods
7.5. Other Developments
8. Scalability and VLDB Extensions
9. Clustering High Dimensional Data
9.1. Dimensionality Reduction
9.2. Subspace Clustering
9.3. Co-Clustering
10. General Algorithmic Issues
10.1. Assessment of Results
10.2. How Many Clusters?
10.3. Data Preparation
10.4. Proximity Measures
10.5. Handling Outliers
Acknowledgements
References

1. Introduction

The goal of this survey is to provide a comprehensive review of different clustering techniques in data mining. Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Representing data by fewer clusters necessarily loses certain fine details (akin to lossy data compression), but achieves simplification: it represents many data objects by few clusters, and hence, it models data by its clusters. Data modeling puts clustering in a historical perspective rooted in mathematics, statistics, and numerical analysis. From a machine learning perspective clusters correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system represents a data concept. Therefore, clustering is unsupervised learning of a hidden data
concept. Data mining deals with large databases that impose on clustering analysis additional severe computational requirements. These challenges led to the emergence of powerful broadly applicable data mining clustering methods surveyed below.

1.1. Notations

To fix the context and to clarify prolific terminology, we consider a dataset X consisting of data points (or synonymously, objects, instances, cases, patterns, tuples, transactions) $x_i = (x_{i1}, \dots, x_{id}) \in A$ in attribute space A, where $i = 1:N$, and each component $x_{il} \in A_l$ is a numerical or nominal categorical attribute (or synonymously, feature, variable, dimension, component, field). For a discussion of attribute data types see [Han & Kamber 2001]. Such point-by-attribute data format conceptually corresponds to an $N \times d$ matrix and is used by the majority of algorithms reviewed below. However, data of other formats, such as variable length sequences and heterogeneous data, is becoming more and more popular. The simplest attribute space subset is a direct Cartesian product of sub-ranges $C = \prod C_l \subset A$, $C_l \subset A_l$, $l = 1:d$, called a segment (also cube, cell, region). A
unit is an elementary segment whose sub-ranges consist of a single category value, or of a small numerical bin. Describing the number of data points per unit represents an extreme case of clustering, a histogram, where no actual clustering takes place. This is a very expensive representation, and not a very revealing one. User-driven segmentation is another commonly used practice in data exploration that utilizes expert knowledge regarding the importance of certain sub-domains. We distinguish clustering from segmentation to emphasize the importance of the automatic learning process. The ultimate goal of clustering is to assign points to a finite system of k subsets, clusters. Usually the subsets do not intersect (this assumption is sometimes violated), and their union equals the full dataset with the possible exception of outliers:
$$X = C_1 \cup \dots \cup C_k \cup C_{outliers}, \qquad C_{j_1} \cap C_{j_2} = \emptyset,\ j_1 \neq j_2.$$
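To make the notation concrete, the following minimal sketch (not from the survey; the values are illustrative) shows the point-by-attribute format as an N x d matrix and a partition of its row indices into clusters plus an outlier set.

```python
# Minimal sketch: point-by-attribute data as an N x d matrix, and a clustering
# expressed as a partition of row indices with a separate outlier set.
import numpy as np

N, d = 6, 2
X = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],   # a tight group
              [5.0, 5.2], [5.1, 4.9],               # another group
              [9.0, 0.0]])                          # a likely outlier

C1, C2, outliers = {0, 1, 2}, {3, 4}, {5}           # clusters as index sets
assert C1 | C2 | outliers == set(range(N)) and not (C1 & C2)
```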
1.2. Clustering Bibliography at a Glance
General references regarding clustering include [Hartigan 1975; Spath 1980; Jain & Dubes 1988; Kaufman & Rousseeuw 1990; Dubes 1993; Everitt 1993; Mirkin 1996; Jain et al. 1999; Fasulo 1999; Kolatch 2001; Han et al. 2001; Ghosh 2002]. A very good introduction to contemporary data mining clustering techniques can be found in the textbook [Han & Kamber 2001]. There is a close relationship between clustering techniques and many other disciplines. Clustering has always been used in statistics [Arabie & Hubert 1996] and science [Massart & Kaufman 1983]. The classic introduction to the pattern recognition framework is given in [Duda & Hart 1973]. Typical applications include speech and character recognition. Machine learning clustering algorithms were applied to image segmentation and computer vision [Jain & Flynn 1996]. For statistical approaches to pattern recognition see [Dempster et al. 1977] and [Fukunaga 1990]. Clustering can be viewed as a density estimation problem. This is the subject of traditional multivariate statistical estimation [Scott 1992]. Clustering is also widely used for data compression in image processing, which is also known as vector quantization [Gersho & Gray 1992]. Data
fitting in numerical analysis provides still another venue in data modeling [Daniel & Wood 1980]. This survey's emphasis is on clustering in data mining. Such clustering is characterized by large datasets with many attributes of different types. Though we do not attempt to review particular applications, many important ideas are related to specific fields. Clustering in data mining was brought to life by intense developments in information retrieval and text mining [Cutting et al. 1992; Steinbach et al. 2000; Dhillon et al. 2001], spatial database applications, for example, GIS or astronomical data [Xu et al. 1998; Sander et al. 1998; Ester et al. 2000], sequence and heterogeneous data analysis [Cadez et al. 2001], Web applications [Cooley et al. 1999; Heer & Chi 2001; Foss et al. 2001], DNA analysis in computational biology [Ben-Dor & Yakhini 1999], and many others. These developments resulted in a large number of application-specific techniques that are beyond our scope, but also in some general techniques. These general techniques, and the classic clustering algorithms related to them, are surveyed below.
1.3. Classification of Clustering Algorithms
Categorization of clustering algorithms is neither straightforward nor canonical. In reality, the groups below overlap. For the reader's convenience we provide a classification closely followed by this survey. The corresponding terms are explained below.
Clustering Algorithms
  Hierarchical Methods
    Agglomerative Algorithms
    Divisive Algorithms
  Partitioning Methods
    Relocation Algorithms
    Probabilistic Clustering
    K-medoids Methods
    K-means Methods
    Density-Based Algorithms
      Density-Based Connectivity Clustering
      Density Functions Clustering
  Grid-Based Methods
  Methods Based on Co-Occurrence of Categorical Data
  Constraint-Based Clustering
  Clustering Algorithms Used in Machine Learning
    Gradient Descent and Artificial Neural Networks
    Evolutionary Methods
  Scalable Clustering Algorithms
  Algorithms For High Dimensional Data
    Subspace Clustering
    Projection Techniques
    Co-Clustering Techniques
1.4. Plan of Further Presentation

Traditionally clustering techniques are broadly divided into hierarchical and partitioning. Hierarchical clustering is further subdivided into agglomerative and divisive. The basics of hierarchical clustering include the Lance-Williams formula, the idea of conceptual clustering, the now classic algorithms SLINK and COBWEB, as well as the newer algorithms CURE and CHAMELEON. We survey them in the section Hierarchical Clustering.

While hierarchical algorithms build clusters gradually (as crystals are grown), partitioning algorithms learn clusters directly. In doing so, they either try to discover clusters by iteratively relocating points between subsets, or try to identify clusters as areas highly populated with data. Algorithms of the first kind are surveyed in the section Partitioning Relocation Methods. They are further categorized into probabilistic clustering (EM framework, algorithms SNOB, AUTOCLASS, MCLUST), k-medoids methods (algorithms PAM, CLARA, CLARANS, and its extension), and k-means methods (different schemes, initialization, optimization, harmonic means, extensions). Such methods concentrate on how well points fit into their clusters and tend to build clusters of proper convex shapes.

Partitioning algorithms of the second type are surveyed in the section Density-Based Partitioning. They try to discover dense connected components of data, which are flexible in terms of their shape. Density-based connectivity is used in the algorithms DBSCAN, OPTICS, and DBCLASD, while the algorithm DENCLUE exploits space density functions. These algorithms are less sensitive to outliers and can discover clusters of irregular shapes. They usually work with low-dimensional data of numerical attributes, known as spatial data. Spatial objects can include not only points, but also extended objects (algorithm GDBSCAN).

Some algorithms work with data indirectly by constructing summaries of data over attribute space subsets. They perform space segmentation and then aggregate appropriate segments. We discuss them in the section Grid-Based Methods. They frequently use hierarchical agglomeration as one phase of processing. The algorithms BANG, STING, and WaveCluster, and the idea of fractal dimension, are discussed in this section. Grid-based methods are fast and handle outliers well. Grid-based methodology is also used as an intermediate step in many other algorithms (for example, CLIQUE, MAFIA).

Categorical data is intimately connected with transactional databases. The concept of similarity alone is not sufficient for clustering such data. The idea of categorical data co-occurrence comes to the rescue. The algorithms ROCK, SNN, and CACTUS are surveyed in the section Co-Occurrence of Categorical Data. The situation is further aggravated as the number of items involved grows. To help with this problem, the effort is shifted from data clustering to pre-clustering of items or categorical attribute values. Developments based on hyper-graph partitioning and the algorithm STIRR exemplify this approach.

Many other clustering techniques have been developed, primarily in machine learning, that either have theoretical significance, are used traditionally outside the data mining community, or do not fit in the previously outlined categories. The boundary is blurred. In the section Other Clustering Techniques we discuss the relationship to supervised learning, gradient descent and ANN (LKMA, SOM), evolutionary methods (simulated annealing,
genetic algorithms (GA)), and the algorithm AMOEBA. We start, however, with the emerging field of constraint-based clustering that is influenced by requirements of real-world data mining applications.

Data mining primarily works with large databases. Clustering large datasets presents scalability problems reviewed in the section Scalability and VLDB Extensions. Here we talk about algorithms like DIGNET, about BIRCH and other data squashing techniques, and about Hoeffding or Chernoff bounds.

Another trait of real-life data is its high dimensionality. Corresponding developments are surveyed in the section Clustering High Dimensional Data. The trouble comes from a decrease in metric separation when the dimension grows. One approach to dimensionality reduction uses attribute transformations (DFT, PCA, wavelets). Another way to address the problem is through subspace clustering (algorithms CLIQUE, MAFIA, ENCLUS, OPTIGRID, PROCLUS, ORCLUS). Still another approach clusters attributes in groups and uses their derived proxies to cluster objects. This double clustering is known as co-clustering.

Issues that are common to different clustering methods are overviewed in the section General Algorithmic Issues. We talk about assessment of results, determination of the appropriate number of clusters to build, data preprocessing (attribute selection, data scaling, special data indices), proximity measures, and handling outliers.
1.5. Important Issues
What are the properties of clustering algorithms we are concerned with in data mining? These properties include:
- Type of attributes an algorithm can handle
- Scalability to large datasets
- Ability to work with high dimensional data
- Ability to find clusters of irregular shape
- Handling outliers
- Time complexity (when there is no confusion, we use the term complexity)
- Data order dependency
- Labeling or assignment (hard or strict vs. soft or fuzzy)
- Reliance on a priori knowledge and user defined parameters
- Interpretability of results
While we try to keep these issues in mind, realistically, we mention only a few with every algorithm we discuss. The above list is in no way exhaustive. For example, we also discuss such properties as the ability to work in a pre-defined memory buffer, to restart, and to provide an intermediate solution.
2. Hierarchical Clustering Hierarchical clustering builds a cluster hierarchy or, in other words, a tree of clusters, also known as a dendrogram. Every cluster node contains child clusters; sibling clusters partition the points covered by their common parent. Such an approach allows exploring
data on different levels of granularity. Hierarchical clustering methods are categorized into agglomerative (bottom-up) and divisive (top-down) [Jain & Dubes 1988; Kaufman & Rousseeuw 1990]. An agglomerative clustering starts with one-point (singleton) clusters and recursively merges two or more of the most appropriate clusters. A divisive clustering starts with one cluster of all data points and recursively splits the most appropriate cluster. The process continues until a stopping criterion (frequently, the requested number k of clusters) is achieved.

Advantages of hierarchical clustering include:
- Embedded flexibility regarding the level of granularity
- Ease of handling any form of similarity or distance
- Consequently, applicability to any attribute types
Disadvantages of hierarchical clustering are related to:
- Vagueness of termination criteria
- The fact that most hierarchical algorithms do not revisit already constructed (intermediate) clusters with the purpose of their improvement

The classic approaches to hierarchical clustering are presented in the sub-section Linkage Metrics. Hierarchical clustering based on linkage metrics results in clusters of proper (convex) shapes. Active contemporary efforts to build cluster systems that incorporate our intuitive concept of clusters as connected components of arbitrary shape, including the algorithms CURE and CHAMELEON, are surveyed in the sub-section Hierarchical Clusters of Arbitrary Shapes. Divisive techniques based on binary taxonomies are presented in the sub-section Binary Divisive Partitioning. The sub-section Other Developments contains information related to incremental learning, model-based clustering, and cluster refinement.

In hierarchical clustering our regular point-by-attribute data representation is sometimes of secondary importance. Instead, hierarchical clustering frequently deals with the $N \times N$ matrix of distances (dissimilarities) or similarities between training points. It is sometimes called a connectivity matrix. Linkage metrics are constructed (see below) from elements of this matrix. The requirement of keeping such a large matrix in memory is unrealistic. To relax this limitation, different devices are used to introduce sparsity into the connectivity matrix. This can be done by omitting entries smaller than a certain threshold, by using only a certain subset of data representatives, or by keeping with each point only a certain number of its nearest neighbors. For example, nearest neighbor chains have a decisive impact on memory consumption [Olson 1995]. A sparse matrix can further be used to represent intuitive concepts of closeness and connectivity. Notice that the way we process the original (dis)similarity matrix and construct a linkage metric reflects our a priori ideas about the data model.

With the (sparsified) connectivity matrix we can associate the connectivity graph $G = (X, E)$ whose vertices X are data points, and whose edges E and their weights are pairs of points and the corresponding positive matrix entries. This establishes a connection between hierarchical clustering and graph partitioning.
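One of the sparsification devices mentioned above, keeping only each point's nearest neighbors, can be sketched as follows (a minimal illustration, assuming Euclidean distance as the base dissimilarity; the function name and the choice k = 3 are ours, not the survey's).

```python
# Build a sparsified connectivity matrix: keep, for every point, only the
# distances to its k nearest neighbors, then symmetrize.
import numpy as np

def knn_connectivity(X, k=3):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # full N x N distances
    np.fill_diagonal(D, np.inf)                 # a point is not its own neighbor
    W = np.zeros_like(D)
    nn = np.argsort(D, axis=1)[:, :k]           # indices of the k nearest neighbors
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, nn.ravel()] = D[rows, nn.ravel()]   # keep only k entries per row
    return np.maximum(W, W.T)                   # weighted connectivity graph (symmetric)
```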
One of the most striking developments in hierarchical clustering is the algorithm BIRCH. Since scalability is the major achievement of this blend strategy, this algorithm is discussed in the section Scalability and VLDB Extensions. However, the data squashing used by BIRCH to achieve scalability has independent importance. Hierarchical clustering of large datasets can be very sub-optimal, even if the data fits in memory. Compressing data may improve the performance of hierarchical algorithms.
2.1. Linkage Metrics
Hierarchical clustering initializes a cluster system as a set of singleton clusters (agglomerative case) or a single cluster of all points (divisive case) and proceeds iteratively with merging or splitting of the most appropriate cluster(s) until the stopping criterion is achieved. The appropriateness of a cluster(s) for merging/splitting depends on the (dis)similarity of the cluster(s) elements. This reflects a general presumption that clusters consist of similar points. An important example of dissimilarity between two points is the distance between them. Other proximity measures are discussed in the section General Algorithmic Issues.

To merge or split subsets of points rather than individual points, the distance between individual points has to be generalized to the distance between subsets. Such a derived proximity measure is called a linkage metric. The type of linkage metric used significantly affects hierarchical algorithms, since it reflects a particular concept of closeness and connectivity. Major inter-cluster linkage metrics [Murtagh 1985; Olson 1995] include single link, average link, and complete link. The underlying dissimilarity measure (usually, distance) is computed for every pair of points with one point in the first set and another point in the second set. A specific operation such as minimum (single link), average (average link), or maximum (complete link) is applied to the pair-wise dissimilarity measures:
$$d(C_1, C_2) = \operatorname{operation}\{\, d(x, y) \mid x \in C_1,\ y \in C_2 \,\}.$$
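As an illustration, a minimal sketch of the three operations, assuming Euclidean distance as the underlying dissimilarity and clusters given as arrays of points (the function and parameter names are ours):

```python
# Sketch of single, average, and complete link between two clusters C1, C2,
# each given as an array of points; Euclidean distance is assumed.
import numpy as np

def linkage(C1, C2, operation="single"):
    D = np.linalg.norm(C1[:, None, :] - C2[None, :, :], axis=-1)  # all pair-wise d(x, y)
    if operation == "single":      # minimum over pairs
        return D.min()
    if operation == "complete":    # maximum over pairs
        return D.max()
    if operation == "average":     # average over pairs
        return D.mean()
    raise ValueError(operation)
```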
Early examples include the algorithm SLINK [Sibson 1973], which implements single link, Voorhees' method [Voorhees 1986], which implements average link, and the algorithm CLINK [Defays 1977], which implements complete link. Of these, SLINK is referenced the most. It is related to the problem of finding the Euclidean minimal spanning tree [Yao 1982] and has $O(N^2)$ complexity. The methods using inter-cluster distances defined in terms of pairs with points in the two respective clusters (subsets) are called graph methods. They do not use any cluster representation other than a set of points. This name naturally relates to the connectivity graph $G = (X, E)$ introduced above, since every data partition corresponds to a graph partition. Such methods can be complemented by so-called geometric methods, in which a cluster is represented by its central point. This results in centroid, median, and minimum variance linkage metrics. Under the assumption of numerical attributes, the center point is defined as the centroid, or as an average of the two cluster centroids subject to agglomeration. All of the above linkage metrics can be derived as instances of the Lance-Williams updating formula [Lance & Williams 1967]
$$d(C_i \cup C_j, C_k) = a(i)\, d(C_i, C_k) + a(j)\, d(C_j, C_k) + b\, d(C_i, C_j) + c\, \lvert d(C_i, C_k) - d(C_j, C_k) \rvert.$$
Here a, b, c are coefficients corresponding to a particular linkage. This formula expresses a linkage metric between the union of the two clusters and the third cluster in terms of
underlying components. The Lance-Williams formula is of utmost importance since it makes manipulation with (dis)similarities computationally feasible. Surveys of linkage metrics can be found in [Murtagh 1983; Day & Edelsbrunner 1984]. When the base measure is distance, these methods capture inter-cluster closeness. However, a similarity-based view that results in intra-cluster connectivity considerations is also possible. This is how the original average link agglomeration (Group-Average Method) [Jain & Dubes 1988] was introduced.

Linkage metrics-based hierarchical clustering suffers from high time complexity. Under reasonable assumptions, such as the reducibility condition (graph methods satisfy this condition), linkage metrics methods have $O(N^2)$ complexity [Olson 1995]. Despite the unfavorable time complexity, these algorithms are widely used. An example is the algorithm AGNES (AGglomerative NESting) [Kaufman & Rousseeuw 1990] used in S-Plus. When the connectivity $N \times N$ matrix is sparsified, graph methods directly dealing with the connectivity graph G can be used. In particular, the hierarchical divisive MST (Minimum Spanning Tree) algorithm is based on graph partitioning [Jain & Dubes 1988].
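Returning to the Lance-Williams formula above, a small sketch of the update, with the classical textbook coefficient choices for single and complete link (these specific coefficient values are standard but are not quoted in the survey's text):

```python
# Lance-Williams update: distance from the merged cluster Ci U Cj to Ck,
# computed from already known pair-wise cluster distances.
def lance_williams(d_ik, d_jk, d_ij, a_i, a_j, b, c):
    return a_i * d_ik + a_j * d_jk + b * d_ij + c * abs(d_ik - d_jk)

SINGLE_LINK   = dict(a_i=0.5, a_j=0.5, b=0.0, c=-0.5)    # reproduces min(d_ik, d_jk)
COMPLETE_LINK = dict(a_i=0.5, a_j=0.5, b=0.0, c=+0.5)    # reproduces max(d_ik, d_jk)

assert lance_williams(2.0, 5.0, 1.0, **SINGLE_LINK) == 2.0
assert lance_williams(2.0, 5.0, 1.0, **COMPLETE_LINK) == 5.0
```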
2.2. Hierarchical Clusters of Arbitrary Shapes
Linkage metrics based on Euclidean distance for hierarchical clustering of spatial data naturally predispose to clusters of proper convex shapes. Meanwhile, visual scanning of spatial images frequently reveals clusters with a curvy appearance.

Guha et al. [1998] introduced the hierarchical agglomerative clustering algorithm CURE (Clustering Using REpresentatives). This algorithm has a number of novel features of general significance. It takes special care with outliers and with the label assignment stage. It also uses two devices to achieve scalability. The first one is data sampling (section Scalability and VLDB Extensions). The second device is data partitioning into p partitions, so that fine granularity clusters are constructed in partitions first. A major feature of CURE is that it represents a cluster by a fixed number c of points scattered around it. The distance between two clusters used in the agglomerative process is equal to the minimum of distances between two scattered representatives. Therefore, CURE takes a middle-ground approach between the graph (all-points) methods and the geometric (one centroid) methods. Single and average link closeness are replaced by the representatives' aggregate closeness. Selecting representatives scattered around a cluster makes it possible to cover non-spherical shapes. As before, agglomeration continues until the requested number k of clusters is achieved. CURE employs one additional device: the originally selected scattered points are shrunk toward the geometric centroid of the cluster by a user-specified shrink factor. Shrinkage suppresses the effect of outliers, since outliers happen to be located further from the cluster centroid than the other scattered representatives. CURE is capable of finding clusters of different shapes and sizes, and it is insensitive to outliers. Since CURE uses sampling, estimation of its complexity is not straightforward. For low-dimensional data the authors provide a complexity estimate of $O(N_{sample}^2)$ defined in terms of the sample size. More exact bounds depend on input parameters: the shrink factor, the number of representative points c, the number of partitions, and the sample size. Figure 1 illustrates agglomeration in CURE. Three clusters, each with three representatives, are shown before and after the merge and shrinkage. The two closest representatives are connected by an arrow.
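A minimal sketch of the two CURE devices just described, representative shrinkage toward the centroid and the minimum-distance-between-representatives cluster distance (function names and the shrink factor value are ours; this is not the full algorithm):

```python
# Shrink scattered representatives toward the cluster centroid by a factor
# alpha, and measure inter-cluster distance as the closest pair of representatives.
import numpy as np

def shrink_representatives(reps, alpha=0.3):
    centroid = reps.mean(axis=0)
    return reps + alpha * (centroid - reps)        # each point moves alpha of the way

def cure_distance(reps_a, reps_b):
    D = np.linalg.norm(reps_a[:, None, :] - reps_b[None, :, :], axis=-1)
    return D.min()                                 # minimum over representative pairs
```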
While the algorithm CURE works with numerical attributes (particularly low-dimensional spatial data), the algorithm ROCK developed by the same researchers [Guha et al. 1999] targets hierarchical agglomerative clustering for categorical attributes. It is surveyed in the section Co-Occurrence of Categorical Data.

The hierarchical agglomerative algorithm CHAMELEON [Karypis et al. 1999a] utilizes dynamic modeling in cluster aggregation. It uses the connectivity graph G corresponding to the K-nearest neighbor model sparsification of the connectivity matrix: the edges to the K most similar points of any given point are preserved, and the rest are pruned. CHAMELEON has two stages. In the first stage small tight clusters are built to ignite the second stage. This involves a graph partitioning [Karypis & Kumar 1999]. In the second stage an agglomerative process is performed. It utilizes measures of relative inter-connectivity $RI(C_i, C_j)$ and relative closeness $RC(C_i, C_j)$; both are locally normalized by quantities related to the clusters $C_i$, $C_j$. In this sense the modeling is dynamic. Normalization involves certain non-obvious graph operations [Karypis & Kumar 1999]. CHAMELEON strongly relies on graph partitioning implemented in the library HMETIS (see the section Co-Occurrence of Categorical Data). The agglomerative process depends on user-provided thresholds. A decision to merge is made based on the combination
$$RI(C_i, C_j) \cdot RC(C_i, C_j)$$
of the local relative measures. The algorithm does not depend on assumptions about the data model. It has been shown to find clusters of different shapes, densities, and sizes in 2D (two-dimensional) space. It has a complexity of $O(Nm + N\log(N) + m^2\log(m))$, where m is the number of sub-clusters built during the first initialization phase. Figure 2 (analogous to the one in [Karypis & Kumar 1999]) presents a choice of four clusters (a)-(d) for a merge. While CURE would merge clusters (a) and (b), CHAMELEON makes the intuitively better choice of merging (c) and (d).
[Figure 1: three clusters with their scattered representatives, shown before and after a merge and shrinkage.]
[Figure 2: four candidate clusters (a)-(d) considered for a merge.]
2.3. Binary Divisive Partitioning

In linguistics, information retrieval, and document clustering applications, binary taxonomies are very useful. Linear algebra methods based on singular value decomposition (SVD) are used for this purpose in collaborative filtering and information retrieval [Berry & Browne 1999]. SVD application to hierarchical divisive clustering of document collections resulted in the PDDP (Principal Direction Divisive Partitioning)
algorithm [Boley 1998]. In our notation, an object x is a document, the l-th attribute corresponds to a word (index term), and the matrix entry $x_{il}$ is a measure (such as TF-IDF) of the frequency of term l in document x. PDDP constructs the SVD of the matrix
$$C = (X - e\,\bar{x}), \qquad \bar{x} = \frac{1}{N}\sum_{i=1:N} x_i, \qquad e = (1, \dots, 1)^T \in R^d.$$
This algorithm bisects data in Euclidean space by a hyperplane that passes through the data centroid orthogonally to the singular vector with the largest singular value. A k-way splitting is also possible if the k largest singular values are considered. Bisecting is a good way to categorize documents and it results in a binary tree. When k-means (2-means) is used for bisecting, the dividing hyperplane is orthogonal to the line connecting the two centroids. The comparative study of both approaches [Savaresi & Boley 2001] can be used for further references. Hierarchical divisive bisecting k-means was shown [Steinbach et al. 2000] to be preferable for document clustering.

While PDDP or 2-means are concerned with how to split a cluster, the problem of which cluster to split is also important. Simple strategies are: (1) split each node at a given level, (2) split the cluster with the highest cardinality, and (3) split the cluster with the largest intra-cluster variance. All three strategies have problems. For an analysis of this subject and better alternatives, see [Savaresi et al. 2002].
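A minimal sketch of a PDDP-style bisection under the above description (our simplification: dense numpy arrays and a split by the sign of the projection onto the principal direction; a real document-clustering implementation would use sparse matrices):

```python
# PDDP-style bisection: center the data at its centroid, take the principal
# direction (leading singular vector of the centered matrix), and split the
# points by the sign of their projection onto that direction.
import numpy as np

def pddp_bisect(X):
    C = X - X.mean(axis=0)                            # centered N x d matrix
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    direction = Vt[0]                                 # principal direction
    projection = C @ direction
    return projection <= 0, projection > 0            # boolean masks of the two halves
```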
2.4. Other Developments
Ward's method [Ward 1963] implements agglomerative clustering based not on a linkage metric, but on the objective function used in k-means (sub-section K-Means Methods). The merger decision is viewed in terms of its effect on the objective function.

The popular hierarchical clustering algorithm for categorical data COBWEB [Fisher 1987] has two very important qualities. First, it utilizes incremental learning. Instead of following divisive or agglomerative approaches, it dynamically builds a dendrogram by processing one data point at a time. Second, COBWEB belongs to conceptual or model-based learning. This means that each cluster is considered as a model that can be described intrinsically, rather than as a collection of points assigned to it. COBWEB's dendrogram is called a classification tree. Each tree node C, a cluster, is associated with the conditional probabilities for categorical attribute-value pairs,
$$\Pr(x_l = v_{lp} \mid C), \quad l = 1:d, \quad p = 1:|A_l|.$$
This can easily be recognized as a C-specific Naïve Bayes classifier. During the classification tree construction, every new point is descended along the tree and the tree is potentially updated (by an insert/split/merge/create operation). Decisions are based on an analysis of a category utility [Corter & Gluck 1992]
$$CU\{C_1, \dots, C_k\} = \frac{1}{k} \sum_{j=1:k} CU(C_j),$$
$$CU(C_j) = \sum_{l,p} \left( \Pr(x_l = v_{lp} \mid C_j)^2 - \Pr(x_l = v_{lp})^2 \right),$$
similar to the GINI index. It rewards clusters $C_j$ for increases in predictability of the categorical attribute values $v_{lp}$. Being incremental, COBWEB is fast with a complexity
of $O(tN)$, though it depends non-linearly on tree characteristics packed into a constant t. There is a similar incremental hierarchical algorithm for all-numerical attributes called CLASSIT [Gennari et al. 1989]. CLASSIT associates normal distributions with cluster nodes. Both algorithms can result in highly unbalanced trees.

Chiu et al. [2001] proposed another conceptual or model-based approach to hierarchical clustering. This development contains several different useful features, such as the extension of BIRCH-like preprocessing to categorical attributes, outlier handling, and a two-step strategy for monitoring the number of clusters including BIC (defined below). The model associated with a cluster covers both numerical and categorical attributes and constitutes a blend of Gaussian and multinomial models. Denote the corresponding multivariate parameters by $\theta$. With every cluster C we associate the logarithm of its (classification) likelihood
$$l_C = \sum_{x_i \in C} \log p(x_i \mid \theta).$$
The algorithm uses maximum likelihood estimates for the parameter $\theta$. The distance between two clusters is defined (instead of a linkage metric) as the decrease in log-likelihood
$$d(C_1, C_2) = l_{C_1} + l_{C_2} - l_{C_1 \cup C_2}$$
caused by merging the two clusters under consideration. The agglomerative process continues until the stopping criterion is satisfied. As such, determination of the best k is automatic. This algorithm has a commercial implementation (in SPSS Clementine). The complexity of the algorithm is linear in N for the summarization phase.

Traditional hierarchical clustering is inflexible due to its greedy approach: after a merge or a split is selected, it is not refined. Though COBWEB does reconsider its decisions, they are so inexpensive that the resulting classification tree can also have sub-par quality. Fisher [1996] studied iterative hierarchical cluster redistribution to improve a dendrogram once it is constructed. Karypis et al. [1999b] also researched refinement for hierarchical clustering. In particular, they brought attention to the relation of such a refinement to the well-studied refinement of k-way graph partitioning [Kernighan & Lin 1970]. For references related to parallel implementation of hierarchical clustering see [Olson 1995].
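As an illustration of the log-likelihood distance above, here is a hedged sketch restricted to numerical data under an independent-Gaussian-per-attribute model (our simplifying assumption; the cited method also handles categorical attributes and works with summarized statistics rather than raw points):

```python
# Classification log-likelihood of a cluster (array of points) under ML-fitted
# per-attribute Gaussians, and the merge distance d(C1, C2) = l_C1 + l_C2 - l_{C1 u C2}.
import numpy as np

def log_likelihood(C, eps=1e-6):
    mu = C.mean(axis=0)
    var = C.var(axis=0) + eps                      # ML variances, slightly regularized
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (C - mu) ** 2 / var)

def merge_distance(C1, C2):
    return log_likelihood(C1) + log_likelihood(C2) - log_likelihood(np.vstack([C1, C2]))
```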
3. Partitioning Relocation Clustering
In this section we survey data partitioning algorithms, which divide data into several subsets. Because checking all possible subset systems is computationally infeasible, certain greedy heuristics are used in the form of iterative optimization. Specifically, this means different relocation schemes that iteratively reassign points between the k clusters. Unlike traditional hierarchical methods, in which clusters are not revisited after being constructed, relocation algorithms gradually improve clusters. With appropriate data, this results in high quality clusters. One approach to data partitioning is to take a conceptual point of view that identifies the cluster with a certain model whose unknown parameters have to be found. More
specifically, probabilistic models assume that the data comes from a mixture of several populations whose distributions and priors we want to find. The corresponding algorithms are described in the sub-section Probabilistic Clustering. One clear advantage of probabilistic methods is the interpretability of the constructed clusters. Having a concise cluster representation also allows inexpensive computation of intra-cluster measures of fit that give rise to a global objective function (see log-likelihood below).

Another approach starts with the definition of an objective function depending on a partition. As we have seen (sub-section Linkage Metrics), pair-wise distances or similarities can be used to compute measures of inter- and intra-cluster relations. In iterative improvements such pair-wise computations would be too expensive. Using unique cluster representatives resolves the problem: now computation of the objective function becomes linear in N (and in the number of clusters $k \ll N$). Depending on how representatives are constructed, iterative optimization partitioning algorithms are subdivided into k-medoids and k-means methods. A k-medoid is the most appropriate data point within a cluster to represent it. Representation by k-medoids has two advantages. First, it presents no limitations on attribute types, and, second, the choice of medoids is dictated by the location of a predominant fraction of points inside a cluster and, therefore, it is less sensitive to the presence of outliers. In the k-means case a cluster is represented by its centroid, which is a mean (usually weighted average) of points within a cluster. This works conveniently only with numerical attributes and can be negatively affected by a single outlier. On the other hand, centroids have the advantage of clear geometric and statistical meaning. The corresponding algorithms are reviewed in the sub-sections K-Medoids Methods and K-Means Methods.
3.1. Probabilistic Clustering
In the probabilistic approach, data is considered to be a sample independently drawn from a mixture model of several probability distributions [McLachlan & Basford 1988]. The main assumption is that data points are generated by, first, randomly picking a model j with probability $\pi_j$, $j = 1:k$, and, second, by drawing a point x from the corresponding distribution. The area around the mean of each (supposedly unimodal) distribution constitutes a natural cluster. So we associate the cluster with the corresponding distribution's parameters such as mean, variance, etc. Each data point carries not only its (observable) attributes, but also a (hidden) cluster ID (class in pattern recognition). Each point x is assumed to belong to one and only one cluster, and we can estimate the probabilities $\Pr(C_j \mid x)$ of its assignment to the j-th model. The overall likelihood of the training data is its probability to be drawn from a given mixture model
$$L(X \mid C) = \prod_{i=1:N} \sum_{j=1:k} \pi_j \Pr(x_i \mid C_j).$$
The log-likelihood $\log(L(X \mid C))$ serves as an objective function, which gives rise to the Expectation-Maximization (EM) method. For a quick introduction to EM, see [Mitchell 1997]. Detailed descriptions and numerous references regarding this topic can be found in [Dempster et al. 1977; McLachlan & Krishnan 1997]. EM is a two-step iterative optimization. Step (E) estimates the probabilities $\Pr(x_i \in C_j)$, which is equivalent to a soft
(fuzzy) reassignment. Step (M) finds an approximation to the mixture model, given the current soft assignments. This boils down to finding mixture model parameters that maximize the log-likelihood. The process continues until log-likelihood convergence is achieved. Restarting and other tricks are used to facilitate finding a better local optimum. Moore [1999] suggested acceleration of the EM method based on a special data index, a KD-tree. Data is divided at each node into two descendants by splitting the widest attribute at the center of its range. Each node stores sufficient statistics (including a covariance matrix) similar to BIRCH. Approximate computing over a pruned tree accelerates EM iterations.

Probabilistic clustering has some important features:
- It can be modified to handle records of complex structure
- It can be stopped and resumed with consecutive batches of data, since clusters have a representation totally different from sets of points
- At any stage of the iterative process the intermediate mixture model can be used to assign cases (on-line property)
- It results in an easily interpretable cluster system
Because the mixture model has a clear probabilistic foundation, the determination of the most suitable number of clusters k becomes a more tractable task. From a data mining perspective, an excessive parameter set causes overfitting, while from a probabilistic perspective, the number of parameters can be addressed within the Bayesian framework. See the sub-section How Many Clusters? for more details, including the terms MML and BIC used in the next paragraph.

The algorithm SNOB [Wallace & Dowe 1994] uses a mixture model in conjunction with the MML principle. The algorithm AUTOCLASS [Cheeseman & Stutz 1996] utilizes a mixture model and covers a broad variety of distributions, including Bernoulli, Poisson, Gaussian, and log-normal distributions. Beyond fitting a particular fixed mixture model, AUTOCLASS extends the search to different models and different k. To do this AUTOCLASS heavily relies on Bayesian methodology, in which model complexity is reflected through certain coefficients (priors) in the expression for the likelihood, previously dependent only on parameter values. This algorithm has a history of industrial usage. The algorithm MCLUST [Fraley & Raftery 1999] is a software package (commercially linked with S-PLUS) for hierarchical and mixture model clustering, and discriminant analysis, using BIC for estimation of goodness of fit. MCLUST uses Gaussian models with ellipsoids of different volumes, shapes, and orientations.

An important property of probabilistic clustering is that the mixture model can be naturally generalized to clustering heterogeneous data. This is important in practice, where an individual (data object) has multivariate static data (demographics) in combination with variable length dynamic data (customer profile) [Smyth 1999]. The dynamic data can consist of finite sequences subject to a first-order Markov model with a transition matrix dependent on a cluster. This framework also covers data objects consisting of several sequences, where the number $n_i$ of sequences per $x_i$ is subject to a geometric distribution [Cadez et al. 2000]. To emulate sessions of different lengths, the finite-state Markov model (transitional probabilities between Web site pages) has to be augmented with a special end state. Cadez et al. [2001] used a mixture model for customer profiling based on transactional information.
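For concreteness, a compact sketch of one EM iteration for a mixture of spherical Gaussians (our simplifying assumption; the mixture models in the algorithms above are richer). Here pi, mu, and var hold the current priors, means, and per-component variances.

```python
# One E-step / M-step iteration for a k-component spherical Gaussian mixture.
import numpy as np

def em_step(X, pi, mu, var):
    n, d = X.shape
    # E-step: responsibilities (soft assignments) under the current model
    dist2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)          # n x k
    log_p = np.log(pi) - 0.5 * (d * np.log(2 * np.pi * var) + dist2 / var)
    log_p -= log_p.max(axis=1, keepdims=True)                        # numerical stability
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate priors, means, and spherical variances
    Nj = resp.sum(axis=0)
    pi = Nj / n
    mu = (resp.T @ X) / Nj[:, None]
    var = (resp * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)).sum(0) / (d * Nj)
    return pi, mu, var, resp
```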
Model-based clustering is also used in a hierarchical framework: COBWEB, CLASSIT, and the development by Chiu et al. [2001] were already presented above. Another early example of conceptual clustering is the algorithm CLUSTER/2 [Michalski & Stepp 1983].
3.2. K-Medoids Methods
In k-medoids methods a cluster is represented by one of its points. We have already mentioned that this is an easy solution since it covers any attribute types and that medoids have embedded resistance against outliers, since peripheral cluster points do not affect them. When medoids are selected, clusters are defined as subsets of points close to the respective medoids, and the objective function is defined as the averaged distance or another dissimilarity measure between a point and its medoid.

Two early versions of k-medoid methods are the algorithm PAM (Partitioning Around Medoids) and the algorithm CLARA (Clustering LARge Applications) [Kaufman & Rousseeuw 1990]. PAM is an iterative optimization that combines relocation of points between prospective clusters with re-nominating the points as potential medoids. The guiding principle for the process is the effect on an objective function, which, obviously, is a costly strategy. CLARA uses several (five) samples, each with 40+2k points, which are each subjected to PAM. The whole dataset is assigned to the resulting medoids, the objective function is computed, and the best system of medoids is retained.

Further progress is associated with Ng & Han [1994], who introduced the algorithm CLARANS (Clustering Large Applications based upon RANdomized Search) in the context of clustering in spatial databases. The authors considered a graph whose nodes are the sets of k medoids and where an edge connects two nodes if they differ by exactly one medoid. While CLARA compares very few neighbors corresponding to a fixed small sample, CLARANS uses random search to generate neighbors by starting with an arbitrary node and randomly checking maxneighbor neighbors. If a neighbor represents a better partition, the process continues with this new node. Otherwise a local minimum is found, and the algorithm restarts until numlocal local minima are found (the value numlocal = 2 is recommended). The best node (set of medoids) is returned for the formation of a resulting partition. The complexity of CLARANS is $O(N^2)$ in terms of the number of points. Ester et al. [1995] extended CLARANS to spatial VLDB. They used R*-trees [Beckmann 1990] to relax the original requirement that all the data resides in core memory, which allowed focusing exploration on the relevant part of the database that resides at a branch of the whole data tree.
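A minimal sketch of the k-medoids objective and of evaluating a single medoid swap, in the spirit of PAM's guiding principle (a simplification with illustrative names, not the full relocation procedure):

```python
# D is the N x N dissimilarity matrix; medoids is a list of point indices.
# Each point contributes the dissimilarity to its closest medoid.
import numpy as np

def kmedoids_cost(D, medoids):
    return D[:, medoids].min(axis=1).sum()

def swap_gain(D, medoids, out_medoid, candidate):
    new_medoids = [candidate if m == out_medoid else m for m in medoids]
    return kmedoids_cost(D, medoids) - kmedoids_cost(D, new_medoids)   # > 0 means the swap helps
```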
3.3. K-Means Methods
The k-means algorithm [Hartigan 1975; Hartigan & Wong 1979] is by far the most popular clustering tool used in scientific and industrial applications. The name comes from representing each of the k clusters $C_j$ by the mean (or weighted average) $c_j$ of its points, the so-called centroid. While this obviously does not work well with categorical attributes, it makes good geometric and statistical sense for numerical attributes. The sum of discrepancies between a point and its centroid, expressed through an appropriate distance, is used as the objective function. For example, the $L_2$-norm based objective
function, the sum of the squares of errors between the points and the corresponding centroids, is equal to the total intra-cluster variance
$$E(C) = \sum_{j=1:k} \sum_{x_i \in C_j} \lVert x_i - c_j \rVert^2.$$
The sum of the squares of errors can be rationalized as (the negative of) the log-likelihood for a normally distributed mixture model and is widely used in statistics (SSE). Therefore, the k-means algorithm can be derived from the general probabilistic framework (see sub-section Probabilistic Clustering) [Mitchell 1997]. Note that only means are estimated. A simple modification would normalize individual errors by cluster radii (cluster standard deviation), which makes a lot of sense when clusters have different dispersions. An objective function based on the $L_2$-norm has many unique algebraic properties. For example, it coincides with the pair-wise errors
$$E(C) = \frac{1}{2} \sum_{j=1:k} \sum_{x_i, y_i \in C_j} \lVert x_i - y_i \rVert^2,$$
and with the difference between the total data variance and the inter-cluster variance. Therefore, cluster separation is achieved simultaneously with cluster tightness.

Two versions of k-means iterative optimization are known. The first version is similar to the EM algorithm and consists of two-step major iterations that (1) reassign all the points to their nearest centroids, and (2) recompute the centroids of the newly assembled groups. Iterations continue until a stopping criterion is achieved (for example, no reassignments happen). This version is known as Forgy's algorithm [Forgy 1965] and has many advantages:
- It easily works with any $L_p$-norm