DM Unit I: Introduction to Association


Data Mining

Introduction

Course Outline:
• Introduction: KDD Process
• Data Preprocessing
• Association Rule Mining
• Classification
• Clustering and Anomaly Detection
• Regression
• Case Studies
Why Data Mining?
• The Explosive Growth of Data: from terabytes to petabytes
– Data collection and data availability
• Automated data collection tools, database systems, Web, computerized society
– Major sources of abundant data
• Business: Web, e-commerce, transactions, stocks, …
• Science: Remote sensing, bioinformatics, scientific simulation, …
• Society and everyone: news, digital cameras, YouTube
• We are drowning in data, but starving for knowledge!
• “Necessity is the mother of invention”: hence data mining, the automated analysis of massive data
What Is Data Mining?
• Data mining (knowledge discovery from data)
– Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) patterns or knowledge from huge amounts of data.
– Data mining is the process of extracting knowledge or insights from large amounts of
data using various statistical and computational techniques. The data can be structured,
semi-structured or unstructured, and can be stored in various forms such as databases,
data warehouses, and data lakes.
• Alternative names
– Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern
analysis, data archeology, data dredging, information harvesting, business intelligence,
etc.
• Watch out: Is everything “data mining”?
– Simple search and query processing
– (Deductive) expert systems

Data Mining: Confluence of Multiple Disciplines

Why Not Traditional Data Analysis?
• Tremendous amount of data
– Algorithms must be highly scalable to handle terabytes of data
• High-dimensionality of data
– Micro-array data may have tens of thousands of dimensions
• High complexity of data
– Data streams and sensor data
– Time-series data, temporal data, sequence data
– Structured data, graphs, social networks and multi-linked data
– Heterogeneous databases and legacy databases
– Spatial, spatiotemporal, multimedia, text and Web data

Data Mining: On What Kinds of Data?
• Database-oriented data sets and applications
– Relational database, data warehouse, transactional database
• Advanced data sets and advanced applications
– Data streams and sensor data
– Time-series data, temporal data, sequence data (incl. bio-sequences)
– Structured data, graphs, social networks and multi-linked data
– Object-relational databases
– Heterogeneous databases and legacy databases
– Spatial data and spatiotemporal data
– Multimedia databases
– Text databases
– The World-Wide Web

Data Mining: On What Kinds of Data?
• Data warehouse

• Transactional Data

KDD (Knowledge Discovery from Data)

Steps in KDD

Architecture of DM System

Data Mining Functionalities
• Data mining functionalities are used to specify the kinds of patterns to be found in data mining tasks.
• In general, such tasks can be classified into two categories: descriptive and predictive.
• Descriptive mining tasks characterize properties of the data in a target data set.
• Predictive mining tasks perform induction on the current data in order to make predictions.
1. Class/Concept Description: Characterization and Discrimination
2. Associations, and correlations
3. Classification and regression
4. Clustering analysis
5. Outlier analysis
1. Class/Concept Description: Characterization and Discrimination
• It can be useful to describe individual classes and concepts in summarized, concise, and yet precise terms. Such descriptions of a class or a concept are called class/concept descriptions. These descriptions can be derived using:
  (1) data characterization, by summarizing the data of the class under study (often called the target class) in general terms;
  (2) data discrimination, by comparison of the target class with one or a set of comparative classes (often called the contrasting classes); or
  (3) both data characterization and discrimination.
• Data characterization is a summarization of the general characteristics or features of a target class of data. The data corresponding to the user-specified class are typically collected by a query.
2. Associations and Correlations (Frequent patterns)
buys(X, “computer”) ⇒ buys(X, “software”) [support = 1%,confidence = 50%] // single-dimensional association rule
age(X, “20..29”) ∧ income(X, “40K..49K”) ⇒ buys(X, “laptop”) [support = 2%, confidence = 60%] // multidimensional association rule

3. Classification and prediction


– The derived model may be represented in various forms, such as classification rules (i.e., IF-THEN rules), decision trees, mathematical formulae, or neural networks. A decision tree is a flowchart-like tree structure, where each node denotes a test on an attribute value, each branch represents an outcome of the test, and tree leaves represent classes or class distributions.

Clustering analysis
• Clustering analyzes data objects without
consulting class labels.
• Clustering can be used to generate class
labels for a group of data.
• The objects are clustered or grouped
based on the principle of maximizing the
intraclass similarity and minimizing the
interclass similarity.
• That is, clusters of objects are formed so
that objects within a cluster have high
similarity in comparison to one another,
but are rather dissimilar to objects in other
clusters
Outlier analysis
• A data set may contain objects that do not comply with
the general behaviour or model of the data. These data
objects are outliers.
• Many data mining methods discard outliers as noise or
exceptions. However, in some applications (e.g., fraud
detection) the rare events can be more interesting than
the more regularly occurring ones.
• The analysis of outlier data is referred to as outlier
analysis or anomaly mining.

Major Issues in Data Mining
• Mining methodology
– Mining different kinds of knowledge from diverse data types, e.g., bio, stream, Web
– Performance: efficiency, effectiveness, and scalability
– Pattern evaluation: the interestingness problem
– Incorporation of background knowledge
– Handling noise and incomplete data
– Parallel, distributed and incremental mining methods
– Integration of the discovered knowledge with existing knowledge: knowledge fusion
• User interaction
– Data mining query languages and ad-hoc mining
– Expression and visualization of data mining results
– Interactive mining of knowledge at multiple levels of abstraction
• Applications and social impacts
– Domain-specific data mining & invisible data mining
– Protection of data security, integrity, and privacy

Architecture: Typical Data Mining System
(Figure: layered architecture, top to bottom)
• Graphical User Interface
• Pattern Evaluation
• Data Mining Engine (coupled with a Knowledge Base)
• Database or Data Warehouse Server (data cleaning, integration, and selection)
• Data sources: Database, Data Warehouse, World-Wide Web, Other Info Repositories
Classification of Data Mining System

• Classification according to the application adopted, such as finance, telecommunication, DNA, stock markets, banking, retail, email, etc.
Data Mining Techniques
• Neural Networks
• Decision Trees
• Nearest Neighbour Method
• Cluster Analysis
• Rule Induction
• Genetic Algorithms
• Data Visualization
KDD Process: Summary
• Learning the application domain
– relevant prior knowledge and goals of application
• Creating a target data set: data selection
• Data cleaning and preprocessing: (may take 60% of effort!)
• Data reduction and transformation
– Find useful features, dimensionality/variable reduction, invariant representation
• Choosing functions of data mining
– summarization, classification, regression, association, clustering
• Choosing the mining algorithm(s)
• Data mining: search for patterns of interest
• Pattern evaluation and knowledge presentation
– visualization, transformation, removing redundant patterns, etc.
• Use of discovered knowledge

End of Introduction

Data Preprocessing
• Data preprocessing is an important step in data mining. In this process, raw data is converted into an understandable format and made ready for further analysis. The goal is to improve data quality and make it fit for the task at hand.
Tasks in Data Preprocessing
1. Data cleaning
Data cleaning helps us remove inaccurate, incomplete, and incorrect data from the dataset. Some techniques used in data cleaning are:
1.1 Handling missing values
• Ignore the tuple, if the tuple contains several attributes
with missing values
• Fill in the missing value manually.
• Use a global constant to fill in the missing value: Replace
all missing attribute values by the same constant, such as
a label like “Unknown".
• Use the attribute mean to fill in the missing value.
• Use the most probable value to fill in the missing value.
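A minimal pandas sketch of these options; the DataFrame, its column names, and the chosen fill strategies are illustrative assumptions, not part of the slides:

```python
import pandas as pd

# Toy dataset with missing values (illustrative).
df = pd.DataFrame({
    "age":    [25, None, 31, 40, None],
    "income": [50000, 42000, None, 61000, 58000],
    "city":   ["Pune", None, "Delhi", "Pune", "Mumbai"],
})

# Option 1: ignore (drop) tuples that contain missing values.
dropped = df.dropna()

# Option 2: fill with a global constant such as the label "Unknown".
df["city"] = df["city"].fillna("Unknown")

# Option 3: fill with the attribute mean.
df["income"] = df["income"].fillna(df["income"].mean())

# Option 4: fill with a "most probable" value; the median is used here
# as a simple stand-in for a proper prediction model.
df["age"] = df["age"].fillna(df["age"].median())

print(dropped)
print(df)
```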
1.2 Noisy Data
• Noisy data is data that cannot be interpreted by machines and contains unnecessary, faulty values.
• Binning − This method handles noisy data by smoothing it. The sorted data is divided into equal-sized bins, and each bin is then smoothed: smoothing by bin means (bin values are replaced by the bin mean), smoothing by bin medians (bin values are replaced by the bin median), or smoothing by bin boundaries (each value is replaced by the closer of the bin's minimum and maximum values); see the sketch after this list.
• Regression − Regression functions are used to smooth the data. Regression can be linear (one independent variable) or multiple (several independent variables).
• Clustering − Similar data are grouped into clusters; values that fall outside the clusters can be treated as outliers.
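A minimal sketch of equal-depth (equal-frequency) binning with smoothing by bin means and by bin boundaries; the twelve sorted sample values follow the common textbook price example and are otherwise an illustrative assumption:

```python
# Equal-depth binning followed by two smoothing options.
data = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = [data[i:i + 4] for i in range(0, len(data), 4)]   # 3 bins of 4 values each

# Smoothing by bin means: every value in a bin is replaced by the bin mean.
by_means = [[sum(b) / len(b)] * len(b) for b in bins]

# Smoothing by bin boundaries: every value is replaced by the closer of the
# bin's minimum and maximum values.
by_boundaries = [
    [min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
    for b in bins
]

print(bins)            # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(by_means)        # [[9.0, 9.0, 9.0, 9.0], [22.75, ...], [29.25, ...]]
print(by_boundaries)   # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```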
1.3 Inconsistent Data
DATA INTEGRATION
• Data integration is the process of merging data derived from different data sources into a consistent dataset.
• During data integration, several issues must be considered:
1. Schema integration and object matching (the entity identification problem), e.g., CustomerId / custID / CID
2. Redundancy (can be detected with the correlation coefficient)
3. Tuple duplication
4. Detection and resolution of data value conflicts (e.g., differing units)

Data Transformation
• Smoothing
• Aggregation
• Generalization of data
• Normalization
  1. Min-Max Normalization
  2. Z-Score Normalization
  3. Decimal Scaling Normalization
• Attribute Construction
Min – Max Normalization

Z – Score Normalization

Decimal Scaling Normalization
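The figures on these three slides are not reproduced here; the standard definitions are: min-max maps a value v of attribute A to (v − min_A) / (max_A − min_A), rescaled to a chosen new range; z-score maps it to (v − mean_A) / std_A; decimal scaling divides v by 10^j for the smallest integer j such that every scaled absolute value is below 1. A minimal NumPy sketch (the sample values are illustrative):

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

# Min-max normalization to the new range [0, 1].
new_min, new_max = 0.0, 1.0
minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Z-score normalization.
zscore = (v - v.mean()) / v.std()

# Decimal scaling: divide by 10^j, j being the smallest integer with max|v'| < 1.
j = int(np.ceil(np.log10(np.abs(v).max() + 1)))
decimal = v / 10 ** j

print(minmax)    # [0.    0.125 0.25  0.5   1.   ]
print(zscore)
print(decimal)   # [0.02 0.03 0.04 0.06 0.1 ]
```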

Data Reduction
• Data cube aggregation, which applies aggregation operations during data cube construction.
• Attribute subset selection, in which irrelevant, weakly relevant, or redundant attributes are detected and removed.
• Dimensionality reduction, in which encoding schemes are used to obtain a reduced or compressed representation of the data.
• Numerosity reduction, in which the data are replaced by alternative, smaller data representations.
• Discretization and concept hierarchy generation, in which raw attribute values are replaced by ranges or higher conceptual levels.

Attribute Subset Selection

Data Mining
Data Preprocessing

What is Data?
• Collection of data objects and their attributes
• An attribute is a property or characteristic of an object
  – Examples: eye color of a person, temperature, etc.
  – Attribute is also known as variable, field, characteristic, or feature
• A collection of attributes describes an object
  – Object is also known as record, point, case, sample, entity, or instance

Example (objects are rows, attributes are columns):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
Types of Attributes
• There are different types of attributes
– Nominal
• Examples: ID numbers, eye color, zip codes
– Ordinal
• Examples: rankings (e.g., taste of potato chips on a scale from 1-
10), grades, height in {tall, medium, short}
– Interval
• Examples: calendar dates, temperatures in Celsius or Fahrenheit.
– Ratio
• Examples: temperature in Kelvin, length, time, counts
Properties of Attribute Values
• The type of an attribute depends on which of the following
properties it possesses:
– Distinctness: =, ≠
– Order: <, >
– Addition: +, −
– Multiplication: *, /

– Nominal attribute: distinctness


– Ordinal attribute: distinctness & order
– Interval attribute: distinctness, order & addition
– Ratio attribute: all 4 properties
Nominal
  – Description: the values of a nominal attribute are just different names, i.e., nominal attributes provide only enough information to distinguish one object from another (=, ≠)
  – Examples: zip codes, employee ID numbers, eye color, sex: {male, female}
  – Operations: mode, entropy, contingency correlation, χ² test
Ordinal
  – Description: the values of an ordinal attribute provide enough information to order objects (<, >)
  – Examples: hardness of minerals, {good, better, best}, grades, street numbers
  – Operations: median, percentiles, rank correlation, run tests, sign tests
Interval
  – Description: the differences between values are meaningful, i.e., a unit of measurement exists (+, −)
  – Examples: calendar dates, temperature in Celsius or Fahrenheit
  – Operations: mean, standard deviation, Pearson's correlation, t and F tests
Ratio
  – Description: both differences and ratios are meaningful (*, /)
  – Examples: temperature in Kelvin, monetary quantities, counts, age, mass, length, electrical current
  – Operations: geometric mean, harmonic mean, percent variation
Discrete and Continuous Attributes
• Discrete Attribute
– Has only a finite or countably infinite set of values
– Examples: zip codes, counts, or the set of words in a collection of
documents
– Often represented as integer variables.
– Note: binary attributes are a special case of discrete attributes

• Continuous Attribute
– Has real numbers as attribute values
– Examples: temperature, height, or weight.
– Practically, real values can only be measured and represented using a
finite number of digits.
– Continuous attributes are typically represented as floating-point variables.
Types of data sets
• Record
– Data Matrix
– Document Data
– Transaction Data
• Graph
– World Wide Web
– Molecular Structures
• Ordered
– Spatial Data
– Temporal Data
– Sequential Data
– Genetic Sequence Data
Record Data
• Data that consists of a collection of records, each of which consists of a fixed set of attributes

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
Data Matrix
• If data objects have the same fixed set of numeric attributes, then
the data objects can be thought of as points in a multi-dimensional
space, where each dimension represents a distinct attribute

• Such a data set can be represented by an m-by-n matrix, where there are m rows, one for each object, and n columns, one for each attribute
Projection of x Load  Projection of y Load  Distance  Load  Thickness
10.23                 5.27                  15.22     2.7   1.2
12.65                 6.25                  16.22     2.2   1.1
Text Data
• Each document becomes a `term' vector,
– each term is a component (attribute) of the vector,
– the value of each component is the number of times the corresponding term
occurs in the document.

             team  coach  play  ball  score  game  win  lost  timeout  season
Document 1   3     0      5     0     2      6     0    2     0        2
Document 2   0     7      0     2     1      0     0    3     0        0
Document 3   0     1      0     0     1      2     2    0     3        0
Transaction Data
• A special type of record data, where
– each record (transaction) involves a set of items.
– For example, consider a grocery store. The set of products purchased
by a customer during one shopping trip constitute a transaction, while
the individual products that were purchased are the items.
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk
Graph Data
• Examples: Facebook graph and HTML Links
Ordered Data
• Genomic sequence data
GGTTCCGCCTTCAGCCCCGCGCC
CGCAGGGCCCGCCCCGCGCCGTC
GAGAAGGGCCCGCCTGGCGGGCG
GGGGGAGGCGGGGCCGCCCGAGC
CCAACCGAGTCCGACCAGGTGCC
CCCTCTGCTCGGCCTAGACCTGA
GCTCATTAGGCGGCAGCGGACAG
GCCAAGTAGAACACGCGAAGCGC
TGGGCTGCCTGCTGCGACCAGGG
Data Quality
• What kinds of data quality problems?
• How can we detect problems with the data?
• What can we do about these problems?
• Examples of data quality problems:
– Noise and outliers
– missing values
– duplicate data
Noise
• Noise refers to modification of original values
– Examples: distortion of a person’s voice when talking on a poor phone
and “snow” on television screen

(Figure: two sine waves, and the same two sine waves with noise added)
Outliers
• Outliers are data objects with characteristics that are
considerably different than most of the other data objects in
the data set
Missing Values
• Reasons for missing values
– Information is not collected
(e.g., people decline to give their age and weight)
– Attributes may not be applicable to all cases
(e.g., annual income is not applicable to children)

• Handling missing values


– Eliminate Data Objects
– Estimate Missing Values
– Ignore the Missing Value During Analysis
– Replace with all possible values (weighted by their probabilities)
Duplicate Data
• Data set may include data objects that are duplicates, or
almost duplicates of one another
– Major issue when merging data from heterogeneous sources

• Examples:
– Same person with multiple email addresses

• Data cleaning
– Process of dealing with duplicate data issues
Data Preprocessing
• Aggregation
• Sampling
• Dimensionality Reduction
• Feature subset selection
• Feature creation
• Discretization and Binarization
• Attribute Transformation
Aggregation
• Combining two or more attributes (or objects) into a
single attribute (or object)
• Purpose
– Data reduction
• Reduce the number of attributes or objects
– Change of scale
• Cities aggregated into regions, states, countries, etc
– More “stable” data
• Aggregated data tends to have less variability
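A small pandas sketch of aggregation as a change of scale (daily, city-level records rolled up to monthly, region-level totals); the table and its column names are illustrative assumptions:

```python
import pandas as pd

sales = pd.DataFrame({
    "date":   pd.to_datetime(["2024-01-03", "2024-01-17", "2024-02-05", "2024-02-20"]),
    "city":   ["Pune", "Mumbai", "Pune", "Nagpur"],
    "region": ["West", "West", "West", "Central"],
    "amount": [120.0, 250.0, 90.0, 60.0],
})

# Aggregate: days -> months and cities -> regions; many records become one.
monthly = sales.groupby([sales["date"].dt.to_period("M"), "region"])["amount"].sum()
print(monthly)
```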
Sampling
• Sampling is the main technique employed for data selection.
– It is often used for both the preliminary investigation of the data and the final
data analysis.

• Statisticians sample because obtaining the entire set of data of interest


is too expensive or time consuming.

• Sampling is used in data mining because processing the entire set of


data of interest is too expensive or time consuming.
Sample Size

(Figure: the same data set drawn with 8000 points, 2000 points, and 500 points)


Sampling …
• The key principle for effective sampling is the
following:
– using a sample will work almost as well as using the
entire data sets, if the sample is representative
– A sample is representative if it has approximately the
same property (of interest) as the original set of data
Types of Sampling
• Simple Random Sampling
– There is an equal probability of selecting any particular item

• Sampling without replacement


– As each item is selected, it is removed from the population

• Sampling with replacement


– Objects are not removed from the population as they are selected for the sample.
• In sampling with replacement, the same object can be picked up more than once

• Stratified sampling
– Split the data into several partitions; then draw random samples from each
partition
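A short pandas sketch of simple random sampling (with and without replacement) and stratified sampling; the DataFrame and the 10% sampling fraction are illustrative assumptions:

```python
import pandas as pd

df = pd.DataFrame({
    "income": range(1000),
    "class":  ["A"] * 700 + ["B"] * 250 + ["C"] * 50,   # imbalanced strata
})

# Simple random sampling without replacement.
srs = df.sample(n=100, replace=False, random_state=0)

# Simple random sampling with replacement: the same object may be picked more than once.
srs_wr = df.sample(n=100, replace=True, random_state=0)

# Stratified sampling: draw the same fraction from each partition (stratum).
stratified = df.groupby("class").sample(frac=0.1, random_state=0)

print(srs["class"].value_counts())
print(stratified["class"].value_counts())   # 70 A, 25 B, 5 C
```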
Curse of Dimensionality
• When dimensionality increases, data becomes increasingly sparse
in the space that it occupies

• Definitions of density and distance between points, which is


critical for clustering and outlier detection, become less
meaningful
Dimensionality Reduction
• Purpose:
– Avoid curse of dimensionality
– Reduce amount of time and memory required by data mining
algorithms
– Allow data to be more easily visualized
– May help to eliminate irrelevant features or reduce noise
• Techniques
– Principal Component Analysis
– Singular Value Decomposition
– Others: supervised and non-linear techniques
Discretization

(Figure: original data, and discretizations by equal interval width, equal frequency, and K-means)


Attribute Transformation
• A function that maps the entire set of values of
a given attribute to a new set of replacement
values such that each old value can be
identified with one of the new values
– Simple functions: x^k, log(x), e^x, |x|
– Standardization and Normalization
Similarity and Dissimilarity
• Similarity
– Numerical measure of how alike two data objects are.
– Is higher when objects are more alike.
– Often falls in the range [0,1]
• Dissimilarity
– Numerical measure of how different are two data objects
– Lower when objects are more alike
– Minimum dissimilarity is often 0
– Upper limit varies
• Proximity refers to a similarity or dissimilarity
Similarity/Dissimilarity for Simple Attributes
p and q are the attribute values for two data objects.
Euclidean Distance
• Euclidean distance between two data objects p and q:

  dist(p, q) = sqrt( Σ_{k=1..n} (p_k − q_k)² )

  where n is the number of dimensions (attributes) and p_k and q_k are, respectively, the kth attributes (components) of data objects p and q.

• Standardization is necessary, if scales differ.
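A minimal NumPy sketch of the Euclidean distance defined above, including the per-attribute standardization suggested for attributes with different scales; the points and the small data set are illustrative:

```python
import numpy as np

def euclidean(p, q):
    """dist(p, q) = sqrt( sum_k (p_k - q_k)^2 )."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((p - q) ** 2))

print(euclidean([0.0, 3.0], [4.0, 0.0]))   # 5.0

# If attribute scales differ, standardize each attribute over the data set first.
X = np.array([[0.0, 300.0], [4.0, 0.0], [2.0, 150.0], [1.0, 600.0]])
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(euclidean(Z[0], Z[1]))               # distance in standardized units
```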


Mahalanobis Distance
mahalanobis(p, q) = (p − q) Σ⁻¹ (p − q)ᵀ

where Σ is the covariance matrix of the input data X:

Σ_{j,k} = (1 / (n − 1)) · Σ_{i=1..n} (X_{ij} − mean(X_j)) (X_{ik} − mean(X_k))

For the red points (in the figure), the Euclidean distance is 14.7 and the Mahalanobis distance is 6.
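A NumPy sketch of the Mahalanobis distance as defined above, estimating the covariance matrix from a synthetic, correlated data set (np.cov uses the same n − 1 denominator as the formula); the data are illustrative, not the "red points" of the original figure:

```python
import numpy as np

def mahalanobis(p, q, X):
    """(p - q) Sigma^{-1} (p - q)^T, with Sigma estimated from the data X."""
    sigma = np.cov(X, rowvar=False)                   # covariance matrix of the input data
    diff = np.asarray(p, float) - np.asarray(q, float)
    return float(diff @ np.linalg.inv(sigma) @ diff)  # squared form, as on the slide

# Correlated 2-D data: Euclidean distance ignores the correlation,
# Mahalanobis distance accounts for it.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base, 0.9 * base + rng.normal(scale=0.3, size=(200, 1))])

p, q = X[0], X[1]
print(np.sqrt(np.sum((p - q) ** 2)))   # Euclidean distance
print(mahalanobis(p, q, X))            # Mahalanobis distance
```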


Cosine Similarity
• If d1 and d2 are two document vectors, then
cos( d1, d2 ) = (d1 • d2) / ||d1|| ||d2|| ,
where • indicates vector dot product and || d || is the length of vector d.
• Example:
d1 = 3 2 0 5 0 0 0 2 0 0
d2 = 1 0 0 0 0 0 0 1 0 2

d1 • d2= 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
||d1|| = (3*3+2*2+0*0+5*5+0*0+0*0+0*0+2*2+0*0+0*0)^0.5 = (42)^0.5 = 6.481
||d2|| = (1*1+0*0+0*0+0*0+0*0+0*0+0*0+1*1+0*0+2*2)^0.5 = (6)^0.5 = 2.449

cos( d1, d2 ) = .3150
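A quick NumPy check of the worked example above:

```python
import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 4))   # 0.315
```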


Similarity Between Binary Vectors
• Common situation is that objects, p and q, have only
binary attributes
• Compute similarities using the following quantities
M01 = the number of attributes where p was 0 and q was 1
M10 = the number of attributes where p was 1 and q was 0
M00 = the number of attributes where p was 0 and q was 0
M11 = the number of attributes where p was 1 and q was 1
• Simple Matching and Jaccard Coefficients
SMC = number of matches / number of attributes
= (M11 + M00) / (M01 + M10 + M11 + M00)

J = number of 11 matches / number of not-both-zero attributes values


= (M11) / (M01 + M10 + M11)
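A small sketch computing SMC and the Jaccard coefficient from the four counts defined above; the two binary vectors are illustrative:

```python
import numpy as np

p = np.array([1, 0, 0, 0, 0, 0, 1, 0, 0, 0])
q = np.array([0, 0, 0, 0, 0, 0, 1, 0, 0, 1])

m11 = int(np.sum((p == 1) & (q == 1)))
m10 = int(np.sum((p == 1) & (q == 0)))
m01 = int(np.sum((p == 0) & (q == 1)))
m00 = int(np.sum((p == 0) & (q == 0)))

smc = (m11 + m00) / (m01 + m10 + m11 + m00)   # matches / all attributes
jac = m11 / (m01 + m10 + m11)                 # 1-1 matches / not-both-zero attributes

print(smc, jac)   # 0.8 0.333...
```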
Correlation
• Correlation measures the linear relationship between objects
• To compute correlation, we standardize data objects, p and q, and then
take their dot product

pk = ( pk − mean( p)) / std ( p)

qk = ( qk − mean( q)) / std ( q)


correlatio n( p, q) = p • q
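A NumPy sketch of this computation; the dot product of the standardized objects is divided by n − 1 here so that the result matches np.corrcoef (the sample vectors are illustrative):

```python
import numpy as np

p = np.array([3.0, 5.0, 1.0, 9.0, 7.0])
q = np.array([2.0, 4.0, 0.0, 10.0, 8.0])

# Standardize both data objects.
ps = (p - p.mean()) / p.std(ddof=1)
qs = (q - q.mean()) / q.std(ddof=1)

corr = ps @ qs / (len(p) - 1)    # Pearson correlation of p and q

print(corr)
print(np.corrcoef(p, q)[0, 1])   # same value from NumPy's built-in
```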
Visually Evaluating Correlation

Scatter plots
showing the
similarity from –1
to 1.
End of Data Preprocessing
Data Mining
Association Rules

Association Rule Mining
• Given a set of transactions, find rules that will predict the
occurrence of an item based on the occurrences of other items in
the transaction

Market-Basket transactions:

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

Example of Association Rules:
{Diaper} → {Beer}
{Milk, Bread} → {Eggs, Coke}
{Beer, Bread} → {Milk}

Implication means co-occurrence, not causality!
Definition: Frequent Itemset
• Itemset
  – A collection of one or more items
    • Example: {Milk, Bread, Diaper}
  – k-itemset: an itemset that contains k items
• Support count (σ)
  – Frequency of occurrence of an itemset
  – E.g. σ({Milk, Bread, Diaper}) = 2
• Support (s)
  – Fraction of transactions that contain an itemset
  – E.g. s({Milk, Bread, Diaper}) = 2/5
• Frequent Itemset
  – An itemset whose support is greater than or equal to a minsup threshold

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
Association Rule
  – An implication expression of the form X → Y, where X and Y are itemsets
  – Example: {Milk, Diaper} → {Beer}

Rule Evaluation Metrics
  – Support (s): fraction of transactions that contain both X and Y
  – Confidence (c): measures how often items in Y appear in transactions that contain X

Example: {Milk, Diaper} → {Beer}
  s = σ({Milk, Diaper, Beer}) / |T| = 2/5 = 0.4
  c = σ({Milk, Diaper, Beer}) / σ({Milk, Diaper}) = 2/3 = 0.67

TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke
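A small sketch that recomputes the support and confidence of {Milk, Diaper} → {Beer} from the five transactions above:

```python
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset):
    """sigma(itemset): number of transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t)

X, Y = {"Milk", "Diaper"}, {"Beer"}
s = support_count(X | Y) / len(transactions)   # 2/5 = 0.4
c = support_count(X | Y) / support_count(X)    # 2/3 = 0.67
print(s, round(c, 2))
```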
Association Rule Mining Task
• Given a set of transactions T, the goal of association rule mining is
to find all rules having
– support ≥ minsup threshold
– confidence ≥ minconf threshold

• Brute-force approach:
– List all possible association rules
– Compute the support and confidence for each rule
– Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
Mining Association Rules
• Two-step approach:
1. Frequent Itemset Generation
– Generate all itemsets whose support ≥ minsup
2. Rule Generation
– Generate high confidence rules from each frequent
itemset, where each rule is a binary partitioning of a
frequent itemset
• Frequent itemset generation is still
computationally expensive
Frequent Itemset Generation
null

A B C D E

AB AC AD AE BC BD BE CD CE DE

ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE

ABCD ABCE ABDE ACDE BCDE

ABCDE

Given d items, there are 2^d possible candidate itemsets
Frequent Itemset Generation
• Brute-force approach:
– Each itemset in the lattice is a candidate frequent itemset
– Count the support of each candidate by scanning the database
TID  Items
1    Bread, Milk
2    Bread, Diaper, Beer, Eggs
3    Milk, Diaper, Beer, Coke
4    Bread, Milk, Diaper, Beer
5    Bread, Milk, Diaper, Coke

– Match each of the N transactions (of maximum width w) against every one of the M candidates
– Complexity ~ O(NMw) ⇒ expensive, since M = 2^d !!!
Frequent Itemset Generation Strategies
• Reduce the number of candidates (M)
– Complete search: M = 2^d
– Use pruning techniques to reduce M
• Reduce the number of transactions (N)
– Reduce size of N as the size of itemset increases
– Used by DHP and vertical-based mining algorithms

• Reduce the number of comparisons (NM)


– Use efficient data structures to store the candidates or transactions
– No need to match every candidate against every transaction
Reducing Number of Candidates
• Apriori principle:
– If an itemset is frequent, then all of its subsets must also be
frequent

• Apriori principle holds due to the following property of the


support measure:
∀ X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
– Support of an itemset never exceeds the support of its subsets
– This is known as the anti-monotone property of support
Illustrating Apriori Principle
null

A B C D E

AB AC AD AE BC BD BE CD CE DE

ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE

ABCD ABCE ABDE ACDE BCDE

ABCDE

(If an itemset in the lattice is found to be infrequent, all of its supersets are pruned.)
Illustrating Apriori Principle
Items (1-itemsets):
Item    Count
Bread   4
Coke    2
Milk    4
Beer    3
Diaper  4
Eggs    1

Minimum Support = 3

Pairs (2-itemsets) (no need to generate candidates involving Coke or Eggs):
Itemset          Count
{Bread, Milk}    3
{Bread, Beer}    2
{Bread, Diaper}  3
{Milk, Beer}     2
{Milk, Diaper}   3
{Beer, Diaper}   3

Triplets (3-itemsets):
Itemset                Count
{Bread, Milk, Diaper}  3

If every subset is considered: 6C1 + 6C2 + 6C3 = 41
With support-based pruning: 6 + 6 + 1 = 13
Apriori Algorithm
• Method:
– Let k=1
– Generate frequent itemsets of length 1
– Repeat until no new frequent itemsets are identified
• Generate length (k+1) candidate itemsets from length k frequent
itemsets
• Prune candidate itemsets containing subsets of length k that are
infrequent
• Count the support of each candidate by scanning the DB
• Eliminate candidates that are infrequent, leaving only those that
are frequent
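An illustrative, unoptimized implementation of this loop (generate length-(k+1) candidates, prune those with an infrequent length-k subset, count supports by scanning the database), run on the five-transaction example used earlier:

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Return {frozenset(itemset): support count} for all frequent itemsets."""
    counts = {}
    for t in transactions:                       # k = 1: count individual items
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent = dict(frequent)

    k = 1
    while frequent:
        items = sorted({i for s in frequent for i in s})
        # Candidates: (k+1)-itemsets all of whose k-subsets are frequent (pruning).
        candidates = [
            frozenset(c) for c in combinations(items, k + 1)
            if all(frozenset(sub) in frequent for sub in combinations(c, k))
        ]
        # Count candidate supports with one pass over the database.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
print(apriori(transactions, minsup_count=3))
```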
Factors Affecting Complexity
• Choice of minimum support threshold
– lowering support threshold results in more frequent itemsets
– this may increase number of candidates and max length of frequent itemsets
• Dimensionality (number of items) of the data set
– more space is needed to store support count of each item
– if number of frequent items also increases, both computation and I/O costs may
also increase
• Size of database
– Apriori makes multiple passes, run time of algorithm increase with number of
transactions
• Average transaction width
– This may increase max length of frequent itemsets and traversals of hash tree
(number of subsets in a transaction increases with its width)
Rule Generation
• How to efficiently generate rules from frequent itemsets?
– In general, confidence does not have an anti-monotone property
c(ABC →D) can be larger or smaller than c(AB →D)

– But confidence of rules generated from the same itemset has an anti-
monotone property
– e.g., L = {A,B,C,D}:
c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
• Confidence is anti-monotone w.r.t. number of items on the RHS of the rule
Rule Generation for Apriori Algorithm
Lattice of rules for the frequent itemset {A,B,C,D}:

ABCD ⇒ { }

BCD ⇒ A    ACD ⇒ B    ABD ⇒ C    ABC ⇒ D

CD ⇒ AB    BD ⇒ AC    BC ⇒ AD    AD ⇒ BC    AC ⇒ BD    AB ⇒ CD

D ⇒ ABC    C ⇒ ABD    B ⇒ ACD    A ⇒ BCD

(If a rule, e.g., BCD ⇒ A, has low confidence, all rules below it in the lattice, i.e., those whose consequent contains A, are pruned.)
Rule Generation for Apriori Algorithm
• Candidate rule is generated by merging two rules that share the same prefix in the rule consequent
  – e.g., join(CD ⇒ AB, BD ⇒ AC) produces the candidate rule D ⇒ ABC
  – Prune rule D ⇒ ABC if its subset AD ⇒ BC does not have high confidence
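A short sketch of rule generation for one frequent itemset, growing consequents level by level and dropping any consequent whose rule falls below minconf (so none of its extensions are ever considered); the support counts in the lookup table are hypothetical values, not taken from the slides:

```python
from itertools import combinations

# Hypothetical support counts for the frequent itemset ABCD and all of its subsets.
support = {
    frozenset("ABCD"): 3, frozenset("ABC"): 3, frozenset("ABD"): 4,
    frozenset("ACD"): 4, frozenset("BCD"): 5, frozenset("AB"): 4,
    frozenset("AC"): 5, frozenset("AD"): 5, frozenset("BC"): 6,
    frozenset("BD"): 6, frozenset("CD"): 6, frozenset("A"): 6,
    frozenset("B"): 7, frozenset("C"): 8, frozenset("D"): 8,
}

def rules_from_itemset(itemset, minconf):
    itemset = frozenset(itemset)
    rules, consequents = [], [frozenset([i]) for i in itemset]
    while consequents:
        survivors = []
        for Y in consequents:
            X = itemset - Y
            if not X:
                continue
            conf = support[itemset] / support[X]
            if conf >= minconf:                       # keep rule X -> Y
                rules.append((set(X), set(Y), round(conf, 2)))
                survivors.append(Y)
        # Merge surviving consequents to grow the RHS by one item.
        consequents = list({a | b for a, b in combinations(survivors, 2)
                            if len(a | b) == len(a) + 1})
    return rules

for lhs, rhs, conf in rules_from_itemset("ABCD", minconf=0.6):
    print(lhs, "->", rhs, "confidence =", conf)
```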
Pattern Evaluation
• Association rule algorithms tend to produce too many rules
– many of them are uninteresting or redundant
– Redundant if {A,B,C} → {D} and {A,B} → {D}
have same support & confidence

• Interestingness measures can be used to prune/rank the derived


patterns

• In the original formulation of association rules, support &


confidence are the only measures used
Application of Interestingness Measure
Computing Interestingness Measure
• Given a rule X → Y, information needed to compute rule interestingness
can be obtained from a contingency table
Contingency table for supports of X → Y:

        Y      ¬Y
X       f11    f10    f1+
¬X      f01    f00    f0+
        f+1    f+0    |T|

Used to define various measures: support, confidence, lift, Gini, J-measure, etc.
Statistical Independence
• Population of 1000 students
– 600 students know how to swim (S)
– 700 students know how to bike (B)
– 420 students know how to swim and bike (S,B)

– P(S∧B) = 420/1000 = 0.42
– P(S) × P(B) = 0.6 × 0.7 = 0.42

– P(S∧B) = P(S) × P(B) => Statistical independence
– P(S∧B) > P(S) × P(B) => Positively correlated
– P(S∧B) < P(S) × P(B) => Negatively correlated
Statistical-based Measures
• Measures that take into account statistical dependence:

  Lift = P(Y|X) / P(Y)

  Interest = P(X,Y) / ( P(X) P(Y) )

  PS = P(X,Y) − P(X) P(Y)

  φ-coefficient = ( P(X,Y) − P(X) P(Y) ) / sqrt( P(X) [1 − P(X)] P(Y) [1 − P(Y)] )
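A small sketch computing these four measures from a 2×2 contingency table; the counts are those of the Tea → Coffee example that follows:

```python
from math import sqrt

# Contingency counts for X = Tea, Y = Coffee (from the example below).
f11, f10, f01, f00 = 15, 5, 75, 5
n = f11 + f10 + f01 + f00

p_x, p_y, p_xy = (f11 + f10) / n, (f11 + f01) / n, f11 / n

lift     = (p_xy / p_x) / p_y                  # P(Y|X) / P(Y)
interest = p_xy / (p_x * p_y)                  # equals lift for a 2x2 table
ps       = p_xy - p_x * p_y
phi      = (p_xy - p_x * p_y) / sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))

print(round(lift, 4), round(interest, 4), round(ps, 4), round(phi, 4))
# 0.8333 0.8333 -0.03 -0.25
```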
Example: Lift/Interest
         Coffee   ¬Coffee
Tea      15       5         20
¬Tea     75       5         80
         90       10        100

Association Rule: Tea → Coffee


Confidence = P(Coffee|Tea) = 0.75
but P(Coffee) = 0.9
⇒ Lift = 0.75/0.9 = 0.8333 (< 1, therefore Tea and Coffee are negatively associated)
• There are lots of measures proposed in the literature.
• Some measures are good for certain applications, but not for others.
• What criteria should we use to determine whether a measure is good or bad?
• What about Apriori-style support-based pruning?
Subjective Interestingness Measure
• Objective measure:
– Rank patterns based on statistics computed from data
– e.g., 21 measures of association (support, confidence, Laplace, Gini,
mutual information, Jaccard, etc).

• Subjective measure:
– Rank patterns according to user’s interpretation
• A pattern is subjectively interesting if it contradicts the
expectation of a user (Silberschatz & Tuzhilin)
• A pattern is subjectively interesting if it is actionable
(Silberschatz & Tuzhilin)
Interestingness via Unexpectedness
• Need to model expectation of users (domain knowledge)

(Legend from the figure: "+" marks a pattern expected to be frequent and "−" a pattern expected to be infrequent; each pattern is then found to be frequent or infrequent. Patterns whose observed frequency matches the expectation are expected patterns; those that contradict it are unexpected patterns.)

• Need to combine expectation of users with evidence from data (i.e., extracted patterns)
End of Association Rule
