ML - Unit 2 - Part I



UNIT- 2
DECISION TREE LEARNING

Decision tree learning is a method for approximating discrete-valued target functions, in
which the learned function is represented by a decision tree.

Topic: 1 - DECISION TREE REPRESENTATION

 Decision trees classify instances by sorting them down the tree from the root to some
leaf node, which provides the classification of the instance.
 Each node in the tree specifies a test of some attribute of the instance, and each branch
descending from that node corresponds to one of the possible values for this attribute.
 An instance is classified by starting at the root node of the tree, testing the attribute
specified by this node, then moving down the tree branch corresponding to the value
of the attribute in the given example. This process is then repeated for the sub tree
rooted at the new node.

FIGURE: A decision tree for the concept Play Tennis. An example is classified by sorting it
through the tree to the appropriate leaf node, then returning the classification associated with
this leaf


 Decision trees represent a disjunction of conjunctions of constraints on the attribute
values of instances.
 Each path from the tree root to a leaf corresponds to a conjunction of attribute tests,
and the tree itself to a disjunction of these conjunctions.

For example, the decision tree shown in above figure corresponds to the expression
(Outlook = Sunny ∧ Humidity = Normal)
∨ (Outlook = Overcast)
∨ (Outlook = Rain ∧ Wind = Weak)
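
As a quick illustration (a hypothetical Python sketch, not part of the original notes), this expression can be evaluated directly as a Boolean function of an instance's attribute values:

```python
def play_tennis(outlook: str, humidity: str, wind: str) -> bool:
    """Evaluate the PlayTennis tree as a disjunction of conjunctions.

    Each disjunct corresponds to one root-to-leaf path that ends in a Yes leaf.
    """
    return ((outlook == "Sunny" and humidity == "Normal")
            or outlook == "Overcast"
            or (outlook == "Rain" and wind == "Weak"))

print(play_tennis("Sunny", "Normal", "Weak"))   # True  (Yes)
print(play_tennis("Rain", "High", "Strong"))    # False (No)
```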

Topic 2- APPROPRIATE PROBLEMS FOR DECISION TREE LEARNING

Decision tree learning is generally best suited to problems with the following characteristics:

1. Instances are represented by attribute-value pairs – Instances are described by a
fixed set of attributes and their values.

2. The target function has discrete output values – The decision tree assigns a Boolean
classification (e.g., yes or no) to each example. Decision tree methods easily extend to
learning functions with more than two possible output values.

3. The training data may contain errors – Decision tree learning methods are robust to
errors, both errors in classifications of the training examples and errors in the attribute
values that describe these examples.

4. The training data may contain missing attribute values – Decision tree methods can
be used even when some training examples have unknown values


Topic: 3-THE BASIC DECISION TREE LEARNING ALGORITHM

The basic algorithm is ID3, which learns decision trees by constructing them top-down.

ID3(Examples, Target_attribute, Attributes)

Examples are the training examples. Target_attribute is the attribute whose value is to
be predicted by the tree. Attributes is a list of other attributes that may be tested by the
learned decision tree. Returns a decision tree that correctly classifies the given Examples.

Algorithm:
 Create a Root node for the tree
 If all Examples are positive, Return the single-node tree Root, with label = +
 If all Examples are negative, Return the single-node tree Root, with label = -
 If Attributes is empty, Return the single-node tree Root, with label = most common value
of Target_attribute in Examples

 Otherwise Begin
 A ← the attribute from Attributes that best* classifies Examples
 The decision attribute for Root ← A
 For each possible value, vi, of A,

Add a new tree branch below Root, corresponding to the test A = vi

Let Examples_vi be the subset of Examples that have value vi for A

If Examples_vi is empty
 Then below this new branch add a leaf node with label = most common
value of Target_attribute in Examples
 Else below this new branch add the subtree
ID3(Examples_vi, Target_attribute, Attributes – {A})
 End
 Return Root

* The best attribute is the one with highest information gain

TABLE: Summary of the ID3 algorithm specialized to learning Boolean-valued functions. ID3
is a greedy algorithm that grows the tree top-down, at each node selecting the attribute that
best classifies the local training examples. This process continues until the tree perfectly
classifies the training examples, or until all attributes have been used.
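
A minimal Python sketch of this procedure is shown below. It assumes each example is a dict mapping attribute names to values, passes the attribute-selection measure in as a function (information gain is defined in the next subsection), and, as a simplification, creates branches only for attribute values that actually occur in Examples:

```python
from collections import Counter

def id3(examples, target, attributes, choose_attribute):
    """Return a nested-dict decision tree: {attribute: {value: subtree_or_label}}."""
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:                    # all examples positive, or all negative
        return labels[0]
    if not attributes:                           # no attributes left to test
        return Counter(labels).most_common(1)[0][0]
    best = choose_attribute(examples, attributes, target)
    tree = {best: {}}
    for value in {ex[best] for ex in examples}:  # one branch per observed value of `best`
        subset = [ex for ex in examples if ex[best] == value]
        tree[best][value] = id3(subset, target,
                                [a for a in attributes if a != best],
                                choose_attribute)
    return tree
```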


Which Attribute Is the Best Classifier?

 The central choice in the ID3 algorithm is selecting which attribute to test at each node
in the tree.
 A statistical property called information gain measures how well a given attribute
separates the training examples according to their target classification.
 ID3 uses this information gain measure to select among the candidate attributes at each
step while growing the tree.

ENTROPY MEASURES HOMOGENEITY OF EXAMPLES:

To define information gain, we begin by defining a measure called entropy. Entropy
measures the impurity of a collection of examples.

Given a collection S, containing positive and negative examples of some target concept, the
entropy of S relative to this Boolean classification is

Entropy(S) = − p+ log2 p+ − p− log2 p−

Where,
p+ is the proportion of positive examples in S
p− is the proportion of negative examples in S

Example:
Suppose S is a collection of 14 examples of some Boolean concept, including 9 positive and
5 negative examples. Then the entropy of S relative to this Boolean classification is

Entropy([9+, 5−]) = − (9/14) log2 (9/14) − (5/14) log2 (5/14) = 0.940

 The entropy is 0 if all members of S belong to the same class
 The entropy is 1 when the collection contains an equal number of positive and
negative examples
 If the collection contains unequal numbers of positive and negative examples, the
entropy is between 0 and 1
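
A small sketch of the entropy measure in Python (the labels may be any class values, not only + and −):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a collection of class labels, e.g. ['Yes', 'No', ...]."""
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

print(entropy(['Yes'] * 9 + ['No'] * 5))   # 0.940...  (the [9+, 5-] collection above)
print(entropy(['Yes'] * 7 + ['No'] * 7))   # 1.0       (equal numbers of each class)
```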


INFORMATION GAIN MEASURES THE EXPECTED REDUCTION IN ENTROPY:

 Information gain is the expected reduction in entropy caused by partitioning the
examples according to an attribute.
 The information gain, Gain(S, A), of an attribute A relative to a collection of examples
S, is defined as

Gain(S, A) = Entropy(S) − Σ (v ∈ Values(A)) (|Sv| / |S|) Entropy(Sv)

where Values(A) is the set of all possible values for attribute A, and Sv is the subset of S for
which attribute A has value v.

Example: Information gain

Let Values(Wind) = {Weak, Strong}

S       = [9+, 5−]
SWeak   = [6+, 2−]
SStrong = [3+, 3−]

Information gain of attribute Wind:

Gain(S, Wind) = Entropy(S) − (8/14) Entropy(SWeak) − (6/14) Entropy(SStrong)
              = 0.940 − (8/14)(0.811) − (6/14)(1.00)
              = 0.048
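
A sketch of Gain(S, A) built on the entropy() helper above; the worked Wind example can be reproduced from nothing more than the Wind values and PlayTennis labels:

```python
def information_gain(examples, attribute, target):
    """Gain(S, A): expected reduction in entropy from partitioning S on attribute A."""
    labels = [ex[target] for ex in examples]
    gain = entropy(labels)                     # entropy() from the sketch above
    for value in {ex[attribute] for ex in examples}:
        subset = [ex[target] for ex in examples if ex[attribute] == value]
        gain -= len(subset) / len(examples) * entropy(subset)
    return gain

# S = [9+, 5-], S_Weak = [6+, 2-], S_Strong = [3+, 3-], as above:
S = ([{'Wind': 'Weak',   'PlayTennis': 'Yes'}] * 6 +
     [{'Wind': 'Weak',   'PlayTennis': 'No'}]  * 2 +
     [{'Wind': 'Strong', 'PlayTennis': 'Yes'}] * 3 +
     [{'Wind': 'Strong', 'PlayTennis': 'No'}]  * 3)
print(round(information_gain(S, 'Wind', 'PlayTennis'), 3))   # 0.048
```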


An Illustrative Example

 To illustrate the operation of ID3, consider the learning task represented by the training
examples of below table.
 Here the target attribute is PlayTennis, which can have values yes or no for different days.
 Consider the first step through the algorithm, in which the topmost node of the
decision tree is created.

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

 ID3 determines the information gain for each candidate attribute (i.e., Outlook,
Temperature, Humidity, and Wind), then selects the one with the highest information gain.


 The information gain values for all four attributes are:

Gain(S, Outlook)     = 0.246
Gain(S, Humidity)    = 0.151
Gain(S, Wind)        = 0.048
Gain(S, Temperature) = 0.029

 According to the information gain measure, the Outlook attribute provides the best
prediction of the target attribute, Play Tennis, over the training examples. Therefore,
Outlook is selected as the decision attribute for the root node, and branches are created
below the root for each of its possible values i.e., Sunny, Overcast, and Rain.
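
These numbers can be checked by continuing the earlier sketches (reusing the information_gain and id3 helpers defined above) on the 14 examples from the table:

```python
rows = [
    ('Sunny',    'Hot',  'High',   'Weak',   'No'),
    ('Sunny',    'Hot',  'High',   'Strong', 'No'),
    ('Overcast', 'Hot',  'High',   'Weak',   'Yes'),
    ('Rain',     'Mild', 'High',   'Weak',   'Yes'),
    ('Rain',     'Cool', 'Normal', 'Weak',   'Yes'),
    ('Rain',     'Cool', 'Normal', 'Strong', 'No'),
    ('Overcast', 'Cool', 'Normal', 'Strong', 'Yes'),
    ('Sunny',    'Mild', 'High',   'Weak',   'No'),
    ('Sunny',    'Cool', 'Normal', 'Weak',   'Yes'),
    ('Rain',     'Mild', 'Normal', 'Weak',   'Yes'),
    ('Sunny',    'Mild', 'Normal', 'Strong', 'Yes'),
    ('Overcast', 'Mild', 'High',   'Strong', 'Yes'),
    ('Overcast', 'Hot',  'Normal', 'Weak',   'Yes'),
    ('Rain',     'Mild', 'High',   'Strong', 'No'),
]
names = ['Outlook', 'Temperature', 'Humidity', 'Wind', 'PlayTennis']
examples = [dict(zip(names, row)) for row in rows]

for attribute in names[:-1]:
    print(attribute, round(information_gain(examples, attribute, 'PlayTennis'), 3))
# Outlook 0.247, Temperature 0.029, Humidity 0.152, Wind 0.048
# (these match the values quoted above up to rounding)

def best_gain(exs, attrs, target):
    return max(attrs, key=lambda a: information_gain(exs, a, target))

print(id3(examples, 'PlayTennis', names[:-1], best_gain))
# {'Outlook': {'Overcast': 'Yes',
#              'Sunny': {'Humidity': {'High': 'No', 'Normal': 'Yes'}},
#              'Rain': {'Wind': {'Weak': 'Yes', 'Strong': 'No'}}}}   (key order may vary)
```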


The same process then repeats at each non-leaf descendant node, using only the training
examples associated with that node. For example, for the Rain branch:

SRain = {D4, D5, D6, D10, D14}

Gain(SRain, Humidity)    = 0.970 − (2/5)(1.0) − (3/5)(0.917) = 0.019
Gain(SRain, Temperature) = 0.970 − (0/5)(0.0) − (2/5)(1.0) − (3/5)(0.918) = 0.019
Gain(SRain, Wind)        = 0.970 − (3/5)(0.0) − (2/5)(0.0) = 0.970

Wind has the highest information gain, so it is selected as the decision attribute for this node.

Why Prefer Short Hypotheses?

Occam's razor

 Occam's razor is the problem-solving principle that the simplest solution tends to be
the right one. When presented with competing hypotheses to solve a problem, one
should select the solution with the fewest assumptions.

 Occam's razor: "Prefer the simplest hypothesis that fits the data."


Topic: 4- ISSUES IN DECISION TREE LEARNING

Issues in learning decision trees include:

1. Avoiding Overfitting the Data
   Reduced-error pruning
   Rule post-pruning
2. Incorporating Continuous-Valued Attributes
3. Alternative Measures for Selecting Attributes
4. Handling Training Examples with Missing Attribute Values
5. Handling Attributes with Differing Costs

1. Avoiding Overfitting the Data

 The ID3 algorithm grows each branch of the tree just deeply enough to perfectly
classify the training examples, but this can lead to difficulties when there is noise in the
data, or when the number of training examples is too small to produce a representative
sample of the true target function. In either case, the algorithm can produce trees that
overfit the training examples.

 Definition – Overfit: Given a hypothesis space H, a hypothesis h ∈ H is said to overfit
the training data if there exists some alternative hypothesis h' ∈ H, such that h has
smaller error than h' over the training examples, but h' has a smaller error than h over
the entire distribution of instances.

The below figure illustrates the impact of overfitting in a typical application of decision tree
learning.

 The horizontal axis of this plot indicates the total number of nodes in the decision tree,
as the tree is being constructed. The vertical axis indicates the accuracy of predictions
made by the tree.


 The solid line shows the accuracy of the decision tree over the training examples. The
broken line shows the accuracy measured over an independent set of test examples.
 The accuracy of the tree over the training examples increases monotonically as the tree
is grown. The accuracy measured over the independent test examples first increases,
then decreases.
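
The same qualitative behaviour is easy to reproduce with an off-the-shelf learner. The experiment below is an illustrative sketch, assuming scikit-learn is available, and uses one of its bundled datasets rather than the PlayTennis data: training accuracy keeps rising as the tree is allowed to grow, while held-out test accuracy levels off and eventually drops.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

for depth in (1, 2, 4, 8, None):                 # None = grow the tree out fully
    tree = DecisionTreeClassifier(criterion='entropy', max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(depth,
          round(tree.score(X_train, y_train), 3),   # accuracy on the training examples
          round(tree.score(X_test, y_test), 3))     # accuracy on the independent test set
```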

How can it be possible for tree h to fit the training examples better than h', yet perform more
poorly over subsequent examples?

1. Overfitting can occur when the training examples contain random errors or noise
2. Overfitting can occur when small numbers of examples are associated with leaf nodes

Noisy Training Example

 Example 15: <Sunny, Hot, Normal, Strong, −>
 The example is noisy because the correct label is +
 The previously constructed tree misclassifies it

Approaches to avoiding overfitting in decision tree learning


 Pre-pruning (avoidance): Stop growing the tree earlier, before it reaches the point where it
perfectly classifies the training data
 Post-pruning (recovery): Allow the tree to overfit the data, and then post-prune the tree

Criteria used to determine the correct final tree size:

 Use a separate set of examples, distinct from the training examples, to evaluate the utility of
post-pruning nodes from the tree
 Use all the available data for training, but apply a statistical test to estimate whether
expanding (or pruning) a particular node is likely to produce an improvement beyond
the training set
 Use a measure of the complexity for encoding the training examples and the decision tree,
halting tree growth when this encoding size is minimized (the Minimum Description Length
principle)


Reduced-Error Pruning:

 Reduced-error pruning considers each of the decision nodes in the tree to be a
candidate for pruning
 Pruning a decision node consists of removing the subtree rooted at that node, making
it a leaf node, and assigning it the most common classification of the training examples
affiliated with that node
 Nodes are removed only if the resulting pruned tree performs no worse than the original
over the validation set
 Reduced-error pruning has the effect that any leaf node added due to coincidental
regularities in the training set is likely to be pruned, because these same coincidences are
unlikely to occur in the validation set

The impact of reduced-error pruning on the accuracy of the decision tree is illustrated in the
figure below.

 The additional line in the figure shows accuracy over the test examples as the tree is pruned.
When pruning begins, the tree is at its maximum size and lowest accuracy over the test set.
As pruning proceeds, the number of nodes is reduced and accuracy over the test set
increases.
 The available data has been split into three subsets: the training examples, the validation
examples used for pruning the tree, and a set of test examples used to provide an unbiased
estimate of accuracy over future unseen examples. The plot shows accuracy over the
training and test sets.
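
A minimal sketch of reduced-error pruning over the nested-dict trees produced by the id3 sketch earlier (a bottom-up variant; attribute values not seen during training are not handled):

```python
from collections import Counter

def classify(tree, example):
    """Follow the nested-dict tree down to a leaf label."""
    while isinstance(tree, dict):
        attribute = next(iter(tree))
        tree = tree[attribute][example[attribute]]
    return tree

def accuracy(tree, examples, target):
    return sum(classify(tree, ex) == ex[target] for ex in examples) / len(examples)

def reduced_error_prune(tree, train_examples, validation_examples, target):
    """Replace a subtree with a leaf whenever the pruned tree performs no worse
    on the validation examples that reach that node."""
    if not isinstance(tree, dict):
        return tree                              # already a leaf
    attribute = next(iter(tree))
    for value, subtree in tree[attribute].items():
        tree[attribute][value] = reduced_error_prune(
            subtree,
            [ex for ex in train_examples if ex[attribute] == value],
            [ex for ex in validation_examples if ex[attribute] == value],
            target)
    # Candidate leaf: most common classification of the training examples at this node.
    leaf = Counter(ex[target] for ex in train_examples).most_common(1)[0][0]
    if not validation_examples:                  # no validation evidence: prefer the simpler tree
        return leaf
    if accuracy(leaf, validation_examples, target) >= accuracy(tree, validation_examples, target):
        return leaf
    return tree
```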


Pros and Cons

Pro: Produces the smallest version of the most accurate subtree of T.
Con: Uses less data to construct T. If the available data is too limited to afford holding out a
separate validation set, withholding examples from training may actually make the error worse.

Rule Post-Pruning

Rule post-pruning is a successful method for finding high-accuracy hypotheses.

 Rule post-pruning involves the following steps:

 Infer the decision tree from the training set, growing the tree until the training data is fit
as well as possible and allowing overfitting to occur.
 Convert the learned tree into an equivalent set of rules by creating one rule for each path from
the root node to a leaf node.
 Prune (generalize) each rule by removing any preconditions that result in improving its
estimated accuracy.
 Sort the pruned rules by their estimated accuracy, and consider them in this sequence
when classifying subsequent instances.

Converting a Decision Tree into Rules


For example, consider the PlayTennis decision tree shown earlier. Its leftmost path is
translated into the rule:
IF (Outlook = Sunny) ^ (Humidity = High)
THEN Play Tennis = No

Given the above rule, rule post-pruning would consider removing the preconditions
(Outlook = Sunny) and (Humidity = High)

 It would select whichever of these pruning steps produced the greatest improvement in
estimated rule accuracy, then consider pruning the second precondition as a further
pruning step.
 No pruning step is performed if it reduces the estimated rule accuracy.
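
A small sketch of the tree-to-rules conversion, again assuming the nested-dict representation used in the earlier sketches; each returned rule is a list of (attribute, value) preconditions together with the leaf's classification:

```python
def tree_to_rules(tree, preconditions=()):
    """One rule per root-to-leaf path: ([(attribute, value), ...], label)."""
    if not isinstance(tree, dict):               # reached a leaf: emit one rule
        return [(list(preconditions), tree)]
    attribute = next(iter(tree))
    rules = []
    for value, subtree in tree[attribute].items():
        rules.extend(tree_to_rules(subtree, preconditions + ((attribute, value),)))
    return rules

# The leftmost path of the PlayTennis tree becomes:
# ([('Outlook', 'Sunny'), ('Humidity', 'High')], 'No')
```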

There are three main advantages of converting the decision tree to rules before pruning:

1. Converting to rules allows distinguishing among the different contexts in which a
decision node is used. Because each distinct path through the decision tree node
produces a distinct rule, the pruning decision regarding that attribute test can be made
differently for each path.
2. Converting to rules removes the distinction between attribute tests that occur near the
root of the tree and those that occur near the leaves. Thus, it avoids messy bookkeeping
issues such as how to reorganize the tree if the root node is pruned while retaining part
of the subtree below this test.
3. Converting to rules improves readability. Rules are often easier for people to understand.

2. Incorporating Continuous-Valued Attributes

Continuous-valued decision attributes can be incorporated into the learned tree.


There are two methods for handling continuous attributes:

1. Define new discrete-valued attributes that partition the continuous attribute value into a
discrete set of intervals.
E.g., {high ≡ Temp > 35º C, med ≡ 10º C < Temp ≤ 35º C, low ≡ Temp ≤ 10º C}

2. Use thresholds for splitting nodes.
E.g., a threshold a on attribute A produces the two subsets A ≤ a and A > a


What threshold-based Boolean attribute should be defined based on Temperature?

 Pick a threshold, c, that produces the greatest information gain
 In the current example, there are two candidate thresholds, corresponding to the values
of Temperature at which the value of PlayTennis changes: (48+60)/2 = 54 and (80+90)/2 = 85.
 The information gain can then be computed for each of the candidate attributes,
Temperature > 54 and Temperature > 85, and the best can be selected (Temperature > 54).
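
A sketch of this threshold selection, reusing the entropy() helper from earlier. The six (Temperature, PlayTennis) pairs below are an assumption reconstructed from the two candidate thresholds quoted above, not data given in these notes:

```python
def best_threshold(values, labels):
    """Pick the threshold c maximizing the information gain of the test value > c.

    Candidate thresholds are midpoints between adjacent sorted values whose
    class labels differ, as described above.
    """
    pairs = sorted(zip(values, labels))
    candidates = [(a + b) / 2
                  for (a, la), (b, lb) in zip(pairs, pairs[1:]) if la != lb]

    def gain(c):
        above = [label for value, label in pairs if value > c]
        below = [label for value, label in pairs if value <= c]
        return (entropy(labels)
                - len(above) / len(labels) * entropy(above)
                - len(below) / len(labels) * entropy(below))

    return max(candidates, key=gain)

temps  = [40, 48, 60, 72, 80, 90]                     # assumed Temperature values
labels = ['No', 'No', 'Yes', 'Yes', 'Yes', 'No']      # assumed PlayTennis labels
print(best_threshold(temps, labels))                  # 54.0  (i.e. Temperature > 54)
```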

3. Alternative Measures for Selecting Attributes


 The problem: if an attribute has many values, Gain will tend to select it.
 Example: consider the attribute Date, which has a very large number of possible
values (e.g., March 4, 1979).
 If this attribute is added to the Play Tennis data, it would have the highest information
gain of any of the attributes. This is because Date alone perfectly predicts the target
attribute over the training data. Thus, it would be selected as the decision attribute for
the root node of the tree and lead to a tree of depth one, which perfectly classifies the
training data.
 This decision tree with root node Date is not a useful predictor because, although it perfectly
separates the training data, it predicts poorly on subsequent examples.

One Approach: Use Gain Ratio instead of Gain

The gain ratio measure penalizes attributes such as Date by incorporating a term called split
information, which is sensitive to how broadly and uniformly the attribute splits the data:

SplitInformation(S, A) = − Σ (i = 1 to c) (|Si| / |S|) log2 (|Si| / |S|)

GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)

Where, Si is the subset of S for which attribute A has value vi (one of the c values of A)
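
A sketch of these two measures, reusing the entropy and information_gain helpers from the earlier sketches (note that SplitInformation is simply the entropy of S with respect to the values of A, and can be zero when A takes a single value):

```python
def split_information(examples, attribute):
    """SplitInformation(S, A): entropy of S with respect to the values of A."""
    return entropy([ex[attribute] for ex in examples])

def gain_ratio(examples, attribute, target):
    """GainRatio(S, A) = Gain(S, A) / SplitInformation(S, A)."""
    return (information_gain(examples, attribute, target)
            / split_information(examples, attribute))

# An attribute like Date, with a distinct value for every one of the 14 examples,
# has SplitInformation = log2(14) ~ 3.81, so its gain ratio is heavily penalized.
```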


4. Handling Training Examples with Missing Attribute Values

The available data may contain missing values for some attributes.
Example: Medical diagnosis
 <Fever = true, Blood-Pressure = normal, …, Blood-Test = ?, …>
 Sometimes values are truly unknown, sometimes recording them has low priority (or their cost is too high)

Strategies for dealing with a missing attribute value (the first two are sketched below):

 If node n tests A, assign the most common value of A among the other training examples
sorted to node n
 Assign the most common value of A among the other training examples with the same target value
 Assign a probability pi to each of the possible values vi of A, rather than simply assigning the
most common value to A(x)
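
A sketch of the first two strategies (the third, probability-weighting strategy is more involved and not shown). Attribute values equal to `missing` are treated as unknown; the function names and interface are assumptions of this sketch, not a fixed ID3 API:

```python
from collections import Counter

def most_common_value(examples, attribute, missing=None):
    counts = Counter(ex[attribute] for ex in examples if ex[attribute] != missing)
    return counts.most_common(1)[0][0]

def fill_missing(examples, attribute, target=None, missing=None):
    """Replace missing values of `attribute` with the most common observed value.

    If `target` is given, the replacement is the most common value among examples
    with the same target classification (the second strategy above).
    """
    filled = []
    for ex in examples:
        ex = dict(ex)                                  # copy; leave the original untouched
        if ex[attribute] == missing:
            pool = (examples if target is None
                    else [e for e in examples if e[target] == ex[target]])
            ex[attribute] = most_common_value(pool, attribute, missing)
        filled.append(ex)
    return filled
```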

5. Handling Attributes with Differing Costs


 In some learning tasks the instance attributes may have associated costs.
 For example: in learning to classify medical diseases, patients are described in terms
of attributes such as Temperature, BiopsyResult, Pulse, BloodTestResults, etc.
 These attributes vary significantly in their costs, both in terms of monetary cost and cost to
patient comfort.
 We would prefer decision trees that use low-cost attributes where possible, relying on
high-cost attributes only when needed to produce reliable classifications.

How to Learn A Consistent Tree with Low Expected Cost?

One approach is to replace Gain with a cost-normalized gain measure.

Examples of such normalization functions:

 Gain²(S, A) / Cost(A)                      (Tan and Schlimmer)
 (2^Gain(S, A) − 1) / (Cost(A) + 1)^w       (Nunez), where w ∈ [0, 1] determines the relative
importance of cost versus information gain
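
A sketch of how either measure could be plugged in in place of Gain, reusing the information_gain helper from earlier; the per-attribute cost is assumed to be supplied by the caller:

```python
def tan_schlimmer_gain(examples, attribute, target, cost):
    """Gain^2(S, A) / Cost(A)."""
    return information_gain(examples, attribute, target) ** 2 / cost

def nunez_gain(examples, attribute, target, cost, w=0.5):
    """(2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w in [0, 1]."""
    return (2 ** information_gain(examples, attribute, target) - 1) / (cost + 1) ** w
```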
