Data Mining



Evolution of Database Technology
Why Data Mining?
• The Explosive Growth of Data: from terabytes to
petabytes
– Data collection and data availability
• Automated data collection tools, database systems, Web,
computerized society
– Major sources of abundant data
• Business: Web, e-commerce, transactions, stocks, …
• Science: Remote sensing, bioinformatics, scientific
simulation, …
• Society and everyone: news, digital cameras, YouTube
• We are drowning in data, but starving for knowledge!
• Data mining— Automated analysis of massive data sets
Introduction

• Data is growing at a phenomenal rate


• Users expect more sophisticated information
• How?

UNCOVER HIDDEN INFORMATION


DATA MINING

Data Mining Definition
• Finding hidden information in a huge store of data
• Fit data to a model
• Similar terms
– Exploratory data analysis
– Data driven discovery
– Inductive learning

What Is Data Mining?
• Data mining (knowledge discovery in databases):
– Extraction of interesting (non-trivial, implicit, previously
unknown and potentially useful) information or patterns from
data in large databases
• Alternative names
– Knowledge discovery(mining) in databases (KDD), knowledge
extraction, data/pattern analysis, data archeology, data
dredging, information harvesting, business intelligence, etc.
• What is not data mining?
– (Deductive) query processing.
– Expert systems or small ML/statistical programs

Potential Applications
• Market analysis and management
– target marketing, CRM, market basket analysis, cross selling,
market segmentation
• Risk analysis and management
– Forecasting, customer retention, quality control, competitive
analysis
• Fraud detection and management
• Text mining (news group, email, documents) and Web analysis.
– Intelligent query answering

Market Analysis and Management
• Where are the data sources for analysis?
– Credit card transactions, loyalty cards, discount coupons,
customer complaint calls,
– Target marketing (Find clusters of “model” customers who
share the same characteristics: interest, income level,
spending habits, etc.)
• Determine customer purchasing patterns over time
• Cross-market analysis
– Associations/co-relations between product sales
– Prediction based on the association information

Market Analysis and Management
• Customer profiling
– data mining can tell you what types of customers buy what
products (clustering or classification)
• Identifying customer requirements
– identifying the best products for different customers
– use prediction to find what factors will attract new
customers
• Provides summary information
– various multidimensional summary reports
– statistical summary information (data central tendency and
variation)

Fraud Detection and Management
• Applications
– widely used in health care, retail, credit card services,
telecommunications (phone card fraud), etc.
• Approach
– use historical data to build models of fraudulent behavior and
use data mining to help identify similar instances
• Examples
– auto insurance: detect a group of people who stage accidents
to collect on insurance
– money laundering: detect suspicious money transactions (US
Treasury's Financial Crimes Enforcement Network)
– medical insurance: detect professional patients and ring of
doctors and ring of references
Data Mining vs. KDD
• Knowledge Discovery in Databases (KDD): process of
finding useful information and patterns in data.
• Data Mining: Use of algorithms to extract the
information and patterns derived by the KDD process.

KDD Process: Several Key Steps
• Learning the application domain
– relevant prior knowledge and goals of application
• Creating a target data set: data selection
• Data cleaning and preprocessing: (may take 60% of effort!)
• Data reduction and transformation
– Find useful features, dimensionality/variable reduction,
invariant representation
• Choosing functions of data mining
– summarization, classification, regression, association,
clustering
• Choosing the mining algorithm(s)
• Data mining: search for patterns of interest
• Pattern evaluation and knowledge presentation
• Use of discovered knowledge
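The key steps above can be sketched as a minimal pipeline. This is an illustrative stdlib sketch, not code from the slides; all function names and the sample records are made up.

```python
# Minimal sketch of the KDD steps: cleaning -> reduction -> mining.
# All names and data here are illustrative, not from any library.

def clean(records):
    """Data cleaning: drop records with missing values."""
    return [r for r in records if None not in r.values()]

def reduce_features(records, keep):
    """Data reduction: keep only the useful attributes."""
    return [{k: r[k] for k in keep} for r in records]

def mine(records, target):
    """Stand-in for the mining step: count value frequencies."""
    counts = {}
    for r in records:
        counts[r[target]] = counts.get(r[target], 0) + 1
    return counts

raw = [
    {"age": 25, "city": "Pune",  "buys": "yes"},
    {"age": None, "city": "Pune", "buys": "no"},   # dropped in cleaning
    {"age": 40, "city": "Delhi", "buys": "yes"},
]
patterns = mine(reduce_features(clean(raw), ["city", "buys"]), "buys")
print(patterns)  # {'yes': 2}
```

The remaining steps (pattern evaluation, presentation, use of the knowledge) sit on top of such a pipeline.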
Knowledge Discovery (KDD) Process

– Data mining is the core of the knowledge discovery process
– Flow: Databases → Data Cleaning → Data Integration → Data
Warehouse → Data Selection → Task-relevant Data → Data Mining →
Pattern Evaluation → Knowledge
Data Mining and Business Intelligence
Increasing potential to support business decisions (bottom to top),
with the typical user at each layer:
• Making Decisions — End User
• Data Presentation: visualization techniques — Business Analyst
• Data Mining: information discovery — Data Analyst
• Data Exploration: statistical analysis, querying and reporting — Data Analyst
• Data Warehouses / Data Marts: OLAP, MDA — DBA
• Data Sources: paper, files, information providers, database systems, OLTP
Data Mining: Confluence of Multiple Disciplines
Data mining draws on database technology, statistics, machine
learning, visualization, information science, and other disciplines.
Data Mining Development
Techniques contributed by several fields:
•Databases: relational data model, SQL, association rule
algorithms, data warehousing, scalability techniques
•Information retrieval: similarity measures, hierarchical
clustering, IR systems, imprecise queries, textual data, Web
search engines
•Statistics: Bayes theorem, regression analysis, EM algorithm,
K-means clustering, time series analysis
•Algorithms: algorithm design techniques, algorithm analysis,
data structures
•Machine learning: neural networks, decision tree algorithms
DATA MINING, VESIT, M. Vijayalakshmi
Data Mining Functionalities
• Multidimensional concept description: Characterization and
discrimination
– Generalize, summarize, and contrast data characteristics,
e.g., dry vs. wet regions
• Frequent patterns, association, correlation vs. causality
– Milk  Fruit [0.5%, 75%] (Correlation or causality?)
• Classification and prediction
– Construct models (functions) that describe and distinguish
classes or concepts for future prediction
• E.g., classify countries based on (climate), or classify cars
based on (gas mileage)
– Predict some unknown or missing numerical values
Data Mining Functionalities (2)
• Cluster analysis
– Class label is unknown: Group data to form new classes, e.g.,
cluster houses to find distribution patterns
– Maximizing intra-class similarity & minimizing interclass
similarity
• Outlier analysis
– Outlier: Data object that does not comply with the general
behavior of the data
– Noise or exception? - fraud detection, rare events analysis
• Trend and evolution analysis
– Trend and deviation: e.g., regression analysis
– Sequential pattern mining: digital camera  large SD Mem.
– Periodicity analysis
– Similarity-based analysis
• Other pattern-directed or statistical analyses
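Outlier analysis, mentioned above, has a common statistical baseline: flag objects that lie far from the mean. The z-score cutoff sketch below is my illustration, not a method from the slides' source, and the data is made up.

```python
# Simple z-score outlier check: flag values more than z_cut sample
# standard deviations away from the mean. Illustrative only.
import statistics

def outliers(xs, z_cut=2.0):
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) / sd > z_cut]

data = [10, 12, 11, 13, 12, 11, 95]  # 95 does not comply with the rest
print(outliers(data))  # [95]
```

Whether such a point is noise or a genuine exception (fraud, a rare event) is the analyst's call, as the slide notes.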
Data Mining Issues
• Human Interaction
• Overfitting
• Outliers
• Interpretation
• Visualization
• Large Datasets
• High Dimensionality
• Multimedia Data
• Missing Data
• Irrelevant Data
• Noisy Data
• Changing Data
• Integration
• Application

Major Issues in Data Mining (1)
• Mining methodology and user interaction
– Mining different kinds of knowledge in databases
– Interactive mining of knowledge at multiple levels of
abstraction
– Incorporation of background knowledge
– Data mining query languages and ad-hoc data mining
– Expression and visualization of data mining results
– Handling noise and incomplete data
– Pattern evaluation: the interestingness problem

• Performance and scalability


– Efficiency and scalability of data mining algorithms
– Parallel, distributed and incremental mining methods

Major Issues in Data Mining (2)
• Issues relating to the diversity of data types
– Handling relational and complex types of data
– Mining information from heterogeneous databases and
global information systems (WWW)
• Issues related to applications and social impacts
– Application of discovered knowledge
• Domain-specific data mining tools
• Intelligent query answering
• Process control and decision making
– Integration of the discovered knowledge with existing
knowledge: A knowledge fusion problem
– Protection of data security, integrity, and privacy

Social Implications of DM
• Privacy
• Profiling
• Unauthorized use

Data Mining Metrics
• Usefulness
• Return on Investment (ROI)
• Accuracy
• Space/Time

Are All the “Discovered” Patterns Interesting?
• Data mining may generate thousands of patterns:
– Human-centered, query-based, focused mining
• Interestingness measures
– A pattern is interesting if it is easily understood by humans,
valid on new or test data with some degree of certainty,
potentially useful, novel, or validates some hypothesis that a
user seeks to confirm
• Objective vs. subjective interestingness measures
– Objective: based on statistics and structures of patterns, e.g.,
support, confidence, etc.
– Subjective: based on user’s belief in the data, e.g.,
unexpectedness, novelty, actionability, etc.
Find All and Only Interesting Patterns?
• Find all the interesting patterns: Completeness
– Can a data mining system find all the interesting patterns? Do
we need to find all of the interesting patterns?
– Heuristic vs. exhaustive search
– Association vs. classification vs. clustering

• Search for only interesting patterns: An optimization problem


– Can a data mining system find only the interesting patterns?
• First generate all the patterns and then filter out the
uninteresting ones
• Generate only the interesting patterns—mining query
optimization
Why Data Mining Query Language?
• Automated vs. query-driven?
– Finding all the patterns autonomously in a database?—
unrealistic because the patterns could be too many but
uninteresting
• Data mining should be an interactive process
– User directs what to be mined
• Users must be provided with a set of primitives to be used to
communicate with the data mining system
• Incorporating these primitives in a data mining query language
– More flexible user interaction
– Foundation for design of graphical user interface
– Standardization of data mining industry and practice
Primitives that Define a Data Mining Task
• 1. Task-relevant data
– Database or data warehouse name
– Database tables or data warehouse cubes
– Condition for data selection
– Relevant attributes or dimensions
– Data grouping criteria
• 2. Type of knowledge to be mined
– Characterization, discrimination, association, classification,
prediction, clustering, outlier analysis, other data mining
tasks
• 3. Background knowledge
• 4. Pattern interestingness measurements
• 5. Visualization/presentation of discovered patterns
Primitive 3: Background Knowledge
• A typical kind of background knowledge: Concept hierarchies
• Schema hierarchy
– E.g., street < city < province_or_state < country
• Set-grouping hierarchy
– E.g., {20-39} = young, {40-59} = middle_aged
• Operation-derived hierarchy
– email address: [email protected]
login-name < department < university < country
• Rule-based hierarchy
– low_profit_margin (X) <= price(X, P1) and cost (X, P2) and (P1
- P2) < $50
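Two of the hierarchy kinds above translate directly into code. The sketch below is my own illustration; the thresholds mirror the slide's examples ({20-39} = young, {40-59} = middle_aged; profit margin under $50 is "low").

```python
# Illustrative sketch of two concept-hierarchy kinds from the slide.

def age_group(age):
    """Set-grouping hierarchy: raw age -> concept level."""
    if 20 <= age <= 39:
        return "young"
    if 40 <= age <= 59:
        return "middle_aged"
    return "other"

def low_profit_margin(price, cost):
    """Rule-based hierarchy: low_profit_margin(X) holds
    when price - cost < $50."""
    return (price - cost) < 50

print(age_group(45))               # middle_aged
print(low_profit_margin(120, 90))  # True (margin $30 < $50)
```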
Primitive 4: Pattern Interestingness Measure
• Simplicity
e.g., (association) rule length, (decision) tree size
• Certainty
e.g., confidence, P(A|B) = #(A and B)/ #(B), classification
reliability or accuracy, certainty factor, rule strength, rule
quality, discriminating weight, etc.
• Utility
potential usefulness, e.g., support (association), noise
threshold (description)
• Novelty
not previously known, surprising (used to remove redundant
rules, e.g., Illinois vs. Champaign rule implication support
ratio)
Primitive 5: Presentation of Discovered Patterns

• Different backgrounds/usages may require different forms of


representation
– E.g., rules, tables, crosstabs, pie/bar chart, etc.
• Concept hierarchy is also important
– Discovered knowledge might be more understandable when
represented at high level of abstraction
– Interactive drill up/down, pivoting, slicing and dicing
provide different perspectives to data
• Different kinds of knowledge require different representation:
association, classification, clustering, etc.
DMQL—A Data Mining Query Language
• Motivation
– A DMQL can provide the ability to support ad-hoc and
interactive data mining
– By providing a standardized language like SQL
• Hope to achieve an effect similar to the one SQL has had on
relational databases
• Foundation for system development and evolution
• Facilitate information exchange, technology transfer,
commercialization and wide acceptance
• Design
– DMQL is designed with the primitives described earlier
Integration of Data Mining and Data Warehousing

• Coupling between data mining systems, DBMS, and data
warehouse systems
– No coupling, loose coupling, semi-tight coupling, tight
coupling
• On-line analytical mining data
– integration of mining and OLAP technologies
• Interactive mining multi-level knowledge
– Necessity of mining knowledge and patterns at different
levels of abstraction by drilling/rolling, pivoting,
slicing/dicing, etc.
• Integration of multiple mining functions
– Characterized classification, first clustering and then
association
Architecture: Typical Data Mining System
Layered components, top to bottom:
• Graphical User Interface
• Pattern Evaluation
• Data Mining Engine (supported by a Knowledge Base)
• Database or Data Warehouse Server
• Data cleaning, integration, and selection
• Data sources: databases, data warehouse, World-Wide Web,
other information repositories
An OLAM System Architecture
• Layer 4 — User Interface: mining query in, mining result out
(user GUI API)
• Layer 3 — OLAP/OLAM: OLAM engine and OLAP engine (data cube API)
• Layer 2 — MDDB: multidimensional database with metadata
• Layer 1 — Data Repository: databases and data warehouse, fed by
data cleaning, data integration, and filtering (database API)
Data Mining Models and Tasks

© Prentice Hall
Predictive models

can provide answers to questions like


• Which products should be promoted to a particular
customer?
• What is the probability that a certain customer will
respond to a planned promotion?
• Which securities will be most profitable to buy or sell
during the next trading session?
• What is the likelihood that a certain customer will default
or pay back on schedule?
• What is the appropriate medical diagnosis for this patient?
Descriptive models

Sample demographic clusters/ subgroups


• Men who buy diapers also buy beer
• People who buy scuba gear take Australian vacations
• People who purchase skim milk also tend to buy whole
wheat bread
• Customers who responded to a particular offer are likely to
respond to a similar offer
Basic Data Mining Tasks
• Classification maps data into predefined groups or
classes
– Supervised learning
– Pattern recognition
– Prediction
• Regression is used to map a data item to a real
valued prediction variable.
• Clustering groups similar data together into clusters.
– Unsupervised learning
– Segmentation
– Partitioning

Basic Data Mining Tasks
(cont’d)
• Summarization maps data into subsets with
associated simple descriptions.
– Characterization
– Generalization
• Link Analysis uncovers relationships among data.
– Affinity Analysis
– Association Rules
– Sequential Analysis determines sequential patterns.

Ex: Time Series Analysis
• Example: Stock Market
• Predict future values
• Determine similar patterns over time
• Classify behavior

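A toy version of "determine similar patterns over time": smoothing a price series with a simple moving average before comparison. This is my own illustrative sketch with made-up prices, not a method from the slides.

```python
# Simple moving average: a common first step for comparing and
# smoothing time series such as stock prices. Illustrative data.

def moving_average(series, window):
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

prices = [10, 12, 11, 13, 15, 14]
print(moving_average(prices, 3))  # [11.0, 12.0, 13.0, 14.0]
```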
Data Mining Techniques

Classification – discover rules that define whether an
item or event belongs to a particular subset or
class of data

• Involves building a model, then predicting classifications
• e.g. matching buyer attributes with product attributes
 predict customers likely to buy a particular product
next month
 targeted promotional contact or mailing list
Example: Using Decision Trees to Predict
Classifications - ALICE d'ISoft
A Credit Officer wishes to identify customers who had trouble
paying back their loans.
[Decision tree chart: starting from the parent node, each node shows
the number of customers in the database, N (the number and % of
customers who had trouble paying back the loan), and Y (the number
and % who had no trouble), with the success rate Y and failure rate
N shown graphically.]

http://www.alice-soft.com/html/tech_dt.htm
Example: Using Decision Trees to Predict
Classifications - ALICE d'ISoft
Split the records according to the most discriminating attribute:
housing type.
Example classification rule: people who rent their home and
earn more than 7853 Francs have an 86% success rate.

http://www.alice-soft.com/html/tech_dt.htm
Data Mining Techniques
Association – or link analysis – search all details or
transactions from operational systems for patterns
with a high probability of repetition
• Results in the development of an associative algorithm that
correlates one set of events or items with another set of
events or items
• e.g. of association rules or patterns:
– 83% of all records that contain items A, B, C also
contain items D and E
– 83% - confidence factor
Data Mining Techniques

Another example of link analysis:


• Market basket analysis – analysing the products contained
in a purchaser’s basket and then using an associative rule
to compare hundreds of thousands of baskets
• 29% of the time that the brand X blender is sold, the
customer also buys a set of kitchen tumblers
• 68% of the time that a customer buys beverages, the
customer also buys pretzels
>Determine the location and content of promotional or
end-of-aisle displays
Market Basket Analysis
• This is the most widely used and, in many ways, most
successful data mining algorithm.
• It essentially determines what products people purchase
together.
• Stores can use this information to place these products in
the same area.
• Direct marketers can use this information to determine
which new products to offer to their current customers.
• Inventory policies can be improved if reorder points reflect
the demand for the complementary products.
Association Rules for Market Basket
Analysis
Rules are written in the form “left-hand side implies right-hand
side” and an example is:

Yellow Peppers IMPLIES Red Peppers, Bananas, Bakery

To make effective use of a rule, three numeric measures about


that rule must be considered: (1) support, (2) confidence and
(3) lift
Measures of Predictive Ability
• Support refers to the percentage of baskets where the rule
was true (both the left- and right-hand side products were
present): LEFT ⇒ RIGHT
• Confidence measures what percentage of baskets that
contained the left-hand product also contained the right:
LEFT ⇒ RIGHT
• Lift measures how much more frequently the left-hand item is
found with the right-hand item than would be expected if the
two were independent: LEFT ⇒ RIGHT

Marakas, G.M. (2002) Decision Support Systems in the 21st Century. 2nd Ed, Prentice Hall
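The three measures can be computed directly from a list of baskets. A minimal sketch, using toy baskets I made up (not the slide's data); lift here is confidence divided by the baseline frequency of the right-hand item.

```python
# Support, confidence, and lift for a rule LEFT => RIGHT,
# computed over a list of baskets (sets of items). Toy data.

def measures(baskets, left, right):
    n = len(baskets)
    both = sum(1 for b in baskets if left in b and right in b)
    has_left = sum(1 for b in baskets if left in b)
    has_right = sum(1 for b in baskets if right in b)
    support = both / n                       # % of all baskets with both
    confidence = both / has_left             # % of LEFT baskets with RIGHT
    lift = confidence / (has_right / n)      # vs. RIGHT's baseline rate
    return support, confidence, lift

baskets = [
    {"yellow pepper", "banana", "bread"},
    {"yellow pepper", "banana"},
    {"banana", "milk"},
    {"bread"},
]
s, c, l = measures(baskets, "yellow pepper", "banana")
print(s, c, round(l, 2))  # 0.5 1.0 1.33
```

A lift above 1 means the pairing occurs more often than chance would suggest.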
An Example

Rule:         Green Peppers   Red Peppers   Yellow Peppers
              ⇒ Bananas       ⇒ Bananas     ⇒ Bananas
Lift              1.37            1.43           1.17
Support           3.77            8.58          22.12
Confidence       85.96           89.47          73.09

• The confidence suggests people buying any kind of pepper
also buy bananas.
• Green peppers sell in about the same quantities as red or
yellow, but are not as predictive.
Market Basket Analysis Methodology

• We first need a list of transactions and what was


purchased. This is pretty easily obtained these days from
scanning cash registers.
• Next, we choose a list of products to analyse, and tabulate
how many times each was purchased with the others.
• The diagonal of the table shows how often a product is
purchased in any combination, and the off-diagonal cells show
which combinations were bought together.
A Convenience Store Example

Consider the following simple example about five


transactions at a convenience store:

Transaction 1: Frozen pizza, cola, milk


Transaction 2: Milk, potato chips
Transaction 3: Cola, frozen pizza
Transaction 4: Milk, pretzels
Transaction 5: Cola, pretzels

These need to be cross tabulated and displayed in a table.


A Convenience Store Example

Product      Pizza   Milk   Cola   Chips   Pretzels
bought…       also   also   also    also       also
Pizza            2      1      2       0          0
Milk             1      3      1       1          1
Cola             2      1      3       0          1
Chips            0      1      0       1          0
Pretzels         0      1      1       0          2

 Pizza and cola sell together more often than any other
combo; a cross-marketing opportunity?
 Milk sells well with everything – people probably come here
specifically to buy it.
Marakas, G.M. (2002) Decision support systems in the 21st Century. 2nd Ed, Prentice Hall
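The cross-tabulation above can be rebuilt mechanically from the five transactions. A short sketch (my own, following the methodology slide): diagonal cells count how often each product was bought, off-diagonal cells count co-purchases.

```python
# Rebuild the co-occurrence table from the five convenience store
# transactions listed above.
from itertools import combinations

transactions = [
    {"pizza", "cola", "milk"},
    {"milk", "chips"},
    {"cola", "pizza"},
    {"milk", "pretzels"},
    {"cola", "pretzels"},
]
products = ["pizza", "milk", "cola", "chips", "pretzels"]

# table[a][a] = times a was bought; table[a][b] = times a and b co-occurred
table = {a: {b: 0 for b in products} for a in products}
for t in transactions:
    for a in t:
        table[a][a] += 1
    for a, b in combinations(t, 2):
        table[a][b] += 1
        table[b][a] += 1

print(table["pizza"]["cola"])  # 2: the strongest pairing
print(table["milk"]["milk"])   # 3: milk bought most often overall
```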
Limitations of Market Basket Analysis

• A large number of real transactions are needed to do an


effective basket analysis, but the data’s accuracy is
compromised if all the products do not occur with similar
frequency.
• The analysis can sometimes capture results that were due
to the success of previous marketing campaigns (and not
natural tendencies of customers).

Marakas, G.M. (2002) Decision support systems in the 21st Century. 2nd Ed, Prentice Hall
Market Basket Analysis in PolyAnalyst

[Screenshot: PolyAnalyst's Market Basket Analysis groups products
that sell well together and derives association rules.]

http://www.megaputer.com/products/pa/algorithms/ba.php3
HealthCare Fraud Example
Market Basket Analysis + Summary Statistics reveal providers
sharing a large number of patients >>>Potential Provider Fraud

http://www.megaputer.com
Data Mining Techniques
Sequencing or time-series analysis – techniques that
relate events in time
• Prediction of interest rate fluctuations or stock performance
based on a series of preceding events
• E.g. buying sequence: parents buy promotional toys associated
with a particular movie within 2 weeks after renting the movie
>flyer campaign for promotional toys should be linked to
customer lists created as a result of movie rentals
• sequence of customer purchases > catalogue of specific product
types can be target-mailed to the customer

Marakas, G.M. (2002) Decision support systems in the 21st Century. 2nd Ed, Prentice Hall
Association and Sequencing
Association and sequencing tools analyse data to discover
rules that identify patterns of behaviour. An association
tool will find rules such as:

• When people buy diapers they also buy beer 50 percent of the time.

A sequencing technique is very similar to an association


technique, but it adds time to the analysis and produces
rules such as:

• People who have purchased a VCR are three times more likely to
purchase a camcorder in the time period two to four months after the
VCR was purchased.

http://www.dbmsmag.com/9807m03.html
Association and Sequencing
Example in care management, procedure interactions
and pharmaceutical interactions

• Patients who are taking drugs A, B, and C are two and a


half times more likely to also be taking drug D.

• Patients receiving procedure X from Doctor Y are three


times less likely to get infection Z.

http://www.dbmsmag.com/9807m03.html
Association and Sequencing

Example in financial industry:

• The prices of stocks in industry Q are 1.8 times more


likely to close up one day after stocks in industry R closed
down.

http://www.dbmsmag.com/9807m03.html
Association and Sequencing
Example in fraud detection in telecommunications and
insurance:

• International credit card calls longer than three minutes


originating in area code 555 between 1:00 AM and 3:00
AM are three times more likely to go uncollected.

• Accident claims involving soft tissue trauma where


attorney P represents the claimant are twice as likely to
be fraudulent.

http://www.dbmsmag.com/9807m03.html
Data Mining Techniques
Clustering – technique for creating partitions so that
all members of each set are similar according to
some metric or set of metrics
• e.g., credit card purchase data
• Cluster 1: business-issued gold card, meals charged
on weekdays, mean values greater than $250
• Cluster 2: personal platinum card, meals charged on
weekends, mean value $175, bottle of wine
charged more than 65% of the time

Marakas, G.M. (2002) Decision support systems in the 21st Century. 2nd Ed, Prentice Hall
Clustering - Example
Identifying natural clusters of patient populations

http://www.enee.umd.edu/medlab/papers/dcsThShort/thpaper1.html
Current Limitations and Challenges to
Data Mining

Despite the potential power and value, data mining is still a


new field. Some things that thus far have limited
advancement are:
– Identification of missing information – not all
knowledge gets stored in a database
– Data noise and missing values – future systems need
better ways to handle this
– Large databases and high dimensionality – future
applications need ways to partition data into more
manageable chunks

Marakas, G.M. (2002) Decision support systems in the 21st Century. 2nd Ed, Prentice Hall
Summary
• Business intelligence systems with data mining tools allow
the systems to find hidden patterns from large datasets, and
use these patterns to turn data into actionable information
• BIS using data mining tools need data visualisation tools,
to present to the end-user such hidden patterns
• Hidden patterns, when placed into the hands of decision
makers, become actionable information or business
intelligence
References
Marakas, G.M. (2002) Decision support systems in the 21st
Century. 2nd Ed, Prentice Hall (or other editions)
Power, D. (2002) Decision Support Systems: Concepts and
Resources for Managers, Quorum Books.
FREE online resource: Data Mining booklet
http://www.twocrows.com/intro-dm.pdf
Regression Analysis

Module 3
Regression

Dependent
variable

Independent variable (x)


Regression is the attempt to explain the variation in a
dependent variable using the variation in independent
variables.
Regression is thus an explanation of causation.
If the independent variable(s) sufficiently explain the variation
in the dependent variable, the model can be used for
prediction.
Simple Linear Regression

y' = b0 + b1x ± ε

[Plot: dependent variable (y) vs independent variable (x);
b1 is the slope (∆y/∆x) and b0 is the y-intercept.]

The output of a regression is a function that predicts
the dependent variable based upon values of the
independent variables.
Simple regression fits a straight line to the data.
Simple Linear Regression

[Plot: each observed data point y and its prediction ŷ on the
fitted line.]

The function will make a prediction for each
observed data point.
The observation is denoted by y and the prediction
is denoted by ŷ.
Simple Linear Regression

[Plot: the prediction error ε is the vertical gap between the
observation y and the prediction ŷ.]

For each observation, the variation can be described as:

y = ŷ + ε
Actual = Explained + Error
Regression

[Plot: candidate lines through the data; least squares picks the
one minimizing squared errors.]

A least squares regression selects the line with the
lowest total sum of squared prediction errors.
This value is called the Sum of Squares of Error, or
SSE.
Calculating SSR

[Plot: fitted line and the population mean ȳ of the dependent
variable.]

The Sum of Squares Regression (SSR) is the sum of
the squared differences between the prediction for
each observation and the population mean.
Regression Formulas

The Total Sum of Squares (SST) is equal to SSR + SSE.
Mathematically,

SSR = ∑ ( ŷ – ȳ )²   (measure of explained variation)

SSE = ∑ ( y – ŷ )²   (measure of unexplained variation)

SST = SSR + SSE = ∑ ( y – ȳ )²   (measure of total variation in y)
The Coefficient of Determination
The proportion of total variation (SST) that is
explained by the regression (SSR) is known as the
Coefficient of Determination, and is often referred to
as R².

R² = SSR / SST = SSR / (SSR + SSE)

The value of R² can range between 0 and 1; the
higher its value, the more accurate the regression
model is. It is often referred to as a percentage.
Standard Error of Regression

The Standard Error of a regression is a measure of
its variability. It can be used in a similar manner to
standard deviation, allowing for prediction intervals:
y ± 2 standard errors will provide approximately 95%
accuracy, and 3 standard errors will provide a 99%
confidence interval.

Standard Error = √( SSE / (n - k) )

Standard Error is calculated by taking the square
root of the average prediction error,
where n is the number of observations in the sample and
k is the total number of variables in the model.
The output of a simple regression is the coefficient
β and the constant A. The equation is then:

y = A + β·x + ε

where ε is the residual error.
β is the per-unit change in the dependent variable
for each unit change in the independent variable.
Mathematically:

β = ∆y / ∆x
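The fit itself can be carried out with the textbook least-squares formulas (slope = covariance term over ∑(x − x̄)², intercept from the means) together with the SSR/SSE/SST decomposition above. A stdlib sketch with made-up sample data, not code from the slides:

```python
# Least-squares fit of y = A + B*x, plus R^2 from SSR/SST.
# Pure-stdlib sketch; the four (x, y) points below are invented.

def simple_ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))        # slope
    a = my - b * mx                               # intercept
    preds = [a + b * x for x in xs]
    sse = sum((y - p) ** 2 for y, p in zip(ys, preds))   # unexplained
    ssr = sum((p - my) ** 2 for p in preds)              # explained
    return a, b, ssr / (ssr + sse)                       # R^2 = SSR/SST

a, b, r2 = simple_ols([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(round(b, 2), round(r2, 3))  # 1.94 0.996
```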
Multiple Linear Regression

More than one independent variable can be used to
explain variance in the dependent variable, as long
as they are not linearly related.

A multiple regression takes the form:

y = A + β1·X1 + β2·X2 + … + βk·Xk + ε

where k is the number of variables, or parameters.
Multicollinearity

Multicollinearity is a condition in which at least two
independent variables are highly linearly correlated.
It makes the coefficient estimates unstable and can
cause numerical problems for the fitting software.
Example Table of Correlations

        Y       X1      X2
Y    1.000
X1   0.802   1.000
X2   0.848   0.578   1.000

A correlations table can suggest which independent
variables may be significant. Generally, an independent
variable that has more than a .3 correlation with the
dependent variable and less than .7 with any other
independent variable can be included as a possible predictor.
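Screening predictors this way only needs the Pearson correlation coefficient, which can be computed by hand. A stdlib sketch with invented series (the .3/.7 thresholds are the slide's rule of thumb, not part of the formula):

```python
# Pearson correlation coefficient, usable to screen candidate
# predictors (r > .3 with Y, r < .7 with other predictors).
# The series below are invented for illustration.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

y  = [1, 2, 3, 4, 5]
x1 = [2, 4, 5, 4, 5]
print(round(pearson(x1, y), 3))  # 0.775
```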
Nonlinear Regression

Nonlinear functions can also be fit as


regressions. Common choices include Power,
Logarithmic, Exponential, and Logistic, but any
continuous function can be used.
Regression Output in Excel

Y = B0 + B1·X1 + B2·X2 + B3·X3 … ± Error
Total = Estimated/Predicted ± Error

SUMMARY OUTPUT

Regression Statistics
Multiple R            0.982655
R Square              0.96561
Adjusted R Square     0.959879
Standard Error        26.01378
Observations          15

ANOVA
             df        SS        MS         F   Significance F
Regression    2  228014.6  114007.3  168.4712   1.65E-09
Residual     12  8120.603  676.7169
Total        14  236135.2

             Coefficients  Standard Error     t Stat   P-value  Lower 95%  Upper 95%
Intercept         562.151        21.0931   26.65094  4.78E-12   516.1931   608.1089
Temperature     -5.436581       0.336216   -16.1699  1.64E-09  -6.169133  -4.704029
Insulation      -20.01232       2.342505  -8.543127  1.91E-06   -25.1162  -14.90844

Estimated Heating Oil = 562.15 - 5.436 (Temperature) - 20.012 (Insulation)
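Once the coefficients are read off the output, prediction is a direct plug-in. A minimal sketch using the fitted equation above (the temperature and insulation values passed in are my own example inputs):

```python
# Predicting with the fitted heating-oil equation from the Excel
# output above; the inputs (30 degrees, insulation 6) are examples.

def heating_oil(temperature, insulation):
    return 562.15 - 5.436 * temperature - 20.012 * insulation

print(round(heating_oil(30, 6), 1))  # 279.0
```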
