Module 1 Part 1
DATA MINING AND BUSINESS INTELLIGENCE
Course Information
• Credits: 4 Units
• Course Code: IT402
Course Objectives
• The purpose of this course is to introduce basic data mining technologies and their use for business intelligence. The objective is to teach students how to analyze business needs for knowledge discovery in order to create competitive advantage, and to apply data mining technologies appropriately in order to realize their real business value.
Syllabus
Syllabus Contd.
Assessment/Examination Scheme
Recommended Books
Jiawei Han, Micheline Kamber and Jian Pei, “Data Mining: Concepts and Techniques”, Elsevier. Soft copy available at:
http://ccs1.hnue.edu.vn/hungtd/DM2012/DataMining_BOOK.pdf
INTRODUCTION: DM AND KDD PROCESS
Loads of Data
What is data mining?
• After years of data mining there is still no unique answer to this
question.
• A tentative definition:
Data mining is the use of efficient techniques for the
analysis of very large collections of data and the
extraction of useful and possibly unexpected patterns in
data.
Why do we need data mining?
• Really, really huge amounts of raw data!
• In the digital age, terabytes of data are generated every second
• Mobile devices, digital photographs, web documents
• Facebook updates, tweets, blogs, user-generated content
• Transactions, sensor data, surveillance data
• Queries, clicks, browsing
• Cheap storage has made it possible to retain all of this data
• We need to analyze the raw data to extract knowledge
Why do we need data mining?
• “The data is the computer”
• Large amounts of data can be more powerful than complex algorithms and
models
• Google has solved many Natural Language Processing problems, simply by looking at the
data
• Example: misspellings, synonyms
• Data is power!
• Today, the collected data is one of the biggest assets of an online company
• Query logs of Google
• The friendship and updates of Facebook
• Tweets and follows of Twitter
• Amazon transactions
• We need a way to harness the collective intelligence
The data is also very complex
• Multiple types of data: tables, time series, images, graphs, etc.
• Online news portals: a steady stream of hundreds of new articles every day
• Amazon collects all the items that you browsed, placed into your basket, read reviews about, or purchased.
• Google and Bing record all your browsing activity via toolbar plugins. They also record the queries you issued, the pages you viewed and the links you clicked.
Document data: each document is represented as a term-frequency vector.

            team  coach  play  ball  score  game  win  lost  timeout  season
Document 1    3     0      5     0     2      6     0     2      0       2
Document 2    0     7      0     2     1      0     0     3      0       0
Document 3    0     1      0     0     1      2     2     0      3       0
Transaction Data
• Each record (transaction) is a set of items.
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk
Graph/linked data: e.g., a web page containing a list of hyperlinks.

<li> <a href="papers/papers.html#bbbb">Data Mining</a>
<li> <a href="papers/papers.html#aaaa">Graph Partitioning</a>
<li> <a href="papers/papers.html#aaaa">Parallel Solution of Sparse Linear System of Equations</a>
<li> <a href="papers/papers.html#ffff">N-Body Computation and Dense Linear System Solvers</a>
Types of data
• Numeric data: Each object is a point in a multidimensional space
• Categorical data: Each object is a vector of categorical values
• Set data: Each object is a set of values (with or without counts)
• Sets can also be represented as binary vectors, or vectors of counts
• Ordered sequences: Each object is an ordered sequence of values.
• Graph data
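To make these distinctions concrete, here is a small sketch (with made-up example objects) of how each type of data might be represented in plain Python:

```python
# Illustrative only: one toy object per data type from the list above.

# Numeric data: each object is a point in a multidimensional space
numeric_obj = (1.5, 2.0, -0.3)

# Categorical data: each object is a vector of categorical values
categorical_obj = ("red", "medium", "cotton")

# Set data: a set of items, or equivalently a binary vector over a universe
universe = ["Beer", "Bread", "Coke", "Diaper", "Milk"]
set_obj = {"Bread", "Coke", "Milk"}
binary_vector = [1 if item in set_obj else 0 for item in universe]

# Ordered sequence: the order of values matters
sequence_obj = ["login", "search", "click", "purchase"]

# Graph data: an adjacency list
graph_obj = {"A": ["B", "C"], "B": ["C"], "C": []}

print(binary_vector)  # [0, 1, 1, 0, 1]
```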
What can you do with the data?
• Suppose that you are the owner of a supermarket and you have collected billions of market basket records. What information would you extract from them, and how would you use it?

TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk

• Product placement
• Catalog creation
• Recommendations
• What if this was an online store?
What can you do with the data?
• Suppose you are a search engine and you have a toolbar log consisting of
• pages browsed,
• queries,
• pages clicked,
• ads clicked,
each with a user id and a timestamp. What information would you like to get out of the data?
• Ad click prediction
• Query reformulations
What can you do with the data?
• Suppose you are a stock broker and you observe the fluctuations of multiple stocks over time. What information would you like to get out of your data?
Clustering of stocks
Correlation of stocks
Data mining: the core of the knowledge discovery process.
[Figure: Databases → Data Cleaning / Data Integration → Task-relevant Data → Data Mining → Pattern Evaluation]
KDD Process
1. Data cleaning (to remove noise and inconsistent data)
2. Data selection (where data relevant to the analysis task are retrieved from the database)
3. Data transformation (where data are transformed or consolidated into forms appropriate for mining, by performing summary or aggregation operations, for instance)
4. Data mining (an essential process where intelligent methods are applied in order to extract data patterns)
5. Pattern evaluation (to identify the truly interesting patterns representing knowledge, based on some interestingness measures)
6. Knowledge presentation (where visualization and knowledge representation techniques are used to present the mined knowledge to the user)
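The KDD steps can be sketched end-to-end on a toy transaction dataset. All data and the specific operations chosen for each step are illustrative assumptions, not part of the slides:

```python
# Toy KDD pipeline: cleaning -> selection -> transformation -> mining -> evaluation.

raw = [
    {"tid": 1, "items": ["Bread", "Coke", "Milk"], "amount": 12.0},
    {"tid": 2, "items": ["Beer", "Bread"], "amount": None},  # noisy record
    {"tid": 3, "items": ["Beer", "Coke", "Diaper", "Milk"], "amount": 25.0},
]

# 1. Data cleaning: drop records with missing values
cleaned = [r for r in raw if r["amount"] is not None]

# 2. Data selection: keep only the attributes relevant to the task
selected = [{"tid": r["tid"], "items": r["items"]} for r in cleaned]

# 3. Data transformation: aggregate into item -> count form suitable for mining
counts = {}
for r in selected:
    for item in r["items"]:
        counts[item] = counts.get(item, 0) + 1

# 4. Data mining: extract a pattern (items appearing in every kept record)
frequent = {item for item, c in counts.items() if c == len(selected)}

# 5. Pattern evaluation / presentation
print(sorted(frequent))  # ['Coke', 'Milk']
```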
Architecture of Data Mining
Contd.
Data Sources: Databases, the World Wide Web (WWW), and data warehouses are parts of the data sources. The data in these sources may be in the form of plain text, spreadsheets, or other forms of media such as photos or videos. The WWW is one of the biggest sources of data.
Database Server: The database server contains the actual data, ready to be processed. It performs the task of data retrieval as per the user's request.
Data Mining Engine: One of the core components of the data mining architecture, it performs data mining techniques such as association, classification, characterization, clustering and prediction.
Contd…
Pattern Evaluation Module: It is responsible for finding interesting patterns in the data, and it sometimes also interacts with the database server to produce the results of user requests.
Graphical User Interface: Since the user cannot be expected to understand the full complexity of the data mining process, the graphical user interface helps the user communicate effectively with the data mining system.
Knowledge Base: The knowledge base is an important part of the data mining engine that helps guide the search for result patterns. The data mining engine may also get inputs from the knowledge base, which may contain data from user experiences. The objective of the knowledge base is to make the results more accurate and reliable.
What can we do with data mining?
• Some examples:
• Frequent itemsets and Association Rules extraction
• Coverage
• Clustering
• Classification
• Ranking
• Exploratory analysis
Frequent Itemsets and Association Rules
• Given a set of records, each of which contains some number of items from a given collection:
• Identify sets of items (itemsets) occurring frequently together
• Produce dependency rules that predict the occurrence of an item based on the occurrences of other items.
TID Items
1 Bread, Coke, Milk
2 Beer, Bread
3 Beer, Coke, Diaper, Milk
4 Beer, Bread, Diaper, Milk
5 Coke, Diaper, Milk

Itemsets Discovered: {Milk, Coke}, {Diaper, Milk}
Rules Discovered: {Milk} --> {Coke}, {Diaper, Milk} --> {Beer}
P.-N. Tan, M. Steinbach and V. Kumar, Introduction to Data Mining
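A brute-force sketch of this discovery on the five transactions above. A real miner would use Apriori or FP-Growth, and the minimum-support threshold here is an arbitrary illustrative choice:

```python
# Enumerate all 1- and 2-itemsets and keep those meeting a support threshold.
from itertools import combinations

transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]
min_support = 3  # itemset must appear in at least 3 transactions

items = sorted(set().union(*transactions))
frequent = {}
for size in (1, 2):
    for cand in combinations(items, size):
        support = sum(1 for t in transactions if set(cand) <= t)
        if support >= min_support:
            frequent[cand] = support

# A rule X --> Y is strong if confidence = support(X u Y) / support(X) is high
conf = frequent[("Coke", "Milk")] / frequent[("Milk",)]
print("conf({Milk} --> {Coke}) =", conf)  # 0.75
```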
Frequent Itemsets: Applications
• Text mining: finding associated phrases in text
• There are lots of documents that contain the phrases “association rules”,
“data mining” and “efficient algorithm”
• Recommendations:
• Users who buy this item often buy this item as well
• Users who watched James Bond movies, also watched Jason Bourne movies.
Clustering: group the data so that intracluster distances are minimized and intercluster distances are maximized.
Clustering: Application 1
• Bioinformatics applications:
• Goal: Group genes and tissues together such that genes are
coexpressed on the same tissues
Clustering: Application 2
• Document Clustering:
• Goal: To find groups of documents that are similar to each other based on the
important terms appearing in them.
• Approach: To identify frequently occurring terms in each document. Form a
similarity measure based on the frequencies of different terms. Use it to
cluster.
• Gain: Information Retrieval can utilize the clusters to relate a new document
or search term to clustered documents.
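The approach described above (term frequencies → similarity measure → clusters) can be sketched as follows. The documents, the choice of cosine similarity, and the 0.5 threshold are illustrative assumptions:

```python
# Single-pass threshold clustering of term-frequency vectors.
import math

docs = {
    "d1": {"data": 3, "mining": 2, "rules": 1},
    "d2": {"data": 2, "mining": 3},
    "d3": {"stocks": 4, "market": 2},
}

def cosine(a, b):
    """Cosine similarity of two sparse term-frequency vectors."""
    terms = set(a) | set(b)
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Assign each document to the first cluster whose representative is
# similar enough; otherwise start a new cluster.
clusters = []  # list of (representative vector, [doc ids])
for name, vec in docs.items():
    for rep, members in clusters:
        if cosine(rep, vec) > 0.5:
            members.append(name)
            break
    else:
        clusters.append((vec, [name]))

print([members for _, members in clusters])  # [['d1', 'd2'], ['d3']]
```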
Clustering of S&P 500 Stock Data (discovered clusters and their labels):

1. Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN → Technology1-DOWN
2. Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN → Technology2-DOWN
3. Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN → Financial-DOWN
4. Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP → Oil-UP
Classification: Definition
• Given a collection of records (training set)
• Each record contains a set of attributes, one of the attributes
is the class.
• Find a model for class attribute as a function of the
values of other attributes.
Tid Refund Marital Status Taxable Income Cheat
7 Yes Divorced 220K No
8 No Single 85K Yes
9 No Married 75K No
10 No Single 90K Yes

[Training Set → Learn Classifier → Model]
Classification: Application 1
• Ad Click Prediction
• Goal: Predict if a user that visits a web page will click on a
displayed ad. Use it to target users with high click
probability.
• Approach:
• Collect data for users over a period of time and record who clicks
and who does not. The {click, no click} information forms the class
attribute.
• Use the history of the user (web pages browsed, queries issued)
as the features.
• Learn a classifier model and test on new users.
Classification: Application 2
• Fraud Detection
• Goal: Predict fraudulent cases in credit card transactions.
• Approach:
• Use credit card transactions and the information on its account-
holder as attributes.
• When does the customer buy, what do they buy, how often do they pay on time, etc.
• Label past transactions as fraud or fair transactions. This forms the
class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card
transactions on an account.
Outlier Analysis
Outlier values may also be detected with respect to the location and type of purchase, or the purchase frequency.
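One common way to flag such outliers is a z-score (distance-from-mean) rule; the purchase amounts and the 2-sigma threshold below are illustrative assumptions:

```python
# Flag purchase amounts that lie far from the mean of the distribution.
import statistics

amounts = [22.0, 25.0, 19.0, 24.0, 21.0, 23.0, 20.0, 480.0]

mean = statistics.mean(amounts)
std = statistics.pstdev(amounts)

# Anything more than 2 standard deviations from the mean is an outlier
outliers = [x for x in amounts if abs(x - mean) > 2 * std]
print(outliers)  # [480.0]
```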
Evolution Analysis
• The important thing is to find the right metrics and ask the right questions
• It helps our understanding of the world, and can lead to models of the
phenomena we observe.
Exploratory Analysis: The Web
• What is the structure and the properties of the web?
Exploratory Analysis: The Web
• What is the distribution of the incoming links?
Connections of Data Mining with other areas
• Draws ideas from machine learning/AI, pattern
recognition, statistics, and database systems
• Traditional techniques may be unsuitable due to:
• Enormity of data
• High dimensionality of data