Why Data Preprocessing?



• Data in the real world is dirty
 incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data, e.g., occupation=“ ”
 noisy: containing errors or outliers, e.g., Salary=“-10”
 inconsistent: containing discrepancies in codes or names
• e.g., Age=“42” but Birthday=“03/07/1997”
• e.g., rating was “1, 2, 3”, now rating is “A, B, C”
• e.g., discrepancy between duplicate records
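The three kinds of dirtiness above can be flagged programmatically. The sketch below uses an illustrative record layout; the field names (occupation, salary, rating) and the validity rules are assumptions for the example, not a real schema:

```python
# Sketch: flagging the three kinds of "dirty" data from the examples above.
# The record fields and the rules are illustrative assumptions.

records = [
    {"occupation": "", "salary": 52000, "rating": "2"},      # incomplete
    {"occupation": "teacher", "salary": -10, "rating": "B"}, # noisy + inconsistent
]

def find_problems(rec):
    problems = []
    if rec["occupation"] == "":               # incomplete: missing attribute value
        problems.append("incomplete")
    if rec["salary"] < 0:                     # noisy: impossible value
        problems.append("noisy")
    if rec["rating"] not in {"1", "2", "3"}:  # inconsistent: old vs. new code scheme
        problems.append("inconsistent")
    return problems

for rec in records:
    print(find_problems(rec))
```

Real pipelines would express such rules as schema constraints or validation checks, but the idea is the same: each kind of dirtiness corresponds to a testable condition on the data.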
Why Is Data Dirty?
• Incomplete data may come from
 “Not applicable” data value when collected
 Different considerations between the time when the data was collected and when it is analyzed.
 Human/hardware/software problems
• Noisy data (incorrect values) may come from
 Faulty data collection instruments
 Human or computer error at data entry
 Errors in data transmission
• Inconsistent data may come from
 Different data sources
 Functional dependency violation (e.g., modify some linked data)
• Duplicate records also need data cleaning

Why Is Data Preprocessing Important?


• No quality data, no quality mining results!
 Quality decisions must be based on quality data
• e.g., duplicate or missing data may cause incorrect or even misleading statistics
 Data warehouse needs consistent integration of quality data
• Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse
Multi-Dimensional Measure of Data Quality
• A well-accepted multidimensional view:
 Accuracy
 Completeness
 Consistency
 Timeliness
 Believability
 Value added
 Interpretability
 Accessibility
• Broad categories:
 Intrinsic, contextual, representational, and accessibility
Major Tasks in Data Preprocessing

• Data cleaning
• Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
• Data integration
• Integration of multiple databases, data cubes, or files
• Data reduction
• Obtains reduced representation in volume but produces the same or
similar analytical results
• Data transformation
• Normalization and aggregation
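Normalization, one of the transformations named above, can be sketched as a min-max rescaling to [0, 1]. The function name and the sample salary values below are illustrative:

```python
# Sketch of min-max normalization, a common data transformation:
# rescales each value to [0, 1] relative to the observed min and max.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

salaries = [30000, 45000, 60000, 90000]
print(min_max_normalize(salaries))  # smallest value maps to 0.0, largest to 1.0
```

Min-max scaling preserves the relative spacing of the values; z-score normalization (subtracting the mean and dividing by the standard deviation) is the other common choice when outliers would distort the min and max.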

DESCRIPTIVE DATA SUMMARIZATION


Data attributes and Attribute types:
An attribute is a data field, representing a characteristic or feature of a data object.
The type of attribute can be determined by the set of values that an attribute can have.
• Nominal Attributes: Values of the attribute are symbols or names of things; often referred to as categorical.
• Occupation: teacher, dentist, farmer etc.
• Binary Attributes: A nominal attribute with only two values, i.e., 0 or 1.
• Smoker: 0 means the person is not a smoker and 1 means they are
• Ordinal Attributes: values with a meaningful order or ranking.
• Customer satisfaction: 0 very dissatisfied, 1 somewhat dissatisfied, 2 neutral, 3 satisfied, 4 very satisfied
• Numeric Attributes: measurable quantities represented as integer or real values. Numeric attributes can be interval-scaled or ratio-scaled.
• Interval-Scaled: measured on a scale of equal-sized units but without a true zero point, so values cannot be described as ratios.
• Temperature in Celsius or Fahrenheit
• Ratio-Scaled: Numeric attribute with an inherent zero point, e.g., years of experience
• Discrete versus Continuous Attributes: Discrete attributes have a finite or countably infinite set of values.
Continuous attributes are typically represented as floating-point values.
Mining Data Descriptive Characteristics
To better understand the data and to have an overall picture of data many statistical descriptions are used:
• Measure of central tendency: measure the location of center or middle of a data distribution.
• Dispersion of Data: How are the data spread out?
• Graphical display of statistical descriptions: visual representation of data.

Measuring the Central Tendency

• Mean:
 Arithmetic mean: x̄ = (1/n) Σ x_i, summing over i = 1..n
 Weighted arithmetic mean: x̄ = (Σ w_i x_i) / (Σ w_i)
 Trimmed mean: mean after chopping off extreme values
• Median:
 Middle value if odd number of values, or average of the middle two values otherwise
 Estimated by interpolation (for grouped data): median = L1 + ((n/2 − Σ freq_l) / freq_median) × width
 where L1 is the lower boundary of the median interval, Σ freq_l is the total frequency of the intervals below it, freq_median is the frequency of the median interval, and width is the interval width
• Mode:
 Value that occurs most frequently in the data
 Unimodal, bimodal, trimodal
 Empirical formula: mean − mode ≈ 3 × (mean − median)
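These measures can be sketched with the standard-library statistics module. The sample data, the trimming amount, and the grouped-median parameters below are all illustrative:

```python
# Sketch of the central-tendency measures above; the sample data and the
# grouped-median parameters are illustrative.
from statistics import mean, median, mode

data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 110]

def trimmed_mean(values, k=1):
    """Mean after chopping off the k smallest and k largest values."""
    return mean(sorted(values)[k:len(values) - k])

def weighted_mean(values, weights):
    """Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)."""
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def grouped_median(lower, n, cum_freq_below, freq_median, width):
    """Interpolated median for grouped data:
    lower + (n/2 - cum_freq_below) / freq_median * width."""
    return lower + (n / 2 - cum_freq_below) / freq_median * width

print(median(data))        # middle value of the 11 sorted values
print(mode(data))          # most frequent value
print(trimmed_mean(data))  # mean with the extremes 30 and 110 chopped off
```

For the skewed sample above, the empirical relation mean − mode ≈ 3 × (mean − median) can be checked directly: the single large value 110 pulls the mean above both the median and the mode.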
Data Cleaning
• Importance
• “Data cleaning is one of the three biggest problems in data warehousing”—Ralph Kimball
• “Data cleaning is the number one problem in data warehousing”—DCI survey
• Data cleaning tasks
• Fill in missing values
• Identify outliers and smooth out noisy data
• Correct inconsistent data
• Resolve redundancy caused by data integration
Missing Data
• Data is not always available
• E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
• Missing data may be due to
• equipment malfunction
• inconsistent with other recorded data and thus deleted
• data not entered due to misunderstanding
• certain data may not be considered important at the time of entry
• did not register history or changes of the data
• Missing data may need to be inferred.
How to Handle Missing Data?
• Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not
effective when the percentage of missing values per attribute varies considerably.
• Fill in the missing value manually: tedious + infeasible?
• Fill in automatically with
• a global constant : e.g., “unknown”, a new class?!
• the attribute mean
• the attribute mean for all samples belonging to the same class: smarter
• the most probable value: inference-based such as Regression, Bayesian formula or decision tree
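The automatic fill-in strategies can be sketched as follows. The toy dataset, the class labels, and the helper names are illustrative assumptions:

```python
# Sketch of two automatic fill-in strategies from the list above:
# the attribute mean, and the attribute mean per class (the "smarter" variant).
from statistics import mean

rows = [
    {"class": "A", "income": 40000},
    {"class": "A", "income": 50000},
    {"class": "B", "income": 90000},
    {"class": "B", "income": None},   # missing value to fill in
]

def fill_with_attribute_mean(rows, attr):
    """Replace missing values with the mean of all known values."""
    known = [r[attr] for r in rows if r[attr] is not None]
    m = mean(known)
    return [dict(r, **{attr: r[attr] if r[attr] is not None else m}) for r in rows]

def fill_with_class_mean(rows, attr, cls_attr="class"):
    """Replace missing values with the mean over samples of the same class."""
    filled = []
    for r in rows:
        if r[attr] is None:
            same = [x[attr] for x in rows
                    if x[cls_attr] == r[cls_attr] and x[attr] is not None]
            r = dict(r, **{attr: mean(same)})
        filled.append(r)
    return filled
```

On this toy data the two strategies disagree: the global attribute mean fills in the overall average of the known incomes, while the class-conditional mean fills in the average for class "B" only, which is usually closer to the true value.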
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may occur due to
• faulty data collection instruments
• data entry problems
• data transmission problems
• technology limitation
• inconsistency in naming convention
• Other data problems which require data cleaning
• duplicate records
• incomplete data
• inconsistent data
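A standard way to carry out the "smooth noisy data" task listed earlier is equal-depth binning with bin means: sort the values, split them into bins of equal size, and replace each value by its bin's mean. The sample prices below are illustrative:

```python
# Sketch of smoothing noisy data by equal-depth binning with bin means.
from statistics import mean

def smooth_by_bin_means(values, depth):
    """Sort the values, split into bins of `depth` values each,
    and replace every value by the mean of its bin."""
    ordered = sorted(values)
    smoothed = []
    for i in range(0, len(ordered), depth):
        bin_ = ordered[i:i + depth]           # the last bin may be smaller
        smoothed.extend([mean(bin_)] * len(bin_))
    return smoothed

prices = [4, 8, 15, 21, 21, 24, 25, 28, 34]  # illustrative noisy values
print(smooth_by_bin_means(prices, 3))
```

Variants smooth by bin medians or by bin boundaries (replacing each value with the closer of its bin's minimum or maximum); all reduce random fluctuation at the cost of some precision.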
