Feature Engineering Handout
Feature Engineering in
Machine Learning
Chun-Liang Li (李俊良)
[email protected]
2016/07/17@
About Me
Academic
• NTU CSIE BS/MS (2012/2013); advisor: Prof. Hsuan-Tien Lin
• CMU MLD PhD (2014-); advisors: Prof. Jeff Schneider and Prof. Barnabás Póczos
Competition
• KDD Cup 2011 Champions
• KDD Cup 2013 Champions
• With Prof. Chih-Jen Lin, Prof. Hsuan-Tien Lin, Prof. Shou-De Lin, and many students
Working
What is Machine Learning?
• Learning: build a model from existing data
• Prediction: apply the model to new data to make predictions
Data? Algorithm?
• In academia: assume we are given good-enough data (as d-dimensional vectors, of course)
• In practice: where is your good-enough data?
From Zero to One:
Create features from your observations
An Apple
More Fruits
• Method I: use the size of the picture
[Both example images are (640, 580)]
Case Study (KDD Cup 2013)
• Determine whether a paper is written by a given
author
Data: https://www.kaggle.com/c/kdd-cup-2013-author-paper-identification-challenge
NTU Approaches
• A pipeline built around feature engineering
First Observation:
Author Information
• Are these my (Chun-Liang Li) papers? (Easy! check author names)
1. Chun-Liang Li and Hsuan-Tien Lin. Condensed filter tree for cost-sensitive multi-label
classification.
2. Yao-Nan Chen and Hsuan-Tien Lin. Feature-aware label space dimension reduction for
multi-label classification.
• Encode by name similarities (e.g., how many characters are the same)
• 29 features in total
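As a concrete illustration, one simple name-similarity feature counts how many characters two author names share. This is a minimal sketch of the idea, not one of the team's actual 29 features:

```python
# Toy name-similarity feature: fraction of characters in the shorter
# name that also appear in the other (multiset intersection).
from collections import Counter

def char_overlap(name_a, name_b):
    """Return a similarity score in [0, 1] based on shared characters."""
    a = Counter(name_a.lower().replace(" ", ""))
    b = Counter(name_b.lower().replace(" ", ""))
    shared = sum((a & b).values())          # characters common to both
    return shared / min(sum(a.values()), sum(b.values()))

# Similar names score high; unrelated names score lower.
print(char_overlap("Chih-Jen Lin", "Chi-Jen Lu"))
print(char_overlap("Chih-Jen Lin", "Jeff Schneider"))
```

In a real system one would compute many such scores (exact match, initials match, edit distance, etc.) and feed them all to the learner.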
Second Observation:
Affiliations
• Are Dr. Chi-Jen Lu and Prof. Chih-Jen Lin the same person? (Similar names, different people; affiliations help disambiguate)
• 13 features in total
Last of KDD Cup 2013
• Many other features (see the paper for the full list)
Summary
The 97 features designed by students won the competition
Furthermore
• If I could access the paper content, could I do better? Definitely
Writing Style?
• "I was testing things like word length, sentence length, paragraph length, frequency of particular words and the pattern of punctuation"
— Peter Millican (University of Oxford)
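Millican's stylometric cues translate directly into numeric features. A minimal sketch (the feature names and regexes below are illustrative choices, not his actual method):

```python
# Compute simple writing-style features: average word length,
# average sentence length, and punctuation frequency.
import re

def style_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    punct = re.findall(r"[,;:!?]", text)    # mid-sentence punctuation marks
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sent_len": len(words) / len(sentences),
        "punct_per_word": len(punct) / len(words),
    }

f = style_features("It was the best of times, it was the worst of times.")
print(f)
```

Feeding such feature vectors from texts of known authorship to a classifier is the essence of the authorship analysis that unmasked J. K. Rowling.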
Game-Changing Point:
Deep Learning
Common Types of Data
• Image
• Text
Representation Learning
• Deep learning can be viewed as learning hidden representations of the raw data
• Use the last hidden layer to extract features (Krizhevsky et al., 2012)
(Check Prof. Lee's talk and go to the deep learning session later)
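A toy sketch of "use the last layer as features": push data through a network and keep the last hidden layer's activations instead of the final predictions. Here the weights are random stand-ins; in practice they would come from a trained model such as AlexNet:

```python
# Extract the last hidden layer's activations as a feature vector.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 32))     # layer 1 weights (random stand-in)
W2 = rng.normal(size=(32, 16))     # layer 2 weights (random stand-in)

def extract_features(x):
    h1 = np.maximum(0, x @ W1)     # hidden layer 1 (ReLU)
    h2 = np.maximum(0, h1 @ W2)    # hidden layer 2: the learned representation
    return h2                      # use this, not the network's output

x = rng.normal(size=(5, 10))       # 5 raw inputs, 10 dimensions each
features = extract_features(x)     # 5 x 16 feature matrix
print(features.shape)
```

Those 16-dimensional vectors can then be fed to any simple downstream learner (e.g., a linear SVM).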
Use a Pre-trained Network
• You don't need to train a network by yourself
• AlexNet
• VGG
• Word2Vec
Result
Simply using deep-learning features achieves state-of-the-art performance in many applications
Successful Example
• The PASCAL Visual Object Classes Challenge
[Chart: mean average precision, 2005-2014. Progress on feature engineering (e.g., the HoG feature) and algorithms was slow before deep learning; deep features then pushed mAP from roughly 0.45 toward 0.6]
Curse of Dimensionality:
Feature Selection and Dimension Reduction
The more, the better?
• In practice, adding more features also adds noisy features
Feature Selection
• Select important features
• Reduce dimensions
• Explainable results
KDD Cup Again
• In KDD Cup 2013, we actually generated more than 200 features (some secrets you won't see in the paper)
Non-useful Features
• Duplicated features (add no new information)
• Noisy features (hurt generalization)
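A minimal sketch of pruning both kinds at once, assuming exact duplicates and treating near-constant columns as uninformative (real pipelines would also check correlations and validation performance):

```python
# Drop exactly duplicated columns and (near-)constant columns
# from a feature matrix X of shape (n_samples, n_features).
import numpy as np

def prune_features(X, var_threshold=1e-8):
    keep, seen = [], set()
    for j in range(X.shape[1]):
        col = tuple(X[:, j])
        if col in seen:                       # duplicated feature
            continue
        if np.var(X[:, j]) <= var_threshold:  # (near-)constant feature
            continue
        seen.add(col)
        keep.append(j)
    return X[:, keep], keep

X = np.array([[1.0, 1.0, 5.0, 0.1],
              [2.0, 2.0, 5.0, 0.2],
              [3.0, 3.0, 5.0, 0.3]])
X_pruned, kept = prune_features(X)
print(kept)   # column 1 duplicates column 0; column 2 is constant
```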
Dimension Reduction
• Let's visualize the data (a perfect example)
[Scatter plots: 2-D points lying near a line, annotated "One dimension is enough"; a second pair of plots is annotated "Trade-off between ..."]
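The "one dimension is enough" picture can be reproduced numerically: project 2-D points that lie near a line onto their top principal component and check that almost nothing is lost. A minimal sketch with synthetic data:

```python
# Points near the line y = 2x, plus small noise; PCA keeps 1 dimension.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=100)
X = np.column_stack([t, 2 * t]) + rng.normal(scale=0.01, size=(100, 2))

Xc = X - X.mean(axis=0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
z = Xc @ Vt[0]                             # 1-D coordinates along top component
X_rec = np.outer(z, Vt[0]) + X.mean(axis=0)

err = np.mean((X - X_rec) ** 2)            # tiny: the 2nd dimension is noise
print(err)
```

Keeping more components would reduce the error further but keep more dimensions; that is the compression trade-off the plots illustrate.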
PCA — Intuition
• [Figure: 2-D points projected onto new coordinates, the directions that capture the most variance]
http://comp435p.tk/
PCA — Intuition (cont.)
• We can use very few base faces to approximate (describe) the original faces
[Figure: the first nine base faces]
(Sirovich and Kirby, Low-dimensional procedure for the characterization of human faces)
http://comp435p.tk/
PCA — Case Study
• CIFAR-10 image classification with raw pixels as features, using an approximated kernel SVM
(Li and Póczos, Utilize Old Coordinates: Faster Doubly Stochastic Gradients for Kernel Methods, UAI 2016)
PCA in Practice
• Practical concern:
• Time complexity: O(N d^2)
• Space complexity: O(d^2)
• A small problem in practice: PCA takes <10 seconds on the CIFAR-10 dataset (d = 3072) using 12 cores (E5-2620)
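The complexities above come from the covariance-based route to PCA: forming the d x d covariance from N samples costs O(N d^2) time and O(d^2) memory. A minimal sketch:

```python
# Covariance-based PCA: top-k principal directions of X (n_samples x d).
import numpy as np

def pca_components(X, k):
    Xc = X - X.mean(axis=0)
    cov = (Xc.T @ Xc) / len(X)         # O(N d^2) time, O(d^2) space
    vals, vecs = np.linalg.eigh(cov)   # plus O(d^3) for the eigendecomposition
    order = np.argsort(vals)[::-1]     # eigenvalues in decreasing order
    return vecs[:, order[:k]]          # top-k directions as columns

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
V = pca_components(X, k=3)
print(V.shape)
```

For moderate d (like CIFAR-10's 3072) this is cheap; for very large d, randomized or streaming PCA methods (references 12 and 13) avoid the O(d^2) blow-up.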
Conclusion
• Observe the data and encode them into meaningful features
• Beginning: Existing Data → Machine (Algorithm)
• Now: Existing Data → Features → (Simple) Algorithm
• Computational concern
Thanks!
Any Questions?
References
1. Richard Szeliski. Computer Vision: Algorithms and Applications, 2010.
2. Senjuti Basu Roy, Martine De Cock, Vani Mandava, Swapna Savanna, Brian Dalessandro, Claudia
Perlich, William Cukierski, and Ben Hamner. The Microsoft academic search dataset and KDD cup
2013. In KDD Cup 2013 Workshop, 2013.
3. Chun-Liang Li, Yu-Chuan Su, Ting-Wei Lin, Cheng-Hao Tsai, Wei-Cheng Chang, Kuan-Hao Huang,
Tzu-Ming Kuo, Shan-Wei Lin, Young-San Lin, Yu-Chen Lu, Chun-Pai Yang, Cheng-Xia Chang, Wei-
Sheng Chin, Yu-Chin Juan, Hsiao-Yu Tung, Jui-Pin Wang, Cheng-Kuang Wei, Felix Wu, Tu-Chun Yin,
Tong Yu, Yong Zhuang, Shou-De Lin, Hsuan-Tien Lin, and Chih-Jen Lin. Combination of feature
engineering and ranking models for paper-author identification in KDD Cup 2013. In JMLR, 2015.
4. How JK Rowling was unmasked. http://www.bbc.com/news/entertainment-arts-23313074
5. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new
perspectives. In IEEE TPAMI, 2013.
6. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep
convolutional neural networks. In NIPS, 2012.
7. Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image
Recognition. In ICLR, 2015.
8. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word
Representations in Vector Space. Technical Report, 2013.
9. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for
accurate object detection and semantic segmentation. In CVPR, 2014.
10. Matthew A. Turk and Alex Pentland. Face Recognition Using Eigenfaces. In CVPR, 1991.
11. Chun-Liang Li, and Barnabás Póczos. Utilize Old Coordinates: Faster Doubly Stochastic
Gradients for Kernel Methods. In UAI, 2016.
12. Nathan Halko, Per-Gunnar Martinsson, Joel A. Tropp. Finding structure with randomness:
Probabilistic algorithms for constructing approximate matrix decompositions. In SIAM Rev.,
2011.
13. Chun-Liang Li, Hsuan-Tien Lin, and Chi-Jen Lu. Rivalry of Two Families of Algorithms for
Memory-Restricted Streaming PCA. In AISTATS, 2016.