
TRAINING DATA REDUCTION IN DEEP NEURAL NETWORKS WITH PARTIAL MUTUAL INFORMATION BASED FEATURE SELECTION AND CORRELATION MATCHING BASED ACTIVE LEARNING

Jian Zheng, Wei Yang, and Xiaohua Li

State University of New York at Binghamton


Department of ECE, Binghamton, NY 13902
{jzheng65, wyang17, xli}@binghamton.edu

ABSTRACT

In this paper, we develop a novel scheme to reduce the amount of training data required for training deep neural networks (DNNs). We first apply a partial mutual information (PMI) technique to seek the optimal DNN feature set. Then we use a correlation matching based active learning (CMAL) technique to select and label the most informative training data. We integrate these two techniques with a DNN classifier consisting of layers of unsupervised sparse autoencoders and a supervised softmax layer. Simulations are then conducted over the breast cancer data set from the UCI repository to show that this scheme can drastically reduce the amount of labeled data necessary for DNN training and can guarantee superior performance with reduced training data sets.

Index Terms— Deep neural network, feature selection, partial mutual information, active learning, classification

1. INTRODUCTION

We have seen an explosion of research in deep learning since its emergence [1]. Deep neural networks (DNNs) are distinguished from shallow-structured neural networks by their depth, i.e., the number of hidden layers. This depth makes it possible to extract more complex latent structures, to learn a hierarchy of features automatically from data, and to achieve unprecedented performance [2].

One of the challenges of DNNs is that they need an enormous amount of labeled training data. The input data of DNNs tend to be high dimensional, the optimization problems tend to be highly nonlinear and complex, and the training procedure tends to converge slowly. These factors lead to the requirement of a large amount of training data. However, in practice, it can be very difficult to obtain a sufficient amount of training data, and it is extremely labor intensive to label the data accurately.

To reduce the amount of training data needed by DNNs, we first need to reduce the effective dimension of the input data. The input data usually have a large proportion of irrelevant or redundant features, which not only degrade the accuracy of the learning models [3] but also call for an unnecessarily large training data set.

Since irrelevant or redundant features in the input data may not improve the performance of learning, many input variable selection (IVS) algorithms have been developed to choose the features that are highly informative and least redundant. The IVS algorithms can be subdivided into two categories, i.e., filters and wrappers. One type of filter algorithm exploits partial mutual information (PMI) as the optimization objective [4]. As an extension of mutual information, PMI measures the dependency among multiple variables. PMI-based feature selection is capable of resolving the feature redundancy problem without assuming any dependence structure among the features or variables. Nevertheless, the IVS algorithms are conventionally studied in terms of making the training task computationally more efficient or improving the learning performance, rather than training data reduction.

Training data reduction is conventionally a subject of active learning (AL). AL aims to select and label the most ambiguous and informative training samples [5]. This is desirable when the training data are costly to label [6] or the training data set is too big. An active learning algorithm that directly minimizes the expected generalization error was developed in [7]. Algorithms in [8] and [9] used stratification and vector norm maximization techniques, respectively. Sequential active learning was studied in [10][11] under a concept of integrated human and machine learning.

In this paper, we jointly apply the PMI-based feature selection technique and a correlation matching based active learning (CMAL) technique to reduce the amount of training data needed by DNNs. This scheme also improves the robustness of the DNNs in case of reduced training data sets. The PMI technique is first applied to select the optimal feature set by abandoning irrelevant and redundant features. Then, the simple yet effective CMAL technique is applied to select the most informative training data. We use these two techniques to construct a new DNN classification scheme, and test it with a hand-crafted data set.

This work is supported by NSF via grant CNS-1443885.



The remainder of this paper is organized as follows. In Section 2, the DNN classification model is introduced. The integration of PMI and CMAL with the DNN is presented in Section 3. Simulations are presented in Section 4 and conclusions are given in Section 5.

2. DNN CLASSIFICATION MODEL

In this paper, we consider a DNN classification model that consists of layers of sparse autoencoders and a softmax layer, followed by a fine-tuning procedure. The sparse autoencoders are trained in an unsupervised way to learn features of different levels from the input data. The softmax layer is trained in a supervised way for classification. Fine tuning is conducted via backpropagation to further improve the performance.

2.1. Unsupervised Training of Sparse Autoencoders

An autoencoder is a neural network that learns features from unlabeled data. Given an unlabeled training data set {x_1, x_2, ..., x_N}, the outputs y_i of an autoencoder are set to be the same as the inputs x_i, i.e., y_i = x_i, where x_i, y_i ∈ ℜ^K, K is the number of features, and N is the number of training data records.

The autoencoder tries to learn a function that approximates the identity function, whose output is similar to the input. For the input data vector x_i ∈ ℜ^K, a single-layer autoencoder maps it to a hidden representation through h(x_i) = f(W_1 x_i + b_1), where f(z) is a non-linear activation function (e.g., f(z) = 1/(1 + exp(−z))), h(x_i) ∈ ℜ^M is the neuron activation vector, W_1 ∈ ℜ^{M×K} is a weight matrix, b_1 ∈ ℜ^M is a bias vector, and M is the number of hidden units. The reconstructed output of the autoencoder is x̂_i = f(W_2 h(x_i) + b_2), where b_2 ∈ ℜ^K, W_2 ∈ ℜ^{K×M}, and we have x̂_i ≈ x_i.

With all the training data x_i, backpropagation can be used to adapt the weight matrices and bias vectors so as to minimize the reconstruction error

    J(W, b) = \frac{1}{N} \sum_{i=1}^{N} \| x_i - \hat{x}_i \|^2.    (1)

If M > K, the autoencoder would not compress the input information but would preserve all of it trivially. To avoid this trivial solution, sparsity constraints can be imposed on the hidden units to reduce the amount of information put into the hidden layer. One way to realize the sparsity constraint is to apply the penalty regularization term α Σ_{j=1}^{M} KL(ρ‖ρ̂_j), which adds together the Kullback-Leibler divergences [12]

    KL(\rho \,\|\, \hat{\rho}_j) = \rho \log \frac{\rho}{\hat{\rho}_j} + (1-\rho) \log \frac{1-\rho}{1-\hat{\rho}_j}.    (2)

In (2), ρ̂_j = (1/N) Σ_{i=1}^{N} h_j(x_i) is the average activation of the jth hidden unit, ρ is the sparsity level, and α controls the impact of the sparsity regularizer.

In addition, a weight decay term β‖W‖_2^2 can also be used to decrease the magnitude of the network weights and mitigate overfitting, where ‖W‖_2^2 = tr(W′W), (·)′ denotes transpose, and β controls the impact of this weight regularizer [13].

As a result, the overall cost function of a sparse autoencoder becomes

    J_{sparse}(W, b) = J(W, b) + \alpha \sum_{j=1}^{M} KL(\rho \,\|\, \hat{\rho}_j) + \beta \|W\|_2^2.    (3)

In our model, multiple such sparse autoencoders are applied sequentially to learn more and more abstract and compressed features automatically from the input data. Specifically, let X ∈ ℜ^{K×N} be the input training data set. Each sparse autoencoder ℓ, where ℓ = 1, ..., L, is trained, and its output feature set X_ℓ ∈ ℜ^{M_ℓ×N} is used as the input to the next autoencoder ℓ+1 to obtain a more abstract and compressed feature set X_{ℓ+1} ∈ ℜ^{M_{ℓ+1}×N}. Note that M_ℓ is the number of hidden units of sparse autoencoder ℓ, and we have M_ℓ > M_{ℓ+1}.
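To make the cost function concrete, the following NumPy sketch evaluates (1)-(3) for a single layer. It is an illustration of the equations above rather than the authors' implementation; the values of rho, alpha, and beta are placeholders, and the weight decay is applied to both weight matrices here, a choice the text leaves unspecified.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, X, rho=0.05, alpha=3.0, beta=1e-3):
    """Evaluate the sparse autoencoder cost (3) for one layer.
    X: (K, N) data matrix; W1: (M, K), b1: (M,), W2: (K, M), b2: (K,).
    rho, alpha, beta are illustrative values, not taken from the paper."""
    N = X.shape[1]
    H = sigmoid(W1 @ X + b1[:, None])          # hidden activations h(x_i), shape (M, N)
    X_hat = sigmoid(W2 @ H + b2[:, None])      # reconstructions, shape (K, N)

    recon = np.sum((X - X_hat) ** 2) / N       # reconstruction error J(W, b), eq. (1)

    rho_hat = H.mean(axis=1)                   # average activation of each hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)    # KL sparsity penalty, eq. (2) summed over j
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    decay = np.sum(W1 ** 2) + np.sum(W2 ** 2)  # weight decay ||W||_2^2 (both matrices here)

    return recon + alpha * kl + beta * decay   # overall cost, eq. (3)
```

In practice the gradients of this cost with respect to (W_1, b_1, W_2, b_2) would be obtained by backpropagation, as noted after (1).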
2.2. Classification and Fine-tuning

After extracting features at various levels with the L sparse autoencoders in the unsupervised way, a supervised softmax classification layer is added. This softmax layer takes the last extracted feature set X_L and the corresponding target set y ∈ ℜ^N as input, which means this layer is trained with the labeled training data set. Therefore, the autoencoders are used to extract more and more abstract features, while the softmax layer is used to classify the data in a supervised manner.

We stack all the L + 1 trained layers together to form the overall DNN that is used to conduct the classification tasks. The output of the DNN can be further improved with a fine-tuning procedure, which is conducted by applying backpropagation over the multilayer DNN iteratively.
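As a structural sketch of the layer-wise training described in Sections 2.1 and 2.2, the loop below trains the L autoencoders greedily and then a softmax layer on top. The helpers train_sparse_autoencoder and train_softmax are hypothetical placeholders (the paper does not specify the optimizers), and the fine-tuning step is only indicated in a comment.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stacked_dnn(X, y, hidden_sizes, train_sparse_autoencoder, train_softmax):
    """Greedy layer-wise training of the model in Section 2.
    X: (K, N) input data; hidden_sizes: decreasing list [M_1, ..., M_L];
    the two training callbacks are assumed, not defined in the paper."""
    features = X                                        # X_0, the raw input features
    encoders = []
    for M in hidden_sizes:                              # M_1 > M_2 > ... > M_L (Section 2.1)
        W1, b1 = train_sparse_autoencoder(features, M)  # e.g., minimize the cost (3)
        encoders.append((W1, b1))
        features = sigmoid(W1 @ features + b1[:, None]) # X_l, fed to the next autoencoder
    softmax_params = train_softmax(features, y)         # supervised top layer (Section 2.2)
    return encoders, softmax_params                     # fine-tuning via backprop would follow
```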

3. INTEGRATING PMI AND CMAL WITH DNN

In order to reduce the amount of training data for the DNN, in this section we integrate the DNN scheme of Section 2 with a PMI-based feature selection technique and a CMAL-based training data selection technique.

3.1. Partial Mutual Information for Feature Selection

Let Z be a set of pre-selected features of the input data. For the input discrete variable X (with realizations x_i) and the discrete variable Y (with realizations y_i), the corresponding PMI is calculated as

    PMI = \frac{1}{N} \sum_{i=1}^{N} \ln \left[ \frac{f_{X',Y'}(x_i', y_i')}{f_{X'}(x_i')\, f_{Y'}(y_i')} \right],    (4)

where x_i' = x_i − E(x_i|Z) and y_i' = y_i − E(y_i|Z) represent the residual information of x_i and y_i, i = 1, ..., N, respectively. In (4), E(·) denotes expectation, while f_{X'}(x_i'), f_{Y'}(y_i') and f_{X',Y'}(x_i', y_i') are the marginal and joint probability densities. The conditional expectation E(x|Z) can be estimated as

    E(x|Z) = \frac{1}{N} \sum_{i=1}^{N} \omega_i x_i,    (5)

where, with a bandwidth parameter λ, we define

    \omega_i = \frac{\exp\left(-\frac{\|Z - Z_i\|}{2\lambda^2}\right)}{\sum_{j=1}^{N} \exp\left(-\frac{\|Z - Z_j\|}{2\lambda^2}\right)}.    (6)

The bandwidth parameter is calculated as

    \lambda = \left(\frac{1}{p+2}\right)^{\frac{1}{p+4}} \sigma N^{-\frac{1}{p+4}},    (7)

where σ is the standard deviation of the data sample, p is the dimension of each Z_i, and ‖Z − Z_i‖ is the Mahalanobis distance. Note that the Mahalanobis distance is defined as

    \|Z - Z_i\| = (Z - Z_i)' \Sigma^{-1} (Z - Z_i),    (8)

where Σ is the sample covariance matrix.

Kernel density estimation (KDE) is used to estimate the marginal probability density

    \hat{f}(x) = \frac{1}{N} \sum_{i=1}^{N} K_\lambda(x - x_i),    (9)

where

    K_\lambda(x - x_i) = \frac{1}{(\sqrt{2\pi}\lambda)^p \sqrt{|\Sigma|}} \exp\left(-\frac{\|x - x_i\|}{2\lambda^2}\right)    (10)

is the Gaussian kernel. Similarly, the joint probability density defined in [14] is estimated as

    \hat{f}(x, y) = \frac{1}{N} \sum_{i=1}^{N} K_\lambda(x - x_i)\, K_{\lambda_y}(y - y_i).    (11)

The procedure of using PMI as a metric to reduce the number of features is shown in Algorithm 1. The features are selected one by one. The next most informative feature is selected as the one that gives the maximum PMI, taking into consideration the features that have already been selected. This process is performed iteratively until a stopping criterion is met. With the selected features, the input data set X can be rewritten as X_pmi.
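As an illustration of the estimators (4)-(11), the NumPy sketch below computes a sample PMI value. Several details are simplified and should not be read as the authors' exact procedure: squared Euclidean distances replace the Mahalanobis form (8), the normalized weights of (6) are applied directly (without an additional 1/N factor), the bandwidth (7) is given a small floor so degenerate cases stay well defined, and a single scalar bandwidth per variable is used.

```python
import numpy as np

def bandwidth(u, p=1):
    """Gaussian reference bandwidth of (7), with a small floor for degenerate data."""
    N = u.shape[0]
    lam = (1.0 / (p + 2)) ** (1.0 / (p + 4)) * np.std(u) * N ** (-1.0 / (p + 4))
    return max(lam, 1e-12)

def conditional_residual(v, Z, lam):
    """v - E(v|Z) with the kernel-weighted estimate of (5)-(6).
    Squared Euclidean distances are used here instead of the Mahalanobis form (8)."""
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=2)
    W = np.exp(-d2 / (2.0 * lam ** 2))
    W = W / W.sum(axis=1, keepdims=True)        # weights sum to one over each row
    return v - W @ v

def gaussian_kernel(u, lam):
    """Pairwise 1-D Gaussian kernel values, cf. (10) with p = 1."""
    d2 = (u[:, None] - u[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lam ** 2)) / (np.sqrt(2.0 * np.pi) * lam)

def pmi(x, y, Z):
    """Sample PMI of (4) between feature x and target y, given selected features Z.
    x, y: (N,) arrays; Z: (N, q) array of already selected features."""
    lam_z = bandwidth(Z, p=Z.shape[1])
    xr = conditional_residual(x, Z, lam_z)      # residual information x'
    yr = conditional_residual(y, Z, lam_z)      # residual information y'
    lx, ly = bandwidth(xr), bandwidth(yr)
    Kx, Ky = gaussian_kernel(xr, lx), gaussian_kernel(yr, ly)
    fx, fy = Kx.mean(axis=1), Ky.mean(axis=1)   # marginal KDEs, eq. (9)
    fxy = (Kx * Ky).mean(axis=1)                # joint product-kernel KDE, eq. (11)
    return np.mean(np.log(fxy / (fx * fy)))     # eq. (4)
```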
3.2. Correlation Matching based Active Learning

In our scheme, we use CMAL to select and label the T most informative training data samples out of the overall N data samples in X_pmi. This is conducted iteratively as follows. Consider the jth iteration, where j = 1, ..., T, in which we need to select a new data sample x_{t_j} ∈ ℜ^L. Note that in the previous iterations we have already selected j − 1 training data samples and formed the training data set (X_{j−1}, y_{j−1}), where X_{j−1} = [x_{t_1}, ..., x_{t_{j−1}}]′ and y_{j−1} = [y_{t_1}, ..., y_{t_{j−1}}]′.

The new data sample x_{t_j} is then selected from the data pool X_j = {x_i | 1 ≤ i ≤ N, x_i ≠ x_{t_ℓ}, 1 ≤ ℓ ≤ j − 1}. We can simply evaluate each of the N − j + 1 data samples in X_j and choose

    x_{t_j} = \arg\min_{z \in X_j} \left\| \frac{1}{j}\left( X_{j-1}' X_{j-1} + z z' \right) - R \right\|^2,    (12)

where R = \frac{1}{N} \sum_{i=1}^{N} x_i x_i' is the sample correlation matrix of the overall data pool. The objective of the optimization (12) is to rapidly match the correlation of the training data set with that of the overall data pool.

With the selected x_{t_j}, we acquire its corresponding label y_{t_j} and append (x_{t_j}, y_{t_j}) to the set (X_{j−1}, y_{j−1}) to get (X_j, y_j). In this way, T training data samples X_cmal ∈ ℜ^{L×T} and the corresponding data label set y ∈ ℜ^T are determined. These data will be used in the subsequent training of the DNN.
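The greedy search of (12) translates almost directly into code. The sketch below assumes the Frobenius norm for the matrix norm in (12), keeps a running sum for X'_{j−1}X_{j−1} so that each candidate costs only one outer product, and returns the indices of the T selected samples; their labels would then be acquired as described above.

```python
import numpy as np

def cmal_select(X, T):
    """Correlation matching based active learning, eq. (12).
    X: (N, L) pool of candidate samples (rows) after PMI feature selection.
    Returns the indices t_1, ..., t_T of the selected training samples."""
    N, L = X.shape
    R = X.T @ X / N                         # sample correlation matrix of the pool
    C = np.zeros((L, L))                    # running X'_{j-1} X_{j-1}
    selected = []
    for j in range(1, T + 1):
        best_idx, best_err = None, np.inf
        for i in range(N):
            if i in selected:               # only consider samples not chosen yet
                continue
            err = np.linalg.norm((C + np.outer(X[i], X[i])) / j - R, 'fro') ** 2
            if err < best_err:
                best_idx, best_err = i, err
        selected.append(best_idx)
        C += np.outer(X[best_idx], X[best_idx])
    return selected
```

For the data set sizes used in Section 4 (N of a few hundred, T = N/2), this brute-force double loop is inexpensive; a set-based membership test would help for larger pools.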

3.3. DNN Classification Algorithm

The overall procedure of applying the PMI and CMAL techniques to optimize the amount of training data is shown in Algorithm 1. The PMI-based technique is first applied to select the most informative and least redundant input data features. Then the CMAL-based technique is applied to select and label the most informative training data. The reduced training data set with the optimized feature set serves as the input to train the multilayer DNN.

The PMI iteration continues until the performance of the overall DNN scheme starts to degrade when adding the last selected feature, whose PMI value is I_min.

Algorithm 1 DNN feature and training data optimization
 1: procedure PMI
 2:   Input: {X_in ∈ ℜ^{K×N} | x_i, i = 1, ..., K}, y_in ∈ ℜ^N, Z → ∅
 3:   While X_in ≠ ∅
 4:     For y_in and each x_i, estimate the residual information y_i' and x_i' according to (5)-(8)
 6:     Estimate the marginal and joint probability densities according to (7)-(11)
 8:     Estimate each PMI value I with (4) and select the candidate feature x that maximizes I
10:     if I < I_min then Exit
12:     Move x to Z
13:   Output: Optimal input variable set X_pmi = Z ∈ ℜ^{L×N}
14: procedure CMAL
15:   Input: X_pmi ∈ ℜ^{L×N}
16:   Select T data samples iteratively according to (12)
17:   Output: Optimal training data X = X_cmal ∈ ℜ^{L×T}
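Tying the pieces together, the procedure PMI of Algorithm 1 can be written as a greedy loop such as the sketch below, where pmi is the function sketched in Section 3.1 and the CMAL step would then call cmal_select from Section 3.2. How the very first feature is scored (while Z is still empty) is not specified in the paper; here a constant column is used so that the residual is simply taken about the sample mean, which is an assumption.

```python
import numpy as np

def select_features_pmi(X, y, I_min, pmi):
    """Greedy feature selection (procedure PMI of Algorithm 1).
    X: (N, K) input data, y: (N,) targets; pmi(x, y, Z) is the PMI sketch above.
    Returns the indices of the selected features (the set Z)."""
    N, K = X.shape
    remaining = list(range(K))
    Z_idx = []
    while remaining:
        # with Z empty, condition on a constant column so the residual is x - mean(x)
        Z = X[:, Z_idx] if Z_idx else np.ones((N, 1))
        scores = [pmi(X[:, i], y, Z) for i in remaining]
        best = int(np.argmax(scores))
        if scores[best] < I_min:            # stopping rule of Algorithm 1
            break
        Z_idx.append(remaining.pop(best))   # move the feature into Z
    return Z_idx
```

Under these assumptions, the full Algorithm 1 amounts to Z = select_features_pmi(X, y, I_min, pmi) followed by cmal_select(X[:, Z], T).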
4. SIMULATIONS

We have conducted intensive simulations to evaluate the performance of our proposed scheme, in particular the DNN classification accuracy under reduced training data sets. In the simulations, we compared our algorithm DNN CMAL with four other algorithms: SVM Full and DNN Full, which applied SVM (support vector machine) and DNN without training data optimization; DNN Random, which selected training data randomly; and DNN PBAL, which applied DNN and the active learning algorithm of [7].

We used the Breast Cancer Wisconsin (Diagnostic) data set [15] from the UCI data repository to evaluate the classification accuracy. There are altogether 569 data records. This data set has been used widely in many breast cancer detection studies. As shown in Table 1, a classifier named AIRS in [16] obtained an accuracy of 97.2%. A classifier named LS-SVM was proposed in [17] with an accuracy of 98.53%. RS-BPNN was used in [18] with an accuracy of 98.6%. More recently, an algorithm integrating an unsupervised deep belief network (DBN) with supervised backpropagation [19] achieved an accuracy of 99.68%.

Table 1. Performance of some classifiers in the literature.
  Methods:   AIRS    LS-SVM   RS-BPNN   DBN
  Accuracy:  97.2%   98.53%   98.6%     99.68%

Table 2. Performance of the five classifiers compared in our simulation.
  Methods      T      Full    Random   PMI
  SVM Full     N      37.3%   86.3%    91.8%
  DNN Full     N      95.2%   95.2%    96.8%
  DNN Random   0.5N   60.0%   72.9%    62.9%
  DNN PBAL     0.5N   87.1%   91.4%    94.3%
  DNN CMAL     0.5N   95.7%   95.7%    ≥99.9%

In our simulation, the training/testing ratio was set as 78%/22%. For the training data set, N = 499, T = N/2, and K = 30.

Firstly, in order to evaluate the performance of CMAL, we simulated the five algorithms without turning on the PMI-based feature selection function. It can be seen from Table 2 that, without feature selection, SVM Full and DNN Full achieved accuracies of 37.3% and 95.2%, respectively. By randomly selecting T training samples from the overall N training samples, the accuracy of DNN Random was only 60.0%. In contrast, selecting T training samples with active learning algorithms, DNN PBAL and DNN CMAL achieved accuracies of 87.1% and 95.7%, respectively.

Next, we applied the PMI technique to select the optimal feature set, which reduced the number of features from 30 to 9 (K to L). Table 2 shows that the performance of all five classifiers improved to different degrees. SVM Full and DNN Full jumped to 91.8% and 96.8%, respectively, DNN PBAL obtained an accuracy of 94.3%, and DNN Random achieved the lowest accuracy of 62.9%. In contrast, when selecting L (9) features out of the K (30) features randomly, the accuracies of DNN Random, SVM Full and DNN PBAL increased to 72.9%, 86.3% and 91.4%, respectively. It is important to note that with PMI, our DNN CMAL witnessed a dramatic improvement from 95.7% to over 99.9%. Figure 1 shows the excellent performance of our proposed classifier DNN CMAL with PMI.

[Fig. 1 (confusion matrix of DNN CMAL with PMI on the test set): 26 class-1 samples (37.1%) and 44 class-2 samples (62.9%), all classified correctly.]

Fig. 1. Performance of the proposed DNN scheme with the PMI and CMAL techniques.

5. CONCLUSIONS

In this paper, we developed a novel scheme that reduces the amount of training data necessary for a DNN by applying the partial mutual information technique to select the optimal feature set and by applying the correlation matching based active learning technique to select the most informative training data. Simulations over the UCI breast cancer data set show that, with only half of the original training data and 9 out of 30 features, our proposed scheme outperforms many other algorithms.

6. REFERENCES

[1] Yoshua Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[2] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent, "Unsupervised feature learning and deep learning: A review and new perspectives," CoRR, abs/1206.5538, vol. 1, 2012.

[3] Lei Yu and Huan Liu, "Feature selection for high-dimensional data: A fast correlation-based filter solution," in ICML, 2003, vol. 3, pp. 856–863.

[4] Robert J. May, Holger R. Maier, Graeme C. Dandy, and T.M.K. Gayani Fernando, "Non-linear variable selection for artificial neural networks using partial mutual information," Environmental Modelling & Software, vol. 23, no. 10, pp. 1312–1326, 2008.

[5] Scott Doyle, James Monaco, Michael Feldman, John Tomaszewski, and Anant Madabhushi, "An active learning based classification strategy for the minority class problem: application to histopathology annotation," BMC Bioinformatics, vol. 12, no. 1, pp. 424, 2011.

[6] Xiaohua Li and Jian Zheng, "Active learning for regression with correlation matching and labeling-error suppression," 2015.

[7] Masashi Sugiyama and Shinichi Nakajima, "Pool-based active learning in approximate linear regression," Machine Learning, vol. 75, no. 3, pp. 249–274, 2009.

[8] Sivan Sabato and Remi Munos, "Active regression by stratification," in Advances in Neural Information Processing Systems, 2014, pp. 469–477.

[9] Carlos Riquelme, Baosen Zhang, and Ramesh Johari, "Online active linear regression via thresholding," arXiv preprint arXiv:1602.02845, 2016.

[10] Xiaohua Li and Jian Zheng, "Joint machine learning and human learning design with sequential active learning and outlier detection for linear regression problems," in 2016 Annual Conference on Information Science and Systems (CISS). IEEE, 2016, pp. 407–411.

[11] Xiaohua Li, Yu Chen, and Kai Zeng, "Integration of machine learning and human learning for training optimization in robust linear regression," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 2613–2617.

[12] Andrew Ng, "Sparse autoencoder," CS294A Lecture Notes, vol. 72, pp. 1–19, 2011.

[13] Jun Xu, Lei Xiang, Renlong Hang, and Jianzhong Wu, "Stacked sparse autoencoder (SSAE) based framework for nuclei patch classification on breast cancer histopathology," in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). IEEE, 2014, pp. 999–1002.

[14] Shotaro Akaho, "Conditionally independent component analysis for supervised feature extraction," Neurocomputing, vol. 49, no. 1, pp. 139–150, 2002.

[15] M. Lichman, "UCI machine learning repository," 2013.

[16] Donald E. Goodman, Lois Boggess, and Andrew Watkins, "Artificial immune system classification of multiple-class problems," in Proceedings of the Artificial Neural Networks in Engineering (ANNIE), vol. 2, pp. 179–183, 2002.

[17] Kemal Polat and Salih Güneş, "Breast cancer diagnosis using least square support vector machine," Digital Signal Processing, vol. 17, no. 4, pp. 694–701, 2007.

[18] Kindie Biredagn Nahato, Khanna Nehemiah Harichandran, and Kannan Arputharaj, "Knowledge mining from clinical datasets using rough sets and backpropagation neural network," Computational and Mathematical Methods in Medicine, vol. 2015, 2015.

[19] Ahmed M. Abdel-Zaher and Ayman M. Eldeib, "Breast cancer classification using deep belief networks," Expert Systems with Applications, vol. 46, pp. 139–144, 2016.

