ABSTRACT

In this paper, we develop a novel scheme to reduce the amount of training data required for training deep neural networks (DNNs). We first apply a partial mutual information (PMI) technique to seek the optimal DNN feature set. Then we use a correlation matching based active learning (CMAL) technique to select and label the most informative training data. We integrate these two techniques with a DNN classifier consisting of layers of unsupervised sparse autoencoders and a supervised softmax layer. Simulations are conducted over the breast cancer data set from the UCI repository to show that this scheme can drastically reduce the amount of labeled data necessary for DNN training, and can guarantee superior performance with reduced training data sets.

Index Terms— Deep neural network, feature selection, partial mutual information, active learning, classification

1. INTRODUCTION

We have seen an explosion of research in deep learning since its emergence [1]. Deep neural networks (DNNs) are distinguished from shallow-structured neural networks by their depth, i.e., the number of hidden layers. This depth makes it possible to extract more complex latent structures, to learn a hierarchy of features automatically from data, and to achieve unprecedented performance [2].

One of the challenges of DNNs is that they need an enormous amount of labeled training data. The input data of DNNs tend to be high dimensional, the optimization problems tend to be highly nonlinear and complex, and the training procedure tends to converge slowly. These factors lead to the requirement of a large amount of training data. However, in practice, it can be very difficult to obtain a sufficient amount of training data, and it is extremely labor intensive to label the data accurately.

To reduce the amount of training data needed by DNNs, we first need to reduce the effective dimension of the input data. The input data usually contain a large proportion of irrelevant or redundant features which not only degrade the accuracy of the learning models [3] but also call for an unnecessarily large training data set.

Since irrelevant or redundant features in the input data may not improve the performance of learning, many input variable selection (IVS) algorithms have been developed to choose the features that are highly informative and least redundant. IVS algorithms can be subdivided into two categories, i.e., filters and wrappers. One type of filter algorithm exploits the partial mutual information (PMI) as its optimization objective [4]. As an extension of mutual information, PMI measures the dependency among multiple variables. PMI-based feature selection is capable of resolving the feature redundancy problem without assuming any dependence structure among the features or variables. Nevertheless, IVS algorithms are conventionally studied with the aim of making the training task computationally more efficient or improving the learning performance, rather than reducing the training data.

Training data reduction is conventionally a subject of active learning (AL). AL aims to select and label the most ambiguous and informative training samples [5]. This is desirable when the training data are costly to label [6] or when the training data set is too large. An active learning algorithm that directly minimizes the expected generalization error was developed in [7]. The algorithms in [8] and [9] used stratification and vector norm maximization techniques, respectively. Sequential active learning was studied in [10][11] under a concept of integrated human and machine learning.

In this paper, we jointly apply the PMI-based feature selection technique and a correlation matching based active learning (CMAL) technique to reduce the amount of training data needed by DNNs. This scheme also improves the robustness of the DNNs in the case of reduced training data sets. The PMI technique is first applied to select the optimal feature set by discarding irrelevant and redundant features. Then, the simple yet effective CMAL technique is applied to select the most informative training data.

(This work is supported by NSF via grant CNS-1443885.)
PMI is calculated as

PMI = \frac{1}{N} \sum_{i=1}^{N} \ln \frac{f_{X',Y'}(x_i', y_i')}{f_{X'}(x_i')\, f_{Y'}(y_i')},   (4)

where x_i' = x_i − E(x_i|Z) and y_i' = y_i − E(y_i|Z) represent the residual information of x_i and y_i, i = 1, ..., N, respectively. In (4), E(·) denotes expectation, while f_{X'}(x_i'), f_{Y'}(y_i') and f_{X',Y'}(x_i', y_i') are the marginal and joint probability densities. The conditional expectation E(x|Z) can be estimated as

E(x|Z) = \frac{1}{N} \sum_{i=1}^{N} \omega_i x_i,   (5)

where, with a bandwidth parameter λ, we define

\omega_i = \frac{\exp\!\left(-\frac{\|Z - Z_i\|}{2\lambda^2}\right)}{\sum_{j=1}^{N} \exp\!\left(-\frac{\|Z - Z_j\|}{2\lambda^2}\right)}.   (6)

The bandwidth parameter is calculated as

\lambda = \left(\frac{1}{p+2}\right)^{\frac{1}{p+4}} \sigma N^{-\frac{1}{p+4}},   (7)

where σ is the standard deviation of the data sample, p is the dimension of each Z_i, and ‖Z − Z_i‖ is the Mahalanobis distance. Note that the Mahalanobis distance is defined as

\|Z - Z_i\| = (Z - Z_i)' \Sigma^{-1} (Z - Z_i),   (8)

where Σ is the sample covariance matrix of the Z_i.
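For concreteness, the following Python sketch estimates the PMI of (4), with the residuals obtained through the kernel conditional expectation of (5)-(8). It is an illustrative sketch rather than the authors' implementation: the function names are ours, the marginal and joint densities are estimated with scipy's gaussian_kde (using its default bandwidth) instead of the paper's kernel estimators, and a single pooled standard deviation is assumed for σ in (7).

import numpy as np
from scipy.stats import gaussian_kde

def conditional_expectation(v, Z, lam):
    # Kernel estimate of E(v | Z) at every sample, using the weights of (6)
    # with the Mahalanobis quadratic form of (8) and the bandwidth lam of (7).
    Sinv = np.linalg.pinv(np.atleast_2d(np.cov(Z, rowvar=False)))
    diff = Z[:, None, :] - Z[None, :, :]                    # pairwise differences, shape (N, N, p)
    d2 = np.einsum('ijk,kl,ijl->ij', diff, Sinv, diff)      # (Z - Z_i)' Sigma^{-1} (Z - Z_i)
    W = np.exp(-d2 / (2.0 * lam ** 2))
    W /= W.sum(axis=1, keepdims=True)                       # normalized kernel weights
    return W @ v

def pmi(x, y, Z):
    # Sample PMI estimate of (4): mean log-ratio of joint to marginal densities of the residuals.
    N, p = Z.shape
    lam = (1.0 / (p + 2)) ** (1.0 / (p + 4)) * np.std(Z) * N ** (-1.0 / (p + 4))  # bandwidth (7)
    xr = x - conditional_expectation(x, Z, lam)             # residual x'
    yr = y - conditional_expectation(y, Z, lam)             # residual y'
    fx = gaussian_kde(xr)(xr)                               # marginal density f_X'(x')
    fy = gaussian_kde(yr)(yr)                               # marginal density f_Y'(y')
    xy = np.vstack([xr, yr])
    fxy = gaussian_kde(xy)(xy)                              # joint density f_X',Y'(x', y')
    return float(np.mean(np.log(fxy / (fx * fy))))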
3.2. Correlation Matching based Active Learning

In our scheme, we use CMAL to select and label T of the most informative training data out of the overall N data samples X_pmi. This is conducted iteratively as follows. Consider the jth iteration, where j = 1, ..., T, in which we need to select a new data sample x_{t_j} ∈ ℜ^L. Note that in the previous iterations we have already selected j − 1 training data samples and formed the training data set (X_{j−1}, y_{j−1}), where X_{j−1} = [x_{t_1}, ..., x_{t_{j−1}}]' and y_{j−1} = [y_{t_1}, ..., y_{t_{j−1}}]'.

The new data sample x_{t_j} is then selected from the data pool 𝒳_j = {x_i | 1 ≤ i ≤ N, x_i ≠ x_{t_ℓ}, 1 ≤ ℓ ≤ j − 1}. We can simply evaluate each of the N − j + 1 data samples in 𝒳_j and choose

x_{t_j} = \arg\min_{z \in \mathcal{X}_j} \left\| \frac{1}{j}\left(X_{j-1}' X_{j-1} + z z'\right) - R \right\|^2,   (12)

where R = \frac{1}{N} \sum_{i=1}^{N} x_i x_i' is the sample correlation matrix of the overall data pool. The objective of the optimization (12) is to rapidly match the correlation of the training data set with that of the overall data pool.

With the selected x_{t_j}, we acquire its corresponding label y_{t_j} and append (x_{t_j}, y_{t_j}) to the set (X_{j−1}, y_{j−1}) to get (X_j, y_j). In this way, T training data samples X_cmal ∈ ℜ^{L×T} and the corresponding label set y ∈ ℜ^T are determined. These data will be used in the subsequent training of the DNN.
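The following sketch implements the greedy selection rule (12). The function name cmal_select, the running sum of outer products, and the explicit Frobenius norm are our own implementation choices; the labels of the returned indices are assumed to be acquired afterwards.

import numpy as np

def cmal_select(X, T):
    # Iteratively pick T rows of X whose empirical correlation matrix best matches
    # the correlation matrix R of the whole data pool, following (12).
    N, L = X.shape
    R = X.T @ X / N                                  # sample correlation matrix of the pool
    S = np.zeros((L, L))                             # running X_{j-1}' X_{j-1} of the selected set
    selected = []
    for j in range(1, T + 1):
        best_i, best_err = -1, np.inf
        for i in range(N):
            if i in selected:
                continue
            C = (S + np.outer(X[i], X[i])) / j       # candidate correlation with j samples
            err = np.linalg.norm(C - R, 'fro') ** 2  # squared mismatch, the objective of (12)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
        S += np.outer(X[best_i], X[best_i])
    return selected                                  # indices t_1, ..., t_T whose labels are queried

Each iteration scans the remaining pool once, so the overall cost is on the order of N T L^2 operations.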
Algorithm 1 DNN feature and training data optimization

procedure PMI
    Input: {X_in ∈ ℜ^{K×N} | x_i, i = 1, ..., K}, y_in ∈ ℜ^N, Z ← ∅
    while X_in ≠ ∅ do
        For y_in and each x_i, estimate the residual information y_i' and x_i' according to (5)-(8)
        Estimate the marginal and joint probability densities according to (7)-(11)
        Estimate each PMI value I with (4) and select the candidate feature x that maximizes I
        if I < I_min then exit
        Move x to Z
    Output: Optimal input variable set X_pmi = Z ∈ ℜ^{L×N}

procedure CMAL
    Input: X_pmi ∈ ℜ^{L×N}
    Select T data samples iteratively according to (12)
    Output: Optimal training data X = X_cmal ∈ ℜ^{L×T}
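To make the greedy structure of the PMI procedure explicit, here is a sketch of that loop built on the pmi() estimator sketched earlier. The threshold argument I_min plays the role of Imin in Algorithm 1, and conditioning on a constant column before any feature has been chosen is our own assumption.

import numpy as np

def select_features_pmi(X, y, I_min):
    # Greedy forward selection following the PMI procedure of Algorithm 1.
    # X: (N, K) candidate inputs; y: (N,) output; returns the indices of the chosen features.
    remaining = list(range(X.shape[1]))
    chosen = []                                       # plays the role of Z in Algorithm 1
    while remaining:
        # Residuals are taken against the already-chosen set Z;
        # a constant column is used while no feature has been chosen yet.
        Z = X[:, chosen] if chosen else np.ones((X.shape[0], 1))
        scores = [pmi(X[:, k], y, Z) for k in remaining]   # pmi() from the earlier sketch
        best = int(np.argmax(scores))
        if scores[best] < I_min:                      # stop once the best candidate is uninformative
            break
        chosen.append(remaining.pop(best))            # move the winning feature into Z
    return chosen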
We used the Breast Cancer Wisconsin (Diagnostic) data set [15] from the UCI data repository to evaluate the classification accuracy. There are altogether 569 data records. This data set has been used widely in many breast cancer detection studies. As shown in Table 1, a classifier named AIRS in [16] obtained an accuracy of 97.2%. A classifier named LS-SVM was proposed in [17] with an accuracy of 98.53%. RS-BPNN was used in [18] with an accuracy of 98.6%.

Table 2. Performance of the five classifiers compared in our simulation.

In our simulation, the training/testing ratio was set as 78%/22%. For the training data set, N = 499, T = N/2, and K = 30.
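As a rough way to exercise this setup end to end, the sketch below uses scikit-learn's bundled copy of the Wisconsin Diagnostic data (assumed here to match the UCI set [15]), a 78%/22% split, and a generic MLPClassifier standing in for the paper's stacked sparse autoencoders with a softmax output layer. The threshold I_min = 0.01 and the network sizes are arbitrary choices, so the printed accuracy is not expected to reproduce Table 2.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)                    # 569 records, K = 30 features
# 78% / 22% training/testing split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.22, random_state=0)

cols = select_features_pmi(X_tr, y_tr, I_min=0.01)            # PMI feature selection (earlier sketch)
idx = cmal_select(X_tr[:, cols], T=X_tr.shape[0] // 2)        # label only T = N/2 samples (CMAL sketch)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000)   # stand-in for the DNN classifier
clf.fit(X_tr[np.ix_(idx, cols)], y_tr[idx])
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te[:, cols])))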
Firstly, in order to evaluate the performance of CMAL, we simulated the five algorithms without turning on the PMI-based feature selection function. It can be seen from Table 2 that without feature selection, SVM Full and DNN Full achieved accuracies of 37.3% and 95.2%, respectively. By randomly selecting T training samples from the overall N training samples, the accuracy of DNN Random was only 60.0%. In contrast, when the T training samples were selected with active learning algorithms, DNN PBAL and DNN CMAL achieved accuracies of 87.1% and 95.7%, respectively.

Next, we applied the PMI technique to select the optimal feature set, which reduced the number of features from 30 to 9 (K to L). Table 2 shows that the performance of all five classifiers improved to different degrees. SVM Full and DNN Full jumped to 91.8% and 96.8%, respectively, DNN PBAL obtained an accuracy of 94.3%, while DNN Random achieved the lowest accuracy of 62.9%. In contrast, when selecting L (9) of the K (30) features randomly, the accuracies of DNN Random, SVM Full and DNN PBAL increased to 72.9%, 86.3% and 91.4%, respectively. It is important to note that with PMI, our DNN CMAL witnessed a dramatic improvement from 95.7% to over 99.9%. Figure 1 shows the excellent performance of our proposed classifier DNN CMAL with PMI.

[Fig. 1: Confusion matrix of the proposed DNN CMAL with PMI.]

5. CONCLUSION

In this paper, we developed a novel scheme that reduces the amount of training data necessary for DNNs by applying the partial mutual information technique to select the optimal feature set and by applying the correlation matching based active learning technique to select the most informative training data. Simulations over the UCI breast cancer data set show that with only half of the original training data and 9 out of the 30 features, our proposed scheme outperforms many other algorithms.
6. REFERENCES

[1] Yoshua Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.

[2] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent, "Unsupervised feature learning and deep learning: A review and new perspectives," CoRR, abs/1206.5538, vol. 1, 2012.

[3] Lei Yu and Huan Liu, "Feature selection for high-dimensional data: A fast correlation-based filter solution," in ICML, 2003, vol. 3, pp. 856–863.

[4] Robert J. May, Holger R. Maier, Graeme C. Dandy, and T. M. K. Gayani Fernando, "Non-linear variable selection for artificial neural networks using partial mutual information," Environmental Modelling & Software, vol. 23, no. 10, pp. 1312–1326, 2008.

[5] Scott Doyle, James Monaco, Michael Feldman, John Tomaszewski, and Anant Madabhushi, "An active learning based classification strategy for the minority class problem: application to histopathology annotation," BMC Bioinformatics, vol. 12, no. 1, pp. 424, 2011.

[6] Xiaohua Li and Jian Zheng, "Active learning for regression with correlation matching and labeling-error suppression," 2015.

[7] Masashi Sugiyama and Shinichi Nakajima, "Pool-based active learning in approximate linear regression," Machine Learning, vol. 75, no. 3, pp. 249–274, 2009.

[13] Jun Xu, Lei Xiang, Renlong Hang, and Jianzhong Wu, "Stacked sparse autoencoder (SSAE) based framework for nuclei patch classification on breast cancer histopathology," in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). IEEE, 2014, pp. 999–1002.

[14] Shotaro Akaho, "Conditionally independent component analysis for supervised feature extraction," Neurocomputing, vol. 49, no. 1, pp. 139–150, 2002.

[15] M. Lichman, "UCI machine learning repository," 2013.

[16] Donald E. Goodman, Lois Boggess, and Andrew Watkins, "Artificial immune system classification of multiple-class problems," in Proceedings of the Artificial Neural Networks in Engineering (ANNIE), vol. 2, pp. 179–183, 2002.

[17] Kemal Polat and Salih Güneş, "Breast cancer diagnosis using least square support vector machine," Digital Signal Processing, vol. 17, no. 4, pp. 694–701, 2007.

[18] Kindie Biredagn Nahato, Khanna Nehemiah Harichandran, and Kannan Arputharaj, "Knowledge mining from clinical datasets using rough sets and backpropagation neural network," Computational and Mathematical Methods in Medicine, vol. 2015, 2015.

[19] Ahmed M. Abdel-Zaher and Ayman M. Eldeib, "Breast cancer classification using deep belief networks," Expert Systems with Applications, vol. 46, pp. 139–144, 2016.