Zhao Et Al. - 2017
Funding: This work was funded by the National Natural Science Foundation of China (no. 61572078), the Natural Science Foundation of Beijing Municipality (CN) (no. 4152027 and no. 4152028), and the Open Research Fund of the Ministry of Education Engineering Research Center of Virtual Reality Application (MEOBNUEVRA201601). It was also partially supported by grants from Projects in the National Science & Technology Pillar Program during the Twelfth Five-year Plan Period (no. 2013BAI01B03), the National High-tech R&D Program (863 Program) of China (no. 2015AA020506) and the Program for New Century Excellent Talents in University (NCET-13-0051).

Competing interests: The authors have declared that no competing interests exist.

Craniofacial reconstruction is helpful in assisting experts, witnesses and victims' relatives in recognizing the faces of victims, providing an identity and solving a criminal case more quickly. In the archaeological field, it has benefits in portraying ancient people with more realistic faces, improving the reconstruction results and providing an important reference for archaeological research. In the medical cosmetic surgery field, it is useful for predicting the face remediation effect and providing reference data. It also has an important role in promoting the development of anthropology, in that anthropologists and biologists can learn about changes in the process of human growth and provide scientific support for the study of human evolution.

Reconstructed three-dimensional faces have their own characteristics and inaccuracies, owing to the use of different reconstruction techniques, and their similarity to the original faces directly reflects the pros and cons of the reconstruction methods used. The evaluation of craniofacial reconstruction results has therefore become an important issue, and the current research was motivated by the paucity of research on the evaluation and analysis of reconstruction results.
The relevant research has focused on the field of the similarity of three-dimensional objects
and face recognition. The study of three-dimensional object similarity has focused primarily
on the comparison of two objects with entirely different shapes, which is relatively easy. How-
ever, the reconstructed craniofacial model and the original model are very similar in their
overall shapes; therefore, many of the existing methods used for three-dimensional object sim-
ilarity are not suitable, and the approaches appropriate for craniofacial similarity analysis are
needed. Face recognition (FR) determines an identity on the basis of facial characteristics. The
face features are usually extracted and used to recognize a face from a database, a given image
or a video scene [1]; that is, the task is only to find a given face. In craniofacial similarity measurement, however, the main focus is on analysing whether an area is similar or dissimilar in shape, a more detailed and deeper question than face recognition. Therefore, craniofacial similarity evaluation requires dedicated research.
In this paper, we propose the use of sparse PCA (SPCA) for 3D craniofacial similarity analy-
sis, a method that can not only determine the similarity between two craniofacial models, but
also identify regions of high similarity, which is important for improving the reconstruction
effect. In addition, the areas that are important for craniofacial similarity analysis can be deter-
mined from the large amounts of data. This paper thus provides valuable information that
may guide further studies.
Related work
Craniofacial models are a type of three-dimensional model. However, methods for evaluating the similarity of three-dimensional objects are mainly based on the geometry of an object, including its contour shape, topology and visual projection. Because the geometry of 3D faces is substantially identical, many similarity assessment methods for 3D objects are not applicable to evaluating 3D faces. To date, most scholars have evaluated craniofacial reconstruction results using subjective methods [2–7], assessing craniofacial similarity by recruiting a number of testers and designing different evaluation strategies. Although this type of evaluation method is consistent with human cognitive theory, it requires a great deal of manpower and time, and the accuracy of the results is influenced by subjective human factors.
There are few objective evaluation methods for craniofacial reconstruction results. Some
scholars have conducted preliminary explorations. Ip et al. [8] have presented a technique for
3D head model retrieval that combines a 3D shape representation scheme and hierarchical
facial region similarity. The proposed shape similarity measure is based on comparing the 3D
model shape signatures computed from the extended Gaussian images (EGI) of the polygon
normal. First, the normal vector of each polygon of a head is mapped onto the Gaussian
sphere, which is divided into cells, each of which corresponds to a range of orientations. Then,
the cells are mapped onto a rectangular array to form a 1D shape signature. Finally, the total number of normals belonging to each cell on the rectangular array is counted, and the difference between any two signatures is revealed with a histogram. Wong et al. [9] have compared
craniofacial geometries by taking the directions of the normal vectors as random variables and
considering the statistical distribution of different cells as a probability density function. Feng
et al. [10] have used a relative angle-context distribution (RACD) to compare two sets of craniofacial data. They defined the probability density function of the relative angle-context distribution and counted the number of relative angles in different intervals. To address the computational instability and long computing time of RACD, Zhu et al. [11] have extended the RACD to the radius-relative angle-context distribution (BRACD) algorithm by
defining a set of concentric spherical shells and dividing the three-dimensional craniofacial
points into different sphere ranges to calculate the relative angle in each partition for crani-
ofacial similarity comparison. They have also proposed a method that uses the distances for
different types of craniofacial feature points [12] and uses the principal warps method [13] to
measure craniofacial similarity. Li et al. [14] have put forward a similarity measure method based on iso-geodesic stripes. Zhao et al. [15] have proposed a global and local evaluation method for craniofacial reconstruction based on a geodesic network. They defined the weighted average of the shape index values in a neighbourhood as the feature of one vertex and took the absolute value of the correlation coefficient of the features of all the corresponding geodesic network vertices between two models as their similarity. These methods mainly analyse craniofacial reconstruction results from a geometric perspective.
Much research has focused on the field of face recognition, from 2D face recognition to 3D
face recognition. Next, we provide a brief overview of 3D face recognition methods because
they serve as a reference for craniofacial similarity evaluation.
Feature-based methods
Feature-based methods recognize a face by extracting local or global features, such as curvature, curve, and depth value. Previous research on three-dimensional (3D) face recognition focused mainly on curvature analysis. Lee and Milios [16], Gordon et al. [17], and Tanaka et al. [18] have analysed the mean curvature and Gaussian curvature or principal curvature. Later, Nagamine et al. [19] proposed a method of matching face curves for 3D face recognition. Haar
et al. [20] have computed the similarity of 3D faces by using a set of eight contour curves
extracted according to the geodesic distance. Berretti et al. [21] have evaluated the similarity by
using a three-dimensional model spatial distribution vector on equal-width iso-geodesic facial
stripes. Lee et al. [22] have proposed a 3D face recognition method using multiple statistical
features for the local depth information. Jahanbin et al. [23] have combined the depth of geodesic lines for identification. Recently, Smeets et al. [24] have used meshSIFT features in 3D face recognition. Berretti et al. [25] have extracted SIFT key points on the face scan and connected them into a contour for 3D face recognition. Drira et al. [26] and Kurtek et al. [27] have
extracted the radial curves from facial surfaces and used elastic shape analysis for 3D face rec-
ognition. The face recognition efficiency of these methods is affected by the number or type of
the characteristics extracted from the face.
Matching-based methods

Matching-based methods are usually divided into two steps: alignment and similarity calculation. Achermann
et al. [28] have used the Hausdorff distance to measure the similarity between point clouds of
human faces. Pan et al. [29] have used a one-way partial Hausdorff as a similarity metric. Lee
et al. [30] have used a depth value as a weight when using the Hausdorff distance for 3D face recognition. Chua et al. [31] have used the iterative closest point (ICP) algorithm for precise alignment of three-dimensional face models. Cook et al. [32] have established a corresponding relationship for a 3D face model by ICP. Medioni et al. [33] have performed 3D face recognition using ICP matching.
Lu et al. [34] have proposed an improved ICP for matching rigid changing regions of a 3D
human face and used the results as a first-class similarity measure. ICP is suited to rigid surface transformations, but a face is essentially a non-rigid surface, which affects the accuracy.
Statistical methods
Statistical methods can obtain a general rule through applying statistical learning to many three-
dimensional face models and then using these general rules for evaluation and analysis. Principal
component analysis (PCA) has been used for face recognition. Vetter and Blanz[35] have uti-
lized a 3D model based on PCA to address the problem of pose variation for 2D face recognition.
Hesher et al.[36] have extended the PCA approach from an image into a range of images by
using different numbers of eigenvectors and image sizes. This method provides the probe image
with more chances to make a correct match. Chang et al. [37] have applied a PCA-based method
using 3D and 2D images and combining the results by using a weighted sum of the distances
from the individual 3D and 2D face spaces. Yuan [38] has used PCA to normalize both the 2D texture images and the 3D shape images extracted from 3D facial images and then recognized a face through fuzzy clustering and parallel neural networks. Theodoros et al. [39] have evaluated 3D face recognition by using registration and PCA. First, the facial surface was cleaned, registered and then normalized to a standard template face. Then, a PCA model was created, and the dimensionality of the face space was reduced to calculate the facial similarity. Theodoros et al. have applied this technique to 3D surface and texture data comprising 83 subjects, and the
results demonstrate a wealth of 3D information on the face as well as the importance of stan-
dardization and noise elimination in the datasets. Russ et al. [40] have presented a 3D approach
for recognizing faces through PCA, which addresses the issue of the proper 3D face alignment.
Passalis et al. [41] have used an annotated deformable model approach to evaluate 3D face recognition in the presence of facial expressions. First, they applied elastically adaptive deformable
models to obtain parametric representations of the geometry of selected localized face areas.
Then, they used wavelet analysis to extract a compact biometric signature to perform rapid com-
parisons on either a global or a per area basis.
Statistical methods are commonly used at present, but the classical method—PCA—cannot
easily provide actual explanations. In this paper, we use the sparse principal component an-
alysis (SPCA) method to evaluate craniofacial similarity, thereby effectively reducing the
dimensionality and simultaneously producing sparse principal components with sparse load-
ings, and making it easy to explain the results. This methodology makes it possible to explain
the similar and dissimilar parts between two craniofacial data and to carry out in-depth analy-
sis. The SPCA method should therefore be more conducive to the evaluation of craniofacial
reconstruction results.
Materials

A total of 208 whole-head CT scans were used; the subjects' ages ranged up to 75 years, and 81 females and 127 males were included. The CT scans were obtained with a clinical multislice CT scanner system (Siemens Sensation16) at Xianyang Hospital in western China. Our research was approved by the Institutional Review Board (IRB) of the Image
Center for Brain Research, National Key Laboratory of Cognitive Neuroscience and Learning,
Beijing Normal University. All participants gave written informed consent. The individuals in
this manuscript have given written informed consent (as outlined in PLOS consent form) to
publish these images.
First, we extracted the craniofacial borders from the original CT slice images (as shown in
Fig 1A) and reconstructed the 3D craniofacial surfaces (as shown in Fig 1B) with a marching
cubes algorithm [42]. After data processing[43], all 3D craniofacial data were transformed into
a unified Frankfurt coordinate system[44][45] to eliminate the effects of data acquisition, pos-
ture, and scale. We selected a set of craniofacial data as a reference template and cut away the
back part of the reference craniofacial model because there were too many vertices in the
whole head, and the face features are mainly concentrated on the front part of the head. All of
the craniofacial models were automatically registered with the reference model through the
non-rigid data registration method [44], and each craniofacial data set (as shown in Fig 1C)
had 40969 vertices.
In PCA, each principal component is a linear combination of the original variables, $y_k = \sum_i t_{ik} x_i$; the coefficient $t_{ik}$ is called the loading of the $k$-th principal component $y_k$ in the $i$-th original variable $x_i$.
The derived coordinate axes are the columns of T, called loading vectors, with individual
elements known as loadings. Clearly, each principal component extracted by PCA is a linear
combination of all of the original variables, and the loadings are typically non-zero. That is, the
principal components are dependent on all of the original variables. This dependence makes
interpretation difficult and is a major shortcoming of PCA. Therefore, we sought to improve
PCA to make it easy to explain the results.
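This shortcoming is easy to see numerically: the loading vectors returned by ordinary PCA are almost always dense, so every original variable contributes to every component. A minimal numpy sketch on synthetic data (illustrative only, not the paper's craniofacial data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))   # 50 samples, 6 original variables
X -= X.mean(axis=0)            # centre the data

# PCA via SVD: the columns of T are the loading vectors
_, _, Vt = np.linalg.svd(X, full_matrices=False)
T = Vt.T

# Every loading of the first principal component is non-zero (dense),
# so the component depends on all six original variables.
dense_count = np.count_nonzero(np.abs(T[:, 0]) > 1e-12)
print(dense_count)  # 6
```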
Hui Zou et al. [46] have proposed sparse principal component analysis (SPCA), which aims to approximate the properties of regular PCA while keeping the number of non-zero loadings small. SPCA obtains modified PCs with sparse loadings; it is based on the fact that PCA can be written as a regression-type optimization problem, with the lasso [47] (elastic net [48]) directly integrated into the regression criterion so that the resulting modified PCA produces sparse loadings. Next, we explain the lasso, the elastic net and the solution of SPCA.
Regression techniques. PCA can be written as a regression-type optimization problem, and the classic regression method is ordinary least squares (OLS). The response variable y is approximated by the predictors in X, and the coefficients for each variable (column) of X are contained in β [49]:
$$\hat{\beta}_{OLS} = \arg\min_{\beta} \|y - X\beta\|^2 \qquad (2)$$
The naive elastic net adds both ridge and lasso penalties to the OLS criterion:

$$\hat{\beta}_{nEN} = \arg\min_{\beta} \|y - X\beta\|^2 + \lambda \|\beta\|^2 + \delta \|\beta\|_1 \qquad (3)$$

where nEN is short for naive elastic net. The elastic net penalty is a convex combination of the ridge penalty and the lasso penalty, where λ > 0.
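For concreteness, the naive elastic net can be minimized by cyclic coordinate descent with soft-thresholding. The sketch below is a minimal numpy illustration on synthetic data; the function names, penalty values and iteration count are our own choices, not the paper's:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in lasso-type coordinate updates."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def naive_elastic_net(X, y, lam=0.1, delta=1.0, n_iter=200):
    """Minimize ||y - Xb||^2 + lam*||b||^2 + delta*||b||_1 by coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    G = X.T @ X + lam * np.eye(p)   # quadratic part of the objective
    c = X.T @ y
    for _ in range(n_iter):
        for m in range(p):
            # partial residual for coordinate m, holding the others fixed
            r = c[m] - G[m] @ b + G[m, m] * b[m]
            b[m] = soft_threshold(r, delta / 2.0) / G[m, m]
    return b

# Toy usage: recover a sparse coefficient vector from noise-free data
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = X @ np.array([2.0, 0.0, 0.0, -1.5, 0.0])
b = naive_elastic_net(X, y, lam=0.1, delta=5.0)
```

With a large enough δ, the estimated coefficients for the irrelevant variables are driven to (or very near) zero, which is exactly the sparsity property SPCA exploits.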
Next, we discuss how to calculate sparse principal components (PCs) on the basis of the
above regression approach.
Sparse principal component analysis (SPCA). Zou and Hastie have proposed a problem
formulation called the SPCA criterion [46] to approximate the properties of PCA while keep-
ing the loadings sparse.
$$(\hat{A}, \hat{B}) = \arg\min_{A,B} \sum_{i=1}^{n} \|x_i - AB^T x_i\|^2 + \lambda \sum_{j=1}^{k} \|\beta_j\|^2 + \sum_{j=1}^{k} \delta_j \|\beta_j\|_1 \quad \text{subject to } A^T A = I_k \qquad (4)$$
The first part, $\sum_{i=1}^{n} \|x_i - AB^T x_i\|^2$, measures the reconstruction error, and the other parts
drive the columns of B towards sparsity, similarly to the elastic net regression constraints. The
constraint weight λ has the same value for all PCs, and it must be chosen beforehand, whereas
δ may be set to different values for each PC to offer good flexibility.
Zou and Hastie have also provided a reasonably efficient optimization method for minimizing the SPCA criterion in Ref [46]. First, given A, they have proven that

$$\sum_{i=1}^{n} \|x_i - AB^T x_i\|^2 + \lambda \sum_{j=1}^{k} \|\beta_j\|^2 + \sum_{j=1}^{k} \delta_j \|\beta_j\|_1 = \mathrm{Tr}(X^T X) + \sum_{j=1}^{k} \left( \beta_j^T (X^T X + \lambda I)\beta_j - 2\alpha_j^T X^T X \beta_j + \delta_j \|\beta_j\|_1 \right)$$
This equation amounts to solving k independent naïve elastic net problems, one for each
column of B.
Second, if B is fixed, A can be solved by singular value decomposition (SVD): if the SVD of $X^T X B$ is $UDV^T$, then $A = UV^T$. Because the matrices A and B are unknown, Zou and Hastie [46] have suggested first initializing A to the loadings of the first k ordinary principal components and then alternating these two updates until convergence.
Thus, we can find the first k sparse components selected by the above SPCA criterion, and
the detailed algorithm is provided in the next section. Then, the original data can be projected
into the main direction to which the sparse principal components correspond. Thus, the
dimensionality of the data is reduced.
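The alternating scheme described above can be sketched directly in numpy. This is a simplified illustration under our own assumptions (a crude coordinate-descent elastic net solver, fixed iteration counts, a single δ for all components, synthetic data), not the authors' implementation:

```python
import numpy as np

def _soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def spca(X, k, lam=1e-4, delta=1.0, n_outer=20, n_inner=50):
    """Sketch of SPCA: alternate elastic-net updates of B with SVD updates of A.
    X is n x p (centred); returns a p x k matrix of normalized sparse loadings."""
    n, p = X.shape
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:k].T                  # initialize A with the first k PCA loadings
    B = A.copy()
    G = X.T @ X                   # Gram matrix
    Gl = G + lam * np.eye(p)      # X^T X + lambda*I
    for _ in range(n_outer):
        # Given A: k independent naive elastic net problems, one per column of B
        for j in range(k):
            c = G @ A[:, j]
            b = B[:, j]           # view: updates write into B in place
            for _ in range(n_inner):
                for m in range(p):
                    r = c[m] - Gl[m] @ b + Gl[m, m] * b[m]
                    b[m] = _soft(r, delta / 2.0) / Gl[m, m]
        # Given B: update A from the SVD of X^T X B
        U, _, Vt2 = np.linalg.svd(G @ B, full_matrices=False)
        A = U @ Vt2
    norms = np.linalg.norm(B, axis=0)
    norms[norms == 0] = 1.0       # avoid dividing a zero column by zero
    return B / norms

# Toy usage: two sparse components from 40 samples of 8 variables
rng = np.random.default_rng(0)
Xd = rng.normal(size=(40, 8))
Xd -= Xd.mean(axis=0)
V = spca(Xd, k=2, delta=5.0)
```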
We used 108 sets of point cloud format craniofacial data as training samples and then used
the sparse principal component analysis (SPCA) method to find k sparse principal compo-
nents. Then, we used 100 sets of point cloud format craniofacial data as test samples and pro-
jected them into a space of principal components. After k sparse principal components were
selected by SPCA, the original data were projected into the main direction to which the sparse
principal components corresponded for dimensional reduction. Then, we computed the mean
square error of each pair of craniofacial data after the dimensionality reduction and deter-
mined the craniofacial similarity.
In the mean square error evaluation, we first compute the dimension-reduced vectors $y_i$ and $y_j$ through SPCA, which are, respectively, the projections of the craniofacial data vectors $x_i$ and $x_j$ in the main directions. We then determine the difference between the two feature vectors $y_i$ and $y_j$ in each dimension, square the differences and average the results. The mean square error of the two craniofacial data is calculated with the formula
$$s(i, j) = \frac{1}{L} \sum_{k=1}^{L} (y_{ik} - y_{jk})^2 \qquad (7)$$
where $y_i$ and $y_j$ denote two craniofacial vectors subjected to dimensional reduction and L denotes the number of principal components. A smaller value of s(i, j) represents a smaller difference between the i-th craniofacial data and the j-th craniofacial data and a greater degree of similarity.
On the basis of the above analysis, we constructed the algorithm for measuring craniofacial
similarity by using SPCA as follows:
Input: Point cloud format craniofacial data
Output: Similarity matrix of every two craniofacial data
Step1: Read M sets of the point cloud format craniofacial data as training samples. The matrix
X(N × M) is composed of M training samples, and each column datum of X is a craniofacial
datum.
Step2: Find L sparse principal components and the primary directions by M training samples
(craniofacial) using the SPCA method as follows.
① Initialize A with the loadings of the first k ordinary principal components.
② Given a fixed A, solve the following naive elastic net problem for j = 1, 2, …, k:

$$\beta_j = \arg\min_{\beta} \; \beta^T (X^T X + \lambda I)\beta - 2\alpha_j^T X^T X \beta + \delta_j \|\beta\|_1$$
③ For each fixed B, compute the SVD $X^T X B = UDV^T$ and then update $A = UV^T$.
④ Repeat steps ② and ③ until B converges.
⑤ Normalization: $\hat{V}_j = \beta_j / \|\beta_j\|$, j = 1, 2, …, k.
Step3: Read T sets of point cloud format craniofacial data as test samples and project them onto the sparse primary directions to which the sparse principal components correspond. Calculate the new sample matrix after the dimensional reduction, $Y = V^T X$. In this way, the original N-dimensional data are reduced to L dimensions.
Step4: Compute the mean square error using formula (7) between two craniofacial data of T
test samples after dimensionality reduction, and perform the similarity comparison and
obtain a similarity matrix s.
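Steps 3 and 4 of the algorithm reduce to two matrix operations; a compact numpy sketch (matrix names follow the algorithm above, the data are synthetic):

```python
import numpy as np

def similarity_matrix(X_test, V):
    """X_test: N x T matrix, one craniofacial datum per column.
    V: N x L matrix of sparse primary directions.
    Returns the T x T matrix s of mean square errors; smaller = more similar."""
    Y = V.T @ X_test                       # L x T dimensional reduction, Y = V^T X
    L = Y.shape[0]
    diff = Y[:, :, None] - Y[:, None, :]   # pairwise differences per component
    return (diff ** 2).sum(axis=0) / L     # s(i, j) = (1/L) * sum_k (y_ik - y_jk)^2

# Toy usage: three "craniofacial" samples in a 5-dimensional space, reduced to L = 2
rng = np.random.default_rng(0)
X_test = rng.normal(size=(5, 3))
V = np.linalg.qr(rng.normal(size=(5, 2)))[0]   # orthonormal directions
s = similarity_matrix(X_test, V)
print(np.allclose(np.diag(s), 0.0))  # True: each datum is completely similar to itself
```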
The proportion of each sparse principal component when all craniofacial data are compared is calculated as

$$B_k = \frac{\sum_{i=1}^{T} \sum_{j=1}^{T} (y_{ik} - y_{jk})^2}{\sum_{k=1}^{L} \sum_{i=1}^{T} \sum_{j=1}^{T} (y_{ik} - y_{jk})^2} \qquad (8)$$

where $B_k$ is the proportion of the k-th sparse principal component. In the above formula, the numerator denotes the sum of the similarity comparison values of the k-th sparse principal component when all T (T = 100) craniofacial data are compared. The denominator is the sum of the similarity comparison values of all sparse principal components (L is the total number of sparse principal components; in the experiments L = 60) when all T (T = 100) craniofacial data are compared. The ratio is the proportion of the k-th sparse principal component in the comparison.
Calculate the proportion of each sparse principal component in the ten most similar craniofacial comparisons. The difference between this calculation and the one above is that the numerator here sums only the similarity comparison values of the k-th sparse principal component when each craniofacial datum is compared with its ten most similar craniofacial data. Thus, L is the total number of sparse principal components (in the experiments L = 60), and $B_k$ is the proportion of the k-th sparse principal component in the ten most similar craniofacial comparisons.

$$B_k = \frac{\sum_{i=1}^{T} \sum_{j=1}^{10} (y_{ik} - y_{jk})^2}{\sum_{k=1}^{L} \sum_{i=1}^{T} \sum_{j=1}^{T} (y_{ik} - y_{jk})^2} \qquad (9)$$
Calculate the proportion of each sparse principal component in the most similar cra-
niofacial comparison.
$$B_k = \frac{\sum_{i=1}^{T} (y_{ik} - y_{1k})^2}{\sum_{k=1}^{L} \sum_{i=1}^{T} \sum_{j=1}^{T} (y_{ik} - y_{jk})^2} \qquad (10)$$
where $y_1$ represents the craniofacial data with the highest similarity to the i-th craniofacial data $y_i$; i.e., the numerator sums only the similarity comparison values of the k-th sparse principal component when each craniofacial datum is compared with its most similar craniofacial data. Thus, L is the total number of sparse principal components (in the experiments L = 60), and $B_k$ is the proportion of the k-th sparse principal component in the most similar craniofacial comparison.
After the proportions of each sparse principal component in the similarity comparison are
calculated, the importance of each sparse principal component in the comparison results can
be seen by sorting the proportions in descending order.
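The proportion computation, e.g. the most-similar variant of formula (10), is a ratio of two sums over the reduced data Y. A hedged numpy sketch (variable names are ours; `nearest[i]` is assumed to hold the index of the datum most similar to datum i):

```python
import numpy as np

def component_proportions(Y, nearest):
    """Y: L x T reduced data; nearest[i] = index of the datum most similar to datum i.
    Returns B, where B[k] is the proportion of the k-th sparse principal component
    in the most similar craniofacial comparison (formula (10))."""
    # Numerator per component k: sum_i (y_ik - y_{nearest(i),k})^2
    num = ((Y - Y[:, nearest]) ** 2).sum(axis=1)
    # Denominator: sum over all components and all ordered pairs (i, j)
    diff = Y[:, :, None] - Y[:, None, :]
    den = (diff ** 2).sum()
    return num / den

# Toy usage: find each datum's nearest neighbour from the pairwise errors, then sort
rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 6))            # L = 4 components, T = 6 data
d = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
np.fill_diagonal(d, np.inf)            # exclude self-comparison
nearest = d.argmin(axis=1)
B = component_proportions(Y, nearest)
order = np.argsort(B)[::-1]            # components sorted by importance
```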
The detailed algorithm for calculating the sparse principal component in the comparison
result according to importance is as follows:
① Read the craniofacial data of the point cloud format.
② Take M craniofacial data as training samples and obtain the sparse principal components.
③ Derive the mean face from the M craniofacial data.
④ Add each sparse principal component to the mean face and find the area reflected by each sparse principal component.
⑤ Calculate the proportions of each sparse principal component in the craniofacial similarity measure over the T test samples and sort them.
Results
In our experiments, the preprocessed and registered craniofacial data (introduced in the materials section) are used to compare craniofacial similarity with the PCA and SPCA methods, respectively. Of the 208 CT scans, 108 craniofacial data were used as training data, and the other 100 were used as test data for the craniofacial similarity comparison; i.e., M = 108 and T = 100 in our experiments. We used the 108 craniofacial data to train the principal components by PCA and SPCA, respectively, and used the 100 craniofacial data to test their similarity. In the SPCA method, the total number of sparse principal components was L = 60 in our experiments. The experimental results are described as follows.
Fig 2. Comparison of the closest craniofacial data found by SPCA and PCA.
Each element s(i, j) of the similarity matrix is the mean square error between the i-th craniofacial data and the j-th craniofacial data. The smaller the mean square error is, the higher the similarity is. The diagonal elements are the mean square errors of each craniofacial datum against itself, which are 0, indicating complete similarity.
Comparison of SPCA and PCA results. Among the 100 test craniofacial data, we used the PCA and SPCA methods to find the most similar data. For 35 sets of data (35%), the two methods identified different most similar craniofacial data, whereas for the other 65 sets (65%) they identified the same one.
In our comparison of the 35 differing sets of results obtained by the SPCA and PCA methods (Fig 2), it can be seen from the table that the SPCA results were significantly more similar to the target craniofacial model than were the PCA results.
We performed a test on these 35 sets of data in which we randomly selected 50 testers to evaluate which result, from PCA or SPCA, was most like the original craniofacial data (in the test, the subjects did not know whether a craniofacial datum had been selected by PCA or SPCA). The test results showed that 92% of the testers (46 persons) thought that the craniofacial data selected by SPCA were more like the original data.
Discussion
PCA and SPCA similarity comparison results
The experimental results of the PCA and SPCA similarity comparison indicated that for the 100 craniofacial data, 65% of the results identifying the most similar craniofacial data by the SPCA and PCA methods were the same. When the comparison results were not the same, we performed a subjective test with 50 human subjects and found that 92% of the testers (46 persons) thought that the craniofacial data selected by SPCA were more similar than those found by PCA. That is, on the whole, using the SPCA method to reduce the dimensionality of the craniofacial data and perform similarity evaluation is better than using the PCA method.
According to the comparison of the area reflected by each principal component in PCA and each sparse principal component in SPCA, each PCA component reflects the whole or a large region of the craniofacial surface, whereas each sparse SPCA component reflects only a local part of the craniofacial surface, such as the mouth or nose. Thus, each sparse SPCA principal component reflects a detailed area. Hence, the sparse SPCA principal components, compared with the PCA principal components, can more easily explain the results.

Fig 6. The proportion of each component in the ten most similar craniofacial comparisons.

Fig 7. The proportion of each component in the most similar craniofacial comparison.
Fig 9. The regions with high similarity between F2 and F53 by SPCA sparse principal components.
Conclusion
Fig 10. The regions with dissimilarity between F2 and F53 by SPCA sparse principal components.

From the above discussion of the experimental results of craniofacial similarity analysis, it is clear that both PCA and SPCA can reduce dimensionality while maintaining the main features of the original data; thus, both processes can be used in craniofacial comparison. The results of the two methods are identical to a large extent, and for the inconsistent results, the SPCA results are superior to the PCA results. Most importantly, using SPCA in a similarity comparison allows not only comparison of the similarity degree of two craniofacial data but also identification of the areas of high similarity, which is important for improving the craniofacial reconstruction effect. The areas that are important for craniofacial similarity analysis can be determined from the large amounts of data. We conclude that the craniofacial contour was the most important
factor for craniofacial similarity evaluation in our experimental data. These conclusions are
consistent with the conclusions of psychology experiments on face recognition. Our results
may provide important guidance in three- or two-dimensional face similarity evaluation and
analysis and three- or two-dimensional face recognition.
Acknowledgments
The authors gratefully thank the anonymous reviewers for their helpful comments.
We also acknowledge the support of Xianyang Hospital for providing CT images.
Author Contributions
Conceptualization: FD.
Data curation: MZ.
Formal analysis: JL XL.
Funding acquisition: FD ZP JZ ZW MZ QD.
Investigation: QD.
Methodology: JZ.
Software: JZ.
Supervision: JZ.
Writing – original draft: JZ.
Writing – review & editing: FD ZW ZP.
References
1. Wang YH. Face Recognition: Principle, Methods and Technology. Beijing: Science Press; February 2011: 16–17.
2. Snow CC, Gatliff BP, McWilliams KR. Reconstruction of facial features from the skull: an evaluation of its usefulness in forensic anthropology. American Journal of Physical Anthropology. 1970; 33(2): 221–227. https://doi.org/10.1002/ajpa.1330330207 PMID: 5473087
3. Helmer RP, Rohricht S, Petersen D, Mohr F. Assessment of the reliability of facial reconstruction.
Forensic analysis of the skull.1993: 229–246.
4. Stephan CN, Henneberg M. Building faces from dry skulls: are they recognized above chance rates? Journal of Forensic Science. 2001; 46(3): 432–440.
5. Claes P, Vandermeulen D, De Greef S, Willems G, Suetens P. Craniofacial reconstruction using a com-
bined statistical model of face shape and soft tissue depths: methodology and validation. Forensic sci-
ence international. 2006; 159: S147–S158. https://doi.org/10.1016/j.forsciint.2006.02.035 PMID:
16540276
6. VaneZis M. Forensic facial reconstruction using 3-D computer graphics: evaluation and improvement of
its reliability in identification. University of Glasgow, 2008.
7. Lee WJ, Wilkinson CM. The unfamiliar face effect on forensic craniofacial reconstruction and recogni-
tion. Forensic Science International. 2016; 269: 21–30. https://doi.org/10.1016/j.forsciint.2016.11.003
PMID: 27863281
8. Ip HHS, Wong W. 3D head models retrieval based on hierarchical facial region similarity. Proceedings of the 15th International Conference on Vision Interface. 2002: 314–319.
9. Wong HS, Cheung KKT, Ip HHS. 3D head model classification by evolutionary optimization of the
Extended Gaussian Image representation. Pattern Recognition. 2004; 37(12): 2307–2322.
10. Feng J, Ip HHS, Lai LY, Linney A. Robust point correspondence matching and similarity measuring for
3D models by relative angle-context distributions. Image and Vision Computing. 2008; 26(6): 761–775.
11. Zhu XY, Geng GH, Wen C. Craniofacial similarity measuring based on BRACD. Biomedical Engineering and Informatics (BMEI), 2011 4th International Conference on. IEEE. 2011; 2: 942–945.
12. Zhu XY, Geng GH. Craniofacial similarity comparison in craniofacial reconstruction. Jisuanji Yingyong
Yanjiu. 2010; 27(8): 3153–3155.
13. Zhu XY, Geng GH, Wen C. Estimate of craniofacial geometry shape similarity based on principal warps.
Journal of Image and Graphics. 2012; 17(004): 568–574.
14. Li H, Wu Z, Zhou M. A Iso-Geodesic Stripes based similarity measure method for 3D face. Biomedical Engineering and Informatics (BMEI), 2011 4th International Conference on. IEEE. 2011; 4: 2114–2118.
15. Zhao J, Liu C, Wu Z, Duan F, Wang K, Jia T, Liu Q. Craniofacial reconstruction evaluation by geodesic
network. Computational and mathematical methods in medicine. 2014; 2014.
16. Lee JC, Milios E. Matching range images of human face. Computer Vision, 1990. Proceedings, Third International Conference on. IEEE. 1990: 722–726.
17. Gordon GG. Face recognition based on depth maps and surface curvature. San Diego,’91, San Diego,
CA. International Society for Optics and Photonics. 1991: 234–247.
18. Tanaka HT, Ikeda M. Curvature-based face surface recognition using spherical correlation-principal
directions for curved object recognition.Pattern Recognition, 1996., Proceedings of the 13th Interna-
tional Conference on. IEEE. 1996; 3: 638–642.
19. Nagamine T, Uemura T, Masuda I. 3D facial image analysis for human identification. Pattern Recogni-
tion, 1992. Vol. I. Conference A: Computer Vision and Applications, Proceedings., 11th IAPR Interna-
tional Conference on. IEEE. 1992: 324–327.
20. ter Haar FB, Veltkamp RC. SHREC'08 entry: 3D face recognition using facial contour curves. Shape
Modeling and Applications, 2008. SMI 2008. IEEE International Conference on. IEEE. 2008: 259–260.
21. Berretti S, Del Bimbo A, Pala P. 3D face recognition using isogeodesic stripes. IEEE Transactions on
Pattern Analysis and Machine Intelligence. 2010; 32(12): 2162–2177. https://doi.org/10.1109/TPAMI.
2010.43 PMID: 20975115
22. Lee Y, Yi T. 3D face recognition using multiple features for local depth information. Video/Image Pro-
cessing and Multimedia Communications, 2003. 4th EURASIP Conference focused on. IEEE. 2003; 1:
429–434.
23. Jahanbin S, Choi H, Liu Y, Bovik AC. Three dimensional face recognition using iso-geodesic and iso-
depth curves. Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE International
Conference on. IEEE. 2008: 1–6.
24. Smeets D, Keustermans J, Vandermeulen D, Suetens P. meshSIFT: Local surface features for 3D face
recognition under expression variations and partial data. Computer Vision and Image Understanding.
2013; 117(2): 158–169.
25. Berretti S, Del Bimbo A, Pala P. Recognition of 3D faces with missing parts based on profile network.
Proceedings of the ACM workshop on 3D object retrieval. ACM. 2010: 81–86.
26. Drira H, Amor BB, Srivastava A, Daoudi M, Slama R. 3D face recognition under expressions, occlu-
sions, and pose variations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2013; 35
(9): 2270–2283. https://doi.org/10.1109/TPAMI.2013.48 PMID: 23868784
27. Kurtek S, Drira H. A comprehensive statistical framework for elastic shape analysis of 3D faces. Computers & Graphics. 2015; 51: 52–59.
28. Achermann B, Bunke H. Classifying range images of human faces with Hausdorff distance. Pattern
Recognition, 2000. Proceedings. 15th International Conference on. IEEE. 2000; 2: 809–813.
29. Pan G, Wu Z, Pan Y. Automatic 3D face verification from range data. Acoustics, Speech, and Signal
Processing, 2003. Proceedings.(ICASSP’03). 2003 IEEE International Conference on. IEEE. 2003; 3:
III-193.
30. Lee Y, Shim J. Curvature based human face recognition using depth weighted Hausdorff distance.
Image Processing, 2004. ICIP'04. 2004 International Conference on. IEEE. 2004; 3: 1429–1432.
31. Chua CS, Han F, Ho YK. 3D human face recognition using point signature. Automatic Face and Gesture
Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE. 2000: 233–238.
32. Cook J, Chandran V, Sridharan S, Fookes C. Face recognition from 3D data using iterative closest point
algorithm and Gaussian mixture models. 3D Data Processing, Visualization and Transmission, 2004.
3DPVT 2004. Proceedings. 2nd International Symposium on. IEEE. 2004: 502–509.
33. Medioni G, Waupotitsch R. Face recognition and modeling in 3D. IEEE Intl Workshop on Analysis and
Modeling of Faces and Gestures (AMFG 2003). 2003: 232–233.
34. Lu X, Jain AK, Colbry D. Matching 2.5D face scans to 3D models. IEEE Transactions on Pattern Analysis
and Machine Intelligence. 2006; 28(1): 31–43. https://doi.org/10.1109/TPAMI.2006.15 PMID:
16402617
35. Blanz V, Vetter T. Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern
Analysis and Machine Intelligence. 2003; 25(9): 1063–1074.
36. Hesher C, Srivastava A, Erlebacher G. A novel technique for face recognition using range imaging. Signal
Processing and Its Applications, 2003. Proceedings. Seventh International Symposium on. IEEE.
2003; 2: 201–204.
37. Chang K, Bowyer K, Flynn P. Face recognition using 2D and 3D facial data. ACM Workshop on Multi-
modal User Authentication. 2003: 25–32.
38. Yuan X, Lu J, Yahagi T. A method of 3D face recognition based on principal component analysis algorithm. Circuits and Systems, 2005. ISCAS 2005. IEEE International Symposium on. IEEE. 2005: 3211–
3214.
39. Papatheodorou T, Rueckert D. Evaluation of 3D face recognition using registration and PCA. Interna-
tional Conference on Audio-and Video-Based Biometric Person Authentication. Springer Berlin Heidel-
berg. 2005: 997–1009.
40. Russ T, Boehnen C, Peters T. 3D face recognition using 3D alignment for PCA. Computer Vision and
Pattern Recognition, 2006 IEEE Computer Society Conference on. IEEE. 2006; 2: 1391–1398.
41. Passalis G, Kakadiaris IA, Theoharis T, Toderici G, Murtuza N. Evaluation of 3D face recognition in the
presence of facial expressions: an annotated deformable model approach. Computer Vision and Pat-
tern Recognition-Workshops, 2005. CVPR Workshops. IEEE Computer Society Conference on. IEEE.
2005: 171–171.
42. Lorensen WE, Cline HE. Marching cubes: A high resolution 3D surface construction algorithm. ACM
SIGGRAPH Computer Graphics. ACM. 1987; 21(4): 163–169.
43. Deng Q, Zhou M, Shui W, Wu Z, Ji Y, Bai R. A novel skull registration based on global and local defor-
mations for craniofacial reconstruction. Forensic science international. 2011; 208(1): 95–102.
44. Hu Y, Duan F, Yin B, Zhou M, Sun Y, Wu Z, Geng G. A hierarchical dense deformable model for 3D
face reconstruction from skull. Multimedia Tools and Applications. 2013; 64(2): 345–364.
45. Duan F, Yang Y, Li Y, Tian Y, Lu K, Wu Z, Zhou M. Skull Identification via Correlation Measure Between
Skull and Face Shape. IEEE Transactions on Information Forensics and Security. 2014; 9(8): 1322–1332.
46. Zou H, Hastie T, Tibshirani R. Sparse principal component analysis. Journal of computational and
graphical statistics. 2006; 15(2): 265–286.
47. Tibshirani R. Regression shrinkage and selection via the lasso: a retrospective. Journal of the Royal
Statistical Society: Series B (Statistical Methodology). 2011; 73(3): 273–282.
48. Zou H, Hastie T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical
Society: Series B (Statistical Methodology). 2005; 67(2): 301–320.
49. Sjöstrand K, Stegmann MB, Larsen R. Sparse principal component analysis in medical shape modeling.
Medical Imaging 2006: Image Processing. 2006; 6144: 1579–1590.
50. Wang G, Gong X. Human Face Perception: from 2D to 3D. Beijing: Science Press. 2011: 5–7.