Research Article
Face Antispoofing Method Using Color Texture
Segmentation on FPGA
1 Department of Computer Engineering, Kyung Hee University, Yongin-si, Gyeonggi-do 17104, Republic of Korea
2 Department of Software Convergence, Soonchunhyang University, Asan-si, Chungcheongnam-do 31538, Republic of Korea
3 Department of Computer Software Engineering, Soonchunhyang University, Asan-si, Chungcheongnam-do 31538, Republic of Korea
Correspondence should be addressed to Intae Ryoo; [email protected] and Seokhoon Kim; [email protected]
Received 4 March 2021; Revised 5 April 2021; Accepted 29 April 2021; Published 10 May 2021
Copyright © 2021 Youngjun Moon et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Accurate user authentication based on biometrics is becoming necessary in modern real-world applications. Authentication systems based on biometric identifiers such as faces and fingerprints are being applied in a variety of fields in preference to existing password input methods. Face imaging is the most widely used biometric identifier because the registration and authentication processes are noncontact and concise. However, face images are comparatively easy to acquire through social networking services and similar channels, which exposes such systems to forgery via photos and videos. To solve this problem, much research on face spoofing detection has been conducted. In this paper, we propose a method for face spoofing detection based on convolutional neural networks that uses the color and texture information of face images. The color-texture information, combining luminance and color-difference channels, is analyzed using a local binary pattern (LBP) descriptor; specifically, the Cb, S, and V bands of the YCbCr and HSV color spaces are used. The CASIA-FASD dataset was used to verify the proposed scheme, which showed better performance than state-of-the-art methods developed in previous studies. Targeting an AI FPGA board, the performance of the existing methods was evaluated and compared with that of the method proposed herein. Based on these results, it was confirmed that the proposed method can be effectively implemented in edge environments.
1. Introduction

Recently, authentication systems based on biometric information have been applied to various mobile devices such as smartphones, and many users perform identity authentication using facial or fingerprint information instead of the existing password input methods. In addition, biometric authentication is being applied to bank transactions and mobile payment applications. As a result, researchers are greatly interested in developing high-performance authentication systems.
Among user biometric identifiers, face images are the most widely used because the associated registration and authentication processes are noncontact and concise. However, face images are very easy to acquire through social networks and other channels, and they are vulnerable to various spoofing techniques, including printed photos and video replay. To address this problem, software solutions have become popular, rather than antispoofing hardware solutions that require additional sensors. These software approaches can be classified into motion-based methods and texture-based methods [1].
The motion-based counterfeit face detection method measures eye/head movement, eye blinking, and changes in facial expression [2, 3]. In the case of counterfeit face detection methods utilizing the eyes, a still face such as a photograph does not exhibit eye blinking or pupil movement, as opposed to a real human face, which exhibits
relatively large amounts of movement over time. This method is very simple and fast. However, it classifies a spoofing face using only eye movement and thus cannot defend against simple attack variations that focus on and accurately emulate the eye area based on a photo.
The texture-based spoofing face detection method mainly uses lighting characteristics that differ between 2D planar and 3D stereoscopic objects, or the fine texture differences between live face data and spoofing face data reproduced through an external medium such as printing [4–8]. This method mainly uses a local image descriptor such as the LBP (local binary pattern) [9] to express differences in the texture characteristics of live and spoofing face images. Such texture-based methods have been actively researched due to their easy implementation and short detection times; however, they have difficulty classifying live faces in nonuniform images or images with large amounts of noise. Recently, researchers have been working on the detection of spoofing faces using convolutional neural networks (CNNs) [10, 11]. Since this approach can effectively derive features through learning, its performance improves on existing texture-based detection methods.
Although the field of spoofing face detection has developed tremendously, the existing methods mainly focus on the brightness information of face images. More specifically, other color information, which complements the brightness information, is often overlooked in spoofing face detection. Therefore, by considering both the color and brightness information of face images, a method was proposed that independently extracts texture features from the brightness space and color space of the face image using an LBP [12]. The difference between a real face and a spoofing face is discriminated using a descriptor (such as an LBP) that encodes comparison results with respect to surrounding pixels.

Descriptors such as the LBP [13], SIFT [14], SURF [15], HOG [16], and DoG [17] are used to extract frame-level features, while methods such as dynamic texture [18], micromotion [19], and eye blinking [20] extract video features.
Recently, several deep learning-based methods have been studied to prevent face spoofing at the frame and video levels. In frame-level methods [21–24], a pretrained CNN model is fine-tuned to extract features in a binary classification setup [25–27].

2.2. Color Spaces. RGB is the color space commonly used for sensing and displaying color images. However, its use in image analysis is typically limited because the three colors (red, green, and blue) are not separated into luminance and color-difference information. Thus, it is common to additionally convert the RGB information into YCbCr and HSV information before use; these two color spaces are based on luminance and chrominance information [28–31]. In particular, the YCbCr color space separates RGB into luminance (Y), chrominance blue (Cb), and chrominance red (Cr). Similarly, the HSV color space uses the hue and saturation dimensions to define the color differences of the image, while the value dimension corresponds to the luminance.

2.3. LBP (Local Binary Pattern). The LBP [32, 33] is a feature originally developed for classifying image textures and has since been used for face recognition. It is a simple operation for image analysis and recognition that is highly discriminative and robust to monotonic lighting changes. Equation (1) defines the LBP:

LBP(P, R) = \sum_{p=0}^{P-1} s(g_p - g_c) 2^p,   (1)

where g_c is the gray value of the center pixel, g_p (p = 0, ..., P - 1) are the gray values of its P neighbors on a circle of radius R, and s(x) is 1 if x >= 0 and 0 otherwise.
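To make equation (1) concrete, the basic P = 8, R = 1 case can be sketched as follows (an illustrative NumPy implementation, not code from the paper):

```python
import numpy as np

def lbp_8_1(gray):
    """Basic LBP with P = 8 neighbors at radius R = 1 (the 3x3 case of
    equation (1)): each interior pixel is encoded by thresholding its
    eight neighbors against the center value g_c."""
    img = np.asarray(gray, dtype=np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Eight neighbor offsets (dy, dx); neighbor p contributes bit 2^p.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbor - center) >= 0).astype(np.uint8) << p
    return codes
```

A bright center surrounded by darker pixels yields code 0, while a dark center surrounded by brighter pixels yields 255, which is exactly the sign-comparison behavior that makes the code invariant to monotonic lighting changes.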
[Figure: an RGB face image is separated into its (R), (G), and (B) color component images, and an LBP image is computed from each component.]
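The RGB-to-HSV and RGB-to-YCbCr conversions described in Section 2.2, and written out in equations (2)–(4), can be sketched per pixel as follows (illustrative code, not the paper's implementation; the v == mn guard for gray pixels is an added convention not spelled out in the equations):

```python
def rgb_to_hsv(r, g, b):
    """Per-pixel RGB -> (H, S, V) following equations (2) and (3).
    r, g, b are values in [0, 255]; H is in degrees, S in [0, 1]."""
    v = max(r, g, b)
    mn = min(r, g, b)
    s = 0.0 if v == 0 else (v - mn) / v
    if v == mn:
        h = 0.0  # hue is undefined for gray pixels; 0 by convention
    elif v == r:
        h = 60.0 * (g - b) / (v - mn)
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / (v - mn)
    else:
        h = 240.0 + 60.0 * (r - g) / (v - mn)
    if h < 0:
        h += 360.0
    return h, s, v

def rgb_to_ycbcr(r, g, b):
    """Per-pixel RGB -> (Y, Cb, Cr) following equation (4)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr
```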
V = max(R, G, B),

S = (V - min(R, G, B)) / V   if V != 0,
S = 0                        if V = 0,   (2)

H = 60(G - B) / (V - min(R, G, B))         if V = R,
H = 120 + 60(B - R) / (V - min(R, G, B))   if V = G,   (3)
H = 240 + 60(R - G) / (V - min(R, G, B))   if V = B,

and if H < 0, then H = H + 360.
The YCbCr calculation formula is

Y = 0.299R + 0.587G + 0.114B,
Cb = 0.564(B - Y),   (4)
Cr = 0.713(R - Y).

In existing methods, RGB face images are converted into the YCbCr and HSV color spaces, and the spoofing images are classified by applying an LBP to each color space. However, this approach increases the amount of computation because it uses a 6-channel color space. Figure 3 shows a conceptual diagram of the existing methods.
In this paper, we use a 3-channel color space consisting of Cb, S, and V, from which many facial features can be derived. The proposed method aims toward high-speed processing and robustness against lighting changes in face antispoofing. Figure 4 shows a conceptual diagram of the proposed scheme.
The advantages of this approach are summarized as follows:
(1) The proposed scheme reduces false detection by using a 3-channel color space in which sufficient facial feature information is expressed
(2) The proposed scheme uses less memory with fewer feature dimensions, thus enabling high-speed processing

4. Performance Evaluation

4.1. Train/Test Dataset. In this paper, we performed a spoofing face detection test using the CASIA Face Antispoofing Database (CASIA-FASD) [35] for performance evaluation. CASIA-FASD consists of real face videos and fake face videos acquired from 50 different users. The real face videos consist of three types of videos: low quality, medium quality, and high quality. Similarly, the fake face videos consist of three types of fake attack videos: printed photo attacks, cut photo attacks, and video replay attacks. Videos for 20 people are used for learning, while the remaining videos for 30 people are used for performance evaluation.
We extracted the individual frames of the CASIA-FASD videos as images. In total, 4,577 live face images, 5,054 printed photo attack images, 2,368 cut photo attack images, and 4,429 video replay attack images were used for learning. In addition, 5,912 live face images, 7,450 printed photo attack images, 4,437 cut photo attack images, and 5,652 video replay attack images were used for evaluation. Table 1 shows detailed information on the data partitioning of CASIA-FASD.

4.2. Experimental Setup. In this paper, we used an FPGA for performance evaluation; specifically, we evaluated the performance of the proposed scheme using the AI accelerator of the FPGA. The specifications of the FPGA and the implemented board are shown in Figure 5.
Zynq® UltraScale+™ MPSoC devices provide 64-bit processor scalability while combining real-time control with soft and hard engines for graphics, video, waveform, and packet processing. Built on a common real-time processor and programmable logic-equipped platform, three distinct variants (dual application processor (CG) devices, quad application processor and GPU (EG) devices, and video codec (EV) devices) are included, creating numerous possibilities for various applications such as 5G wireless, next-generation ADAS, and industrial internet-of-things technologies [36].
Vitis AI is Xilinx's development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards. It consists of optimized IP, tools, libraries, models, and example designs. Vitis AI is designed with high efficiency and ease of use in mind, leading to great potential for AI acceleration on Xilinx FPGAs and ACAPs [37].
Face antispoofing detection uses a CNN based on AlexNet. AlexNet is a basic model utilizing convolutional layers, pooling layers, and fully connected layers [38]. It consists of five convolutional layers and three fully connected (FC) layers, where the last FC layer uses softmax as the activation function for category classification. Figure 6 shows AlexNet's CNN architecture.

4.3. Experimental Analysis Method. To evaluate the proposed scheme, we measured the HTER (half total error rate) on the CASIA-FASD dataset. The HTER is calculated using the false acceptance rate (FAR) and the false rejection rate (FRR) on the attack dataset, both of which are defined below. The HTER calculation is given as follows [39]:

HTER = (FAR + FRR) / 2.   (5)

The FAR [40] is a measure of how likely the biometric security system is to incorrectly accept an access attempt by an unauthorized user. A system's FAR typically is defined as the ratio of the number of false acceptances to the number of identification attempts.
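The FAR, FRR, and HTER of equation (5) can be computed from raw classifier scores as in the following sketch (illustrative; the score convention, where a higher score means "more likely live," is an assumption):

```python
def far_frr_hter(scores_live, scores_attack, threshold):
    """FAR, FRR, and HTER (equation (5)) at one decision threshold.
    A sample is accepted as live when its score exceeds the threshold.
    scores_live: classifier scores of genuine (live) samples.
    scores_attack: classifier scores of spoofing-attack samples."""
    # FAR: fraction of attack samples wrongly accepted as live.
    far = sum(s > threshold for s in scores_attack) / len(scores_attack)
    # FRR: fraction of live samples wrongly rejected.
    frr = sum(s <= threshold for s in scores_live) / len(scores_live)
    hter = (far + frr) / 2
    return far, frr, hter
```

Sweeping the threshold and locating the point where FAR and FRR cross gives the EER discussed later in this section.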
[Figure 3: in the existing methods, the YCbCr (Cb, Cr) and HSV channels are each passed through an LBP and an SVM to produce a real/fake decision.]
[Figure 4: RGB face image -> image space conversion (Cb, S, V color space) -> individual LBP histograms -> histogram concatenation -> SVM decision (real/fake).]
Figure 4: Conceptual diagram of the proposed scheme.
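The Figure 4 pipeline can be sketched end-to-end up to the SVM input (an illustrative simplification, assuming a P = 8, R = 1 LBP and 256-bin histograms; the SVM training itself is omitted and could use any standard library):

```python
import numpy as np

def lbp_codes(channel):
    """3x3 LBP codes (P = 8, R = 1) for one image channel."""
    img = np.asarray(channel, dtype=np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for p, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((nb - center) >= 0).astype(np.uint8) << p
    return codes

def cbsv_lbp_feature(rgb):
    """Concatenated 256-bin LBP histograms of the Cb, S, and V channels
    of an RGB face crop (H x W x 3, values in [0, 255]); the resulting
    768-dimensional vector is what an SVM would consume."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)                                       # equation (4)
    s = np.where(v > 0, (v - mn) / np.maximum(v, 1e-12), 0.0)  # equation (2)
    feats = []
    for ch in (cb, s, v):
        hist, _ = np.histogram(lbp_codes(ch), bins=256, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))  # normalized histogram
    return np.concatenate(feats)
```

Using three channels instead of the six of YCbCr + HSV halves the feature dimension (768 instead of 1,536 under these assumptions), which is the memory and speed advantage claimed above.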
[Figure 6: a 224×224 input passes through five convolutional layers (11×11 kernels with stride 4, then 5×5, then three 3×3 layers; feature maps of 55×55, 27×27, and 13×13; 48–192 channels per GPU) interleaved with max pooling, followed by dense layers of 2048, 2048, and 1000 units.]
Figure 6: Illustration of the proposed CNN architecture, explicitly showing the delineation of responsibilities between the two GPUs: one
GPU runs the layer parts at the top of the figure while the other runs the layer parts at the bottom.
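The layer geometry quoted above can be verified with simple output-size arithmetic; the hyperparameters below are the commonly cited AlexNet values (kernel sizes, strides, padding, and the 227×227 effective input that makes the 55×55 first feature map work out), which are assumptions here rather than details given in the paper:

```python
def conv_out(n, k, s=1, p=0):
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# Five convolution layers and three max-pooling layers (standard
# AlexNet hyperparameters, assumed here).
n = 227                    # effective input size
n = conv_out(n, 11, s=4)   # conv1, 11x11 stride 4 -> 55
n = conv_out(n, 3, s=2)    # pool1 -> 27
n = conv_out(n, 5, p=2)    # conv2, 5x5 pad 2 -> 27
n = conv_out(n, 3, s=2)    # pool2 -> 13
n = conv_out(n, 3, p=1)    # conv3 -> 13
n = conv_out(n, 3, p=1)    # conv4 -> 13
n = conv_out(n, 3, p=1)    # conv5 -> 13
n = conv_out(n, 3, s=2)    # pool3 -> 6; 6*6*256 = 9216 inputs feed the FC stack
```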
The FRR [41] is a measure of how likely the biometric security system is to incorrectly reject an access attempt by an authorized user. A system's FRR typically is defined as the ratio of the number of false rejections to the number of identification attempts.
Smaller HTER values indicate better performance, where the HTER is defined using only misclassification ratios. Additionally, the EER (equal error rate) refers to the rate at which the FRR and FAR values converge to one another, where a small value also indicates good performance.
The EER [42] is used to predetermine the threshold values for the FAR and FRR. When the two rates are equal, the common value is referred to as the equal error rate. The lower the EER, the better the accuracy of the biometric system.
The ROC (receiver operating characteristic) curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.
The AUC (area under the curve) is the area under the ROC curve. A high AUC value means that the classification model performs well.

4.4. Experimental Results and Discussion. To verify the performance of the proposed scheme, eight scenarios were compared and tested using the CASIA-FASD attack dataset.
Table 2 shows the HTERs according to eight different scenarios in the CASIA-FASD dataset. The proposed method showed improved performance for printed photo attacks, cut photo attacks, and video replay attacks. Figure 7 shows the performance comparison for the CASIA-FASD dataset.
Table 3 shows the EER values according to eight different scenarios for the CASIA-FASD dataset. Compared with the proposed scheme, only the "YCbCr_lbp + HSV_lbp" scheme has comparably good EER performance.
The receiver operating characteristic (ROC) curves are also presented. These curves plot the true positive rate against the false positive rate and are best used for comparing the performance of various systems. Figures 8 and 9 show the ROC curves generated for each scenario in the CASIA-FASD dataset.
Table 4 shows the FAR, FRR, and area under the curve (AUC) results according to eight different scenarios in the CASIA-FASD dataset. A high AUC indicates good performance.
Table 5 shows the accuracy for different facial spoofing attacks. The accuracy for YCbCr_lbp + HSV_lbp is the
[Figure 7: HTER (%) bar chart for the YCbCr, HSV, YCbCr_lbp, HSV_lbp, YCbCr + HSV, YCbCr_lbp + HSV, YCbCr_lbp + HSV_lbp, and proposed scenarios, broken down into photo attack, cut photo attack, video replay attack, and total.]
Figure 7: Performance comparison for the CASIA-FASD dataset.
Figure 8: Receiver operating characteristic curves for the (a) YCbCr, (b) HSV, (c) YCbCr_lbp, and (d) HSV_lbp scenarios.
Figure 9: Receiver operating characteristic curves for the (a) YCbCr + HSV, (b) YCbCr_lbp + HSV, (c) YCbCr_lbp + HSV_lbp, and (d)
proposed scenarios.
highest, but the proposed method shows similar performance.
The overall test results of this paper are shown in Table 6. Compared to the existing YCbCr_lbp + HSV_lbp method, the method proposed in this paper has improved performance with respect to printed photo attacks (0.18%), cut photo attacks (0.69%), and video replay attacks (1.52%), with an overall performance improvement of 0.73%. Additionally, the EER was low, while the accuracy values were similar. Overall, the YCbCr_lbp + HSV_lbp method showed
Table 4: FAR, FRR, and AUC performances for the eight scenarios.

Scenario             | FAR, printed photo (%) | FAR, cut photo (%) | FAR, video replay (%) | FAR, total (%) | FRR (%) | AUC
YCbCr                | 5.53 | 4.26 | 3.52 | 4.44 | 20.57 | 0.72
HSV                  | 2.01 | 0.00 | 4.05 | 2.02 | 10.77 | 0.94
YCbCr_lbp            | 2.99 | 3.49 | 5.11 | 3.87 |  2.65 | 0.84
HSV_lbp              | 1.71 | 0.63 | 4.51 | 2.29 | 17.83 | 0.96
YCbCr + HSV          | 2.22 | 0.00 | 3.26 | 1.82 |  9.19 | 0.95
YCbCr_lbp + HSV      | 2.05 | 0.05 | 2.58 | 1.56 |  9.08 | 0.97
YCbCr_lbp + HSV_lbp  | 1.30 | 0.81 | 2.44 | 1.52 |  4.90 | 0.97
Proposed approach    | 3.78 | 1.35 | 2.25 | 2.46 |  1.17 | 0.96
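As a quick sanity check, applying equation (5) to the "total" FAR column and the FRR column of Table 4 yields a per-scenario balanced error (computed here for illustration; these derived values are not the HTERs reported in Table 2):

```python
# (total FAR %, FRR %) pairs taken from Table 4.
table4 = {
    "YCbCr_lbp + HSV_lbp": (1.52, 4.90),
    "Proposed approach": (2.46, 1.17),
}
# Equation (5): HTER = (FAR + FRR) / 2.
hter = {name: (far + frr) / 2 for name, (far, frr) in table4.items()}
# The proposed scheme trades a slightly higher FAR for a much lower
# FRR, giving a noticeably lower balanced error at this operating point.
```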
similar performance but uses six color-space channels, while the proposed method uses only three color-space channels, leading to a faster calculation speed.

5. Conclusions

In this paper, we proposed a face antispoofing method utilizing CNN learning and inference and constructed important parameters by extracting texture information via an LBP from the face image color space. CASIA-FASD was used as the dataset for performance verification. Images were extracted from videos and divided into printed photo attacks, cut photo attacks, and video replay attacks. These images extracted from the CASIA-FASD dataset were used for both training and evaluation. It was confirmed that the detection performance was improved by working in the Cb, S, and V color space separated from the face image, which is useful for antispoofing. In previous studies, a 6-channel (YCbCr + HSV) color space was typically used, leading to large computational costs. On the contrary, the proposed approach reduces the computational load by instead considering only three (Cb, S, V) color-space channels. Considering the AI FPGA board, the performance of the existing methods was evaluated and compared with that of the proposed scheme, and it was confirmed that the proposed method can be used effectively in edge environments.
As future work, we want to verify the performance against other well-known face spoofing datasets. In addition, we plan to conduct cross-database performance tests.

Data Availability

The data used to support the findings were included in this paper.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (no. 5199990914048), and this research was supported by the Basic Science Research Program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.

References

[1] Z. Akhtar and G. Luca Foresti, "Face spoof attack recognition using discriminative image patches," Journal of Electrical and Computer Engineering, vol. 2016, Article ID 4721849, 14 pages, 2016.
[2] H. K. Jee, S. U. Jung, and J. H. Yoo, "Liveness detection for embedded face recognition system," International Journal of Biological and Medical Sciences, vol. 1, pp. 235–238, 2006.
[3] W. Bao, H. Li, N. Li, and W. Jiang, "A liveness detection method for face recognition based on optical flow field," in Proceedings of the 2009 International Conference on Image Analysis and Signal Processing (IASP), pp. 233–236, IEEE, Linhai, China, April 2009.
[4] J. Li, Y. Wang, T. Tan, and A. K. Jain, "Live face detection based on the analysis of Fourier spectra," in Proceedings of SPIE - International Society for Optics and Photonics, pp. 296–303, Choufu, Japan, March 2004.
[5] A. D. S. Pinto, H. Pedrini, W. R. Schwartz, and A. Rocha, "Video-based face spoofing detection through visual rhythm analysis," in Proceedings of the 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), pp. 221–228, IEEE, Ouro Preto, Brazil, August 2012.
[6] W. R. Schwartz, A. Rocha, and H. Pedrini, "Face spoofing detection through partial least squares and low-level descriptors," in Proceedings of the 2011 International Joint Conference on Biometrics (IJCB), pp. 1–8, IEEE, Washington, WA, USA, October 2011.
[7] A. Anjos and S. Marcel, "Counter-measures to photo attacks in face recognition: a public database and a baseline," in Proceedings of the 2011 International Joint Conference on Biometrics (IJCB), pp. 1–7, IEEE, Washington, WA, USA, October 2011.
[8] J. Määttä, A. Hadid, and M. Pietikäinen, "Face spoofing detection from single images using micro-texture analysis," in Proceedings of the 2011 International Joint Conference on Biometrics (IJCB), pp. 1–7, IEEE, Washington, WA, USA, October 2011.
[9] T. Ojala, M. Pietikäinen, and T. Mäenpää, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
[10] J. Yang, Z. Lei, and S. Z. Li, "Learn convolutional neural network for face anti-spoofing," 2014, https://arxiv.org/abs/1408.5601.
[11] O. Lucena, A. Junior, V. Moia, R. Souza, E. Valle, and R. Lotufo, "Transfer learning using convolutional neural networks for face anti-spoofing," in Lecture Notes in Computer Science, Springer, Berlin, Germany, 2017.
[12] Z. Xu, S. Li, and W. Deng, "Learning temporal features using LSTM-CNN architecture for face anti-spoofing," in Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), pp. 141–145, IEEE, Kuala Lumpur, Malaysia, November 2015.
[13] T. Pereira, A. Anjos, J. M. De Martino, and S. Marcel, "LBP-TOP based countermeasure against face spoofing attacks," in Proceedings of the Asian Conference on Computer Vision, pp. 121–132, Springer, Daejeon, Korea, November 2012.
[14] K. Patel, H. Han, and A. K. Jain, "Secure face unlock: spoof detection on smartphones," IEEE Transactions on Information Forensics and Security, vol. 11, no. 10, pp. 2268–2283, 2016.
[15] Z. Boulkenafet, J. Komulainen, and A. Hadid, "Face antispoofing using speeded-up robust features and Fisher vector encoding," IEEE Signal Processing Letters, vol. 24, no. 2, pp. 141–145, 2017.
[16] J. Komulainen, A. Hadid, and M. Pietikäinen, "Context based face anti-spoofing," in Proceedings of the 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–8, Arlington, VA, USA, September 2013.
[17] P. Bruno, C. Michelassi, and R. Anderson, "Face liveness detection under bad illumination conditions," in Proceedings of the 2011 18th IEEE International Conference on Image Processing (ICIP 2011), pp. 3557–3560, IEEE, Brussels, Belgium, September 2011.
[18] J. Komulainen, A. Hadid, and M. Pietikäinen, "Face spoofing detection using dynamic texture," in Asian Conference on Computer Vision, pp. 146–157, Springer, Daejeon, Korea, November 2012.
[19] T. A. Siddiqui, S. Bharadwaj, T. I. Dhamecha et al., "Face anti-spoofing with multifeature videolet aggregation," in Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 1035–1040, IEEE, Cancun, Mexico, December 2016.
[20] G. Pan, L. Sun, Z. Wu, and S. Lao, "Eyeblink-based anti-spoofing in face recognition from a generic webcamera," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1–8, Rio de Janeiro, Brazil, October 2007.
[21] L. Li, X. Feng, Z. Boulkenafet, Z. Xia, M. Li, and A. Hadid, "An original face anti-spoofing approach using partial convolutional neural network," in Proceedings of the Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA'16), pp. 1–6, Oulu, Finland, December 2016.
[22] K. Patel, H. Han, and A. K. Jain, "Cross-database face antispoofing with robust feature representation," in Proceedings of the Chinese Conference on Biometric Recognition, pp. 611–619, Springer, Chengdu, China, October 2016.
[23] A. George and S. Marcel, "Deep pixel-wise binary supervision for face presentation attack detection," in Proceedings of the 2019 International Conference on Biometrics, Crete, Greece, June 2019.
[24] A. Jourabloo, Y. Liu, and X. Liu, "Face de-spoofing: anti-spoofing via noise modeling," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 290–306, Munich, Germany, September 2018.
[25] M. Sajid, N. Ali, S. Hanif Dar et al., "Data augmentation-assisted makeup-invariant face recognition," Mathematical Problems in Engineering, vol. 2018, Article ID 2850632, 10 pages, 2018.
[26] M. Alghaili, Z. Li, A. Hamdi, and R. Ali, "Face filter: face identification with deep learning and filter algorithm," Scientific Programming, vol. 2020, Article ID 7846264, 9 pages, 2020.
[27] Y. Xu, W. Yan, G. Yang et al., "Joint face detection and alignment using face as point," Scientific Programming, vol. 2020, Article ID 7845384, 8 pages, 2020.
[28] Z. Boulkenafet, J. Komulainen, and A. Hadid, "Face antispoofing based on color texture analysis," in Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), pp. 2636–2640, Quebec, Canada, September 2015.
[29] S. H. Lee, H. Kim, and Y. M. Ro, "A comparative study of color texture features for face analysis," in Computational Color Imaging (CCIW), S. Tominaga, R. Schettini, and A. Trémeau, Eds., Springer, Berlin, Heidelberg.
[30] G. Kim, S. Eum, J. Suhr, D. Kim, K. Park, and J. Kim, "Face liveness detection based on texture and frequency analyses,"