
【Machine Learning Practice】Face Liveness Detection

Author: AIPlayer

Face anti-spoofing (liveness detection) determines whether the face currently presented to the camera belongs to a live person before face recognition is performed, which effectively prevents spoofing attacks.

1. Overview

1. Common spoofing methods

Common face recognition spoofing attacks include: (1) photos: printed color photos of a face, (2) video: a replayed recording of a face, and (3) 3D masks: 3D-printed head models.

2. Liveness detection methods

Face liveness detection can essentially be treated as a binary classification problem between real and fake faces. The basic approaches are:

  • Based on texture features

Exploiting the differences in texture detail between real and fake faces, features such as LBP, DoG, and SURF are extracted from the face image and used to train a binary classifier such as an SVM or LDA (a minimal sketch follows this list). This approach tends to be sensitive to lighting and camera conditions and is therefore less robust.

  • Based on motion information

Motion cues specific to the face region are extracted from video to distinguish real from fake faces, for example by asking the user to blink, move the mouth, nod, or shake the head. This approach requires the user's cooperation, so it is mostly used for financial security authentication and is not well suited to real-time access control systems.

  • Based on deep learning

Deep networks such as CNNs and RNNs are trained as binary classifiers to distinguish real from fake faces. Because deep learning requires large amounts of data while face spoofing datasets cover only a limited variety of scenes and attack types, the test distribution often differs greatly from the training distribution; the model is prone to overfitting and performs poorly on the test set.

  • With the help of auxiliary devices

External auxiliary equipment such as near-infrared cameras can be used: images captured in selected near-infrared wavelength bands are combined with visible-light imaging to detect face spoofing. This approach places strict requirements on the acquisition conditions, and its cost is higher than that of an ordinary visible-light system.
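
As a concrete illustration of the texture-feature approach above, the following is a minimal sketch that trains an LBP-histogram + SVM classifier. All function names and parameters here are illustrative assumptions, not taken from any particular paper or repository.

```python
# A minimal sketch of the texture-feature approach: uniform LBP histograms as
# features and an SVM as the real/fake classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, p=8, r=1):
    """Normalized uniform-LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P=p, R=r, method="uniform")
    n_bins = p + 2  # uniform LBP with P points yields P + 2 distinct codes
    hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins))
    return hist.astype(np.float32) / (hist.sum() + 1e-6)

def train_texture_classifier(gray_faces, labels):
    """gray_faces: list of grayscale face crops; labels: 1 = real, 0 = spoof."""
    features = np.stack([lbp_histogram(face) for face in gray_faces])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf
```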

3. Commonly used datasets


Common datasets for face liveness detection

  • NUAA - http://parnec.nuaa.edu.cn/xtan/data/nuaaimposterdb.html, http://parnec.nuaa.edu.cn/xtan/NUAAImposterDB_download.html
  • Replay-Attack dataset - https://www.idiap.ch/dataset/replayattack
  • CASIA Face Anti-Spoofing Database - http://www.cbsr.ia.ac.cn/english/FaceAntiSpoofDatabases.asp
  • MSU Mobile Face Spoofing Database (MSU MFSD) - http://biometrics.cse.msu.edu/Publications/Databases/MSUMobileFaceSpoofing/index.htm#Download_instructions

2. Train a simple CNN model based on HSV + YCrCb color features

Project Address:

https://github.com/Oreobird/Face-Anti-Spoofing

1. Network model structure

The model is implemented with TensorFlow Keras and is relatively simple. As shown in the figure below, it has a multi-input, single-output structure: the face image is converted into the HSV and YCrCb color spaces, each version is fed into a VGG16 base network for feature extraction, the two feature maps are fused, several fully connected layers follow, and a softmax layer finally outputs the classification probabilities for real and fake faces.

The training and testing of the model are encapsulated in the FasNet class in models.py.


Network structure
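
The following is a rough Keras sketch of the structure shown above: two color-space inputs, VGG16 feature extraction, feature fusion, a small fully connected head, and a softmax output. The shared backbone, pooling, and layer sizes are simplifying assumptions; the actual model is the one implemented in the FasNet class of models.py.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Concatenate, Dense, Dropout, GlobalAveragePooling2D

def build_fusion_model(input_shape=(224, 224, 3), num_classes=2):
    # Two inputs: the same face crop in HSV and in YCrCb.
    hsv_in = Input(shape=input_shape, name="hsv_input")
    ycrcb_in = Input(shape=input_shape, name="ycrcb_input")

    # VGG16 without its classifier head as the feature extractor,
    # applied to both color-space inputs.
    backbone = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    hsv_feat = GlobalAveragePooling2D()(backbone(hsv_in))
    ycrcb_feat = GlobalAveragePooling2D()(backbone(ycrcb_in))

    # Fuse the two feature vectors and classify real vs. fake with softmax.
    x = Concatenate()([hsv_feat, ycrcb_feat])
    x = Dense(256, activation="relu")(x)
    x = Dropout(0.5)(x)
    out = Dense(num_classes, activation="softmax", name="real_fake")(x)

    model = Model(inputs=[hsv_in, ycrcb_in], outputs=out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```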

2. Dataset

The NUAA dataset is used. The training data is processed in datasets.py, where DataSet is a general data-reading class and the NUAA class builds on the interface provided by DataSet to encapsulate the operations specific to the NUAA dataset.
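
The per-image preprocessing implied above can be sketched with OpenCV as follows: read a face crop, convert it to HSV and YCrCb, and scale to [0, 1]. The target size is a placeholder; the real pipeline lives in the DataSet/NUAA classes of datasets.py.

```python
import cv2
import numpy as np

def load_face_pair(img_path, size=(224, 224)):
    bgr = cv2.imread(img_path)          # OpenCV reads images as BGR
    bgr = cv2.resize(bgr, size)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return (hsv.astype(np.float32) / 255.0,
            ycrcb.astype(np.float32) / 255.0)
```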

3. Testing

Dlib detects faces in real time in the video frames read from the camera, and each detected face is fed to the trained model for inference.
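
A minimal sketch of such a test loop is shown below: dlib's frontal face detector finds faces in webcam frames, and each crop is converted to HSV/YCrCb and classified by the trained model. Here `model` is assumed to be the fusion model built above; the 0.5 threshold and the "class 1 = real" label order are illustrative assumptions.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):
        x1, y1 = max(rect.left(), 0), max(rect.top(), 0)
        x2, y2 = rect.right(), rect.bottom()
        face = frame[y1:y2, x1:x2]
        if face.size == 0:
            continue
        # Convert the crop to the two color-space inputs expected by the model.
        face = cv2.resize(face, (224, 224))
        hsv = cv2.cvtColor(face, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
        ycrcb = cv2.cvtColor(face, cv2.COLOR_BGR2YCrCb).astype(np.float32) / 255.0
        prob = model.predict([hsv[None, ...], ycrcb[None, ...]], verbose=0)[0]
        label = "real" if prob[1] > 0.5 else "fake"
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("liveness", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```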

3. Summary

This article briefly summarized the basics of face liveness detection, trained a simple CNN classification model with TensorFlow + Keras on the HSV and YCrCb color features of face images, and then used the face detection module of the Dlib library to build a real-time video face liveness detection demo.
