Anomaly Detection in Face Anti-spoofing: Algorithms, Training Set Construction, and Bias Analysis
Date
2023-12-07
Publisher
Durham University
Abstract
Face recognition is a mature and trustworthy method for identifying individuals.
Thanks to the availability of high-definition cameras and accompanying devices,
this particular biometric recognition modality is widely regarded as the fastest and
least obtrusive option. Despite advances in face recognition systems, successful
spoofing attacks remain possible. Various anti-spoofing
algorithms, also known in the literature as liveness detection tests and presentation
attack detection algorithms, have been devised to counteract such attacks.
The first contribution of this research is to demonstrate the effectiveness of certain
simple and direct spoofing attacks. Our approach uses ResNet50, a highly reliable
deep neural network, as a binary classifier. We assess
its performance by subjecting it to adversarial attacks that involve manipulating
the saturation component of imposter images. We have found that it is particularly
vulnerable to spoofing attacks that employ processed imposter images. To the best
of our knowledge, this study is the first exploration of adversarial attacks on deep
neural networks in face anti-spoofing detection. In
addition, we conducted an experiment that revealed the potential of the proposed
adversarial attack to be converted into a direct presentation attack.
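For illustration only, the following Python sketch shows one way such a saturation manipulation could be mounted against a ResNet50 binary classifier; the model weights, file name, and label convention are assumptions made here, not the setup used in the thesis.

    import cv2
    import numpy as np
    import torch
    import torchvision
    from torchvision import transforms

    # Hypothetical anti-spoofing classifier: ResNet50 with a two-class head
    # (trained weights would need to be loaded separately).
    model = torchvision.models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def scale_saturation(bgr_image, factor):
        # Convert to HSV, scale the saturation channel, convert back to BGR.
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[:, :, 1] = np.clip(hsv[:, :, 1] * factor, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    def predict(bgr_image):
        # Returns the predicted class index (e.g. 0 = attack, 1 = bona fide).
        rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
        x = preprocess(rgb).unsqueeze(0)
        with torch.no_grad():
            return model(x).argmax(dim=1).item()

    # Sweep saturation factors on an imposter image and look for decisions
    # that flip from "attack" to "bona fide".
    imposter = cv2.imread("imposter.png")  # hypothetical file
    for factor in (0.25, 0.5, 0.75, 1.0, 1.25, 1.5):
        print(factor, predict(scale_saturation(imposter, factor)))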
In a second contribution, we propose an alternative approach that incorporates
in-the-wild images and non-specialised databases into anomaly detection to improve
the face anti-spoofing algorithm’s performance on unseen databases. We developed
a method for detecting anomalies in face anti-spoofing by employing a convolutional
autoencoder. We assessed its effectiveness using the NUAA database, which
was not used during training. Our results indicated improved performance when
incorporating in-the-wild face images and face data from non-specialised databases
into the training dataset.
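As a rough illustration of this anomaly-detection setup, the sketch below (PyTorch; the architecture, input size, and data loader are assumptions rather than the thesis implementation) trains a small convolutional autoencoder on bona fide faces only and uses the reconstruction error as the anomaly score.

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        # A small convolutional autoencoder for 3x64x64 face crops.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_score(model, x):
        # Per-image mean squared reconstruction error; spoof images the
        # autoencoder has never seen should reconstruct poorly and score high.
        with torch.no_grad():
            return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

    model = ConvAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    # Training loop over bona fide faces only ("loader" would be a DataLoader
    # of 3x64x64 crops drawn from the mixed training sources described above).
    # for batch in loader:
    #     optimizer.zero_grad()
    #     loss = criterion(model(batch), batch)
    #     loss.backward()
    #     optimizer.step()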
Transformers are emerging as the new gold standard in various computer vision
applications and have already been used in face anti-spoofing, demonstrating
competitive performance. In a third contribution, we propose a network for anomaly
detection in face anti-spoofing with a ViT transformer and ResNet18 as the backbone
and a decoder as the head. We then evaluate several existing anomaly detectors and
compare their results with our proposed method, and we also use a ViT with an MLP
head as a binary-classifier baseline against our model. Our comprehensive
testing and evaluation have demonstrated that this proposed approach competes
admirably as a method for detecting anomalies in the domain of face anti-spoofing.
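A minimal sketch of such a two-backbone network is given below (Python, using the timm and torchvision libraries); the fusion scheme, decoder design, and reconstruction target are assumptions made for illustration, not the architecture reported in the thesis.

    import torch
    import torch.nn as nn
    import timm
    from torchvision import models

    class HybridAnomalyNet(nn.Module):
        # ViT and ResNet18 backbones feeding a small decoder head; the anomaly
        # score is the error of reconstructing a low-resolution copy of the input.
        def __init__(self):
            super().__init__()
            self.vit = timm.create_model("vit_base_patch16_224",
                                         pretrained=False, num_classes=0)
            resnet = models.resnet18(weights=None)
            resnet.fc = nn.Identity()
            self.resnet = resnet
            embed_dim = self.vit.num_features + 512  # 768 (ViT) + 512 (ResNet18)
            self.decoder = nn.Sequential(
                nn.Linear(embed_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 3 * 32 * 32), nn.Sigmoid(),
            )
            self.downsample = nn.AdaptiveAvgPool2d(32)  # reconstruction target

        def forward(self, x):
            feats = torch.cat([self.vit(x), self.resnet(x)], dim=1)
            recon = self.decoder(feats).view(-1, 3, 32, 32)
            return recon, self.downsample(x)

    model = HybridAnomalyNet()
    x = torch.rand(2, 3, 224, 224)  # dummy batch of face crops
    recon, target = model(x)
    score = ((recon - target) ** 2).mean(dim=(1, 2, 3))  # per-image anomaly score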
Finally, there are only a few papers that specifically address the issue of racial
bias in anti-spoofing. As a fourth contribution, we present a systematic study of race
bias in face anti-spoofing with three key characteristics: a focus on potential bias
in bona fide errors, where significant ethical and legal issues lie; analysis of
several stages of the classification process; and treatment of the threshold that
determines the classifier's operating point on the ROC curve as a user-defined
variable. We do not assume the threshold is fixed by the vendor of the biometric verification
system through a black-box process. To the best of our knowledge, this is the
first investigation into racial bias within the face anti-spoofing domain that employs
anomaly detection techniques while also incorporating a non-specialised database
for analysis. Our results show that racial bias in face anti-spoofing is influenced by
factors beyond mean response values, such as different variances, bimodality, and
outliers.
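The following sketch (with purely synthetic, hypothetical score distributions, not thesis data) illustrates why such factors matter: two groups with the same mean anomaly score but different variances yield different bona fide error rates at any user-defined threshold.

    import numpy as np

    def bpcer(bona_fide_scores, threshold):
        # Bona fide presentation classification error rate: fraction of genuine
        # samples whose anomaly score exceeds the operating threshold.
        return float(np.mean(bona_fide_scores > threshold))

    rng = np.random.default_rng(0)
    # Synthetic bona fide scores for two demographic groups: same mean,
    # different variance (group B wider).
    scores = {
        "group_A": rng.normal(loc=0.30, scale=0.05, size=1000),
        "group_B": rng.normal(loc=0.30, scale=0.10, size=1000),
    }

    for threshold in (0.35, 0.40, 0.45):
        print(threshold, {g: round(bpcer(s, threshold), 3) for g, s in scores.items()})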
Overall, this thesis contributes to the ongoing development of anti-spoofing techniques
and investigates some important issues regarding the potential for bias in
these systems.
Keywords
Face spoofing attacks, Face anti-spoofing, Adversarial attacks, Anomaly detection, Racial bias.