
Z. Yu et al., "Searching Central Difference Convolutional Networks for Face Anti-Spoofing," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 5294-5304, doi: 10.1109/CVPR42600.2020.00534

Searching central difference convolutional networks for face anti-spoofing

Author: Yu, Zitong (1); Zhao, Chenxu (2); Wang, Zezheng (3);
Organizations: (1) CMVS, University of Oulu
(2) Mininglamp Academy of Sciences, Mininglamp Technology
(3) Aibee
(4) Northwestern Polytechnical University
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 4.6 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202102195368
Language: English
Published: Institute of Electrical and Electronics Engineers, 2020
Publish Date: 2021-02-19

Abstract

Face anti-spoofing (FAS) plays a vital role in face recognition systems. Most state-of-the-art FAS methods 1) rely on stacked convolutions and expert-designed networks, which are weak at describing detailed fine-grained information and easily become ineffective when the environment varies (e.g., different illumination), and 2) prefer to use long sequences as input to extract dynamic features, making them difficult to deploy in scenarios that need a quick response. Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information. A network built with CDC, called the Central Difference Convolutional Network (CDCN), provides more robust modeling capacity than its counterpart built with vanilla convolution. Furthermore, over a specifically designed CDC search space, Neural Architecture Search (NAS) is utilized to discover a more powerful network structure (CDCN++), which can be assembled with a Multiscale Attention Fusion Module (MAFM) to further boost performance. Comprehensive experiments on six benchmark datasets show that the proposed method 1) achieves superior performance on intra-dataset testing (especially 0.2% ACER in Protocol-1 of the OULU-NPU dataset), and 2) generalizes well on cross-dataset testing (particularly 6.5% HTER from CASIA-MFSD to Replay-Attack). The code is available at https://github.com/ZitongYu/CDCN.
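The abstract's core idea, aggregating intensity and gradient information inside a single convolution, is compact enough to sketch in code. Below is a minimal PyTorch illustration of a central difference convolution; the class name Conv2dCD, the argument defaults, and the blending weight theta = 0.7 are assumptions for illustration, not taken from this record (see the linked repository for the authors' implementation). The key algebraic step: the central-difference term sums to the centre pixel weighted by the kernel's spatial sum, so CDC reduces to a vanilla convolution minus a 1x1 convolution with those summed weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Conv2dCD(nn.Module):
    """Sketch of a Central Difference Convolution (CDC) layer.

    Output = theta * (central-difference conv) + (1 - theta) * (vanilla conv),
    which simplifies to:
        vanilla_conv(x) - theta * conv1x1(x, kernel summed over spatial dims).
    theta = 0 recovers a plain convolution; names/defaults here are illustrative.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1,
                 padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out_vanilla = self.conv(x)  # intensity term
        if self.theta == 0:
            return out_vanilla
        # Gradient term: subtracting the centre value weighted by the kernel's
        # spatial sum is equivalent to a 1x1 conv with the summed weights.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, stride=self.conv.stride, padding=0)
        return out_vanilla - self.theta * out_diff


if __name__ == "__main__":
    # Quick shape check on a dummy batch.
    layer = Conv2dCD(3, 64)
    y = layer(torch.randn(2, 3, 256, 256))
    print(y.shape)  # torch.Size([2, 64, 256, 256])
```

Because the gradient term folds into a 1x1 convolution, this formulation adds only marginal cost over a standard layer, which is consistent with the abstract's emphasis on frame-level (single-image) deployment.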


Series: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN: 1063-6919
ISSN-E: 2575-7075
ISSN-L: 1063-6919
ISBN: 978-1-7281-7168-5
ISBN Print: 978-1-7281-7169-2
Pages: 5294 - 5304
Article number: 9156660
DOI: 10.1109/CVPR42600.2020.00534
OADOI: https://oadoi.org/10.1109/CVPR42600.2020.00534
Host publication: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020
Conference: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
213 Electronic, automation and communications engineering, electronics
Funding: This work was supported by the Academy of Finland for project MiGA (grant 316765) and the ICT 2023 project (grant 328115), and by Infotech Oulu. The authors also wish to acknowledge CSC - IT Center for Science, Finland, for computational resources.
Academy of Finland Grant Numbers: 316765, 328115
Detailed Information: 316765 (Academy of Finland Funding decision); 328115 (Academy of Finland Funding decision)
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.