University of Oulu

Yu, Z., Qin, Y., Zhao, H., Li, X., & Zhao, G. (2021). Dual-Cross Central Difference Network for Face Anti-Spoofing. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21) (pp. 1281-1287). International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2021/177

Dual-cross central difference network for face anti-spoofing

Saved in:
Author: Yu, Zitong1; Qin, Yunxiao2; Zhao, Hengshuang3; Li, Xiaobai1; Zhao, Guoying1
Organizations: 1CMVS, University of Oulu
2Northwestern Polytechnical University
3University of Oxford
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2021110453716
Language: English
Published: International Joint Conferences on Artificial Intelligence Organization, 2021
Publish Date: 2021-11-04
Description:

Abstract

Face anti-spoofing (FAS) plays a vital role in securing face recognition systems. Recently, central difference convolution (CDC) has shown excellent representation capacity for the FAS task by leveraging local gradient features. However, aggregating central difference clues from all neighbors/directions simultaneously makes CDC redundant and sub-optimal during training. In this paper, we propose two Cross Central Difference Convolutions (C-CDC), which exploit the differences between the center and its surrounding sparse local features along the horizontal/vertical and diagonal directions, respectively. Interestingly, with only five-ninths of the parameters and lower computational cost, C-CDC even outperforms the full directional CDC. Based on these two decoupled C-CDCs, a powerful Dual-Cross Central Difference Network (DC-CDN) is established with Cross Feature Interaction Modules (CFIM) for mutual relation mining and enhancement of local detailed representations. Furthermore, a novel Patch Exchange (PE) augmentation strategy for FAS is proposed, which simply exchanges face patches, together with their dense labels, between random samples. The augmented samples thus contain richer live/spoof patterns and more diverse domain distributions, which benefits intrinsic and robust feature learning. Comprehensive experiments on four benchmark datasets with three testing protocols demonstrate our state-of-the-art performance.
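As a rough illustration of the decoupled C-CDC described in the abstract, below is a minimal PyTorch sketch assuming the standard central difference convolution formulation, with the 3x3 kernel masked to either the horizontal/vertical or the diagonal cross (five of the nine positions, hence the five-ninths parameter count). The class name CrossCDConv and the theta/mode arguments are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossCDConv(nn.Module):
    """Sketch of a Cross Central Difference Convolution (C-CDC): a 3x3
    convolution restricted to a sparse cross neighborhood, mixing vanilla
    aggregation with central-difference aggregation via theta."""

    def __init__(self, in_ch, out_ch, theta=0.7, mode="hv"):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta  # trade-off between intensity and gradient terms
        # Keep only 5 of the 9 kernel positions: the center plus either the
        # horizontal/vertical ("hv") or the diagonal ("diag") neighbors.
        mask = torch.zeros(1, 1, 3, 3)
        if mode == "hv":
            mask[..., 1, :] = 1.0
            mask[..., :, 1] = 1.0
        else:  # "diag"
            mask[..., 0, 0] = 1.0
            mask[..., 0, 2] = 1.0
            mask[..., 2, 0] = 1.0
            mask[..., 2, 2] = 1.0
            mask[..., 1, 1] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        w = self.conv.weight * self.mask          # sparse cross kernel
        vanilla = F.conv2d(x, w, padding=1)       # plain aggregation term
        # Central-difference term: sum_n w_n * (x(p0 + pn) - x(p0))
        #                        = vanilla - (sum_n w_n) * x(p0)
        w_sum = w.sum(dim=(2, 3), keepdim=True)   # 1x1 kernel of summed weights
        diff = vanilla - F.conv2d(x, w_sum)
        return (1.0 - self.theta) * vanilla + self.theta * diff

# Usage: two decoupled branches, one per cross direction, as the abstract describes.
x = torch.randn(2, 3, 32, 32)
hv_branch = CrossCDConv(3, 8, theta=0.7, mode="hv")
diag_branch = CrossCDConv(3, 8, theta=0.7, mode="diag")
print(hv_branch(x).shape, diag_branch(x).shape)  # both torch.Size([2, 8, 32, 32])

In this sketch the central-difference term is obtained as the vanilla response minus a 1x1 convolution with the spatially summed kernel weights, which avoids materializing explicit neighbor differences.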


ISBN Print: 978-0-9992411-9-6
Pages: 1281-1287
DOI: 10.24963/ijcai.2021/177
OADOI: https://oadoi.org/10.24963/ijcai.2021/177
Host publication: Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21)
Conference: Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21)
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Subjects:
Funding: This work was supported by the Academy of Finland through project MiGA (grant 316765), the ICT 2023 project (grant 328115), and project 6+E (grant 323287), by Infotech Oulu, and by project PhInGAIN (grant 200414) funded by the Finnish Work Environment Fund.
Academy of Finland Grant Number: 316765, 328115, 323287, 200414
Detailed Information: 316765 (Academy of Finland Funding decision); 328115 (Academy of Finland Funding decision); 323287 (Academy of Finland Funding decision); 200414 (Academy of Finland Funding decision)
Copyright information: © 2021 International Joint Conferences on Artificial Intelligence.