Z. Ming, Z. Yu, M. Al-Ghadi, M. Visani, M. M. Luqman and J.-C. Burie, "ViTransPAD: Video Transformer Using Convolution and Self-Attention for Face Presentation Attack Detection," 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 4248-4252, doi: 10.1109/ICIP46576.2022.9897560.
ViTransPAD: Video Transformer Using Convolution and Self-Attention for Face Presentation Attack Detection
|Authors:||Ming, Zuheng1; Yu, Zitong2; Al-Ghadi, Musab1; Visani, M.; Luqman, M. M.; Burie, J.-C.|
1L3i, University of La Rochelle, La Rochelle, France
2CMVS, University of Oulu, Finland
3School of Information & Communication Technology, Hanoi University of Science and Technology, Vietnam
|Online Access:||PDF Full Text (PDF, 0.6 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2023041135862|
|Publisher:||Institute of Electrical and Electronics Engineers|
|Publish Date:||2023-04-11|
Face Presentation Attack Detection (PAD) is an important measure to prevent spoofing attacks on face biometric systems. Many works based on Convolutional Neural Networks (CNNs) formulate face PAD as an image-level binary classification task without considering the context. Alternatively, Vision Transformers (ViT), which use self-attention to attend to the context of an image, have become mainstream in face PAD. Inspired by ViT, we propose a Video-based Transformer for face PAD (ViTransPAD) with short/long-range spatio-temporal attention, which can not only focus on local details with short-range attention within a frame but also capture long-range dependencies across frames. Instead of using coarse single-scale image patches as in ViT, we propose a Multi-scale Multi-Head Self-Attention (MsMHSA) module that assigns multi-scale patch partitions of the Q, K, V feature maps to different heads of a single transformer in a coarse-to-fine manner, which enables learning a fine-grained representation for pixel-level discrimination in face PAD. Because pure transformers lack the inductive biases of convolutions, we also introduce convolutions into ViTransPAD to integrate the desirable properties of CNNs. Extensive experiments show the effectiveness of the proposed ViTransPAD, which achieves a preferable accuracy-computation balance and can serve as a new backbone for face PAD.
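The multi-scale idea behind MsMHSA can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the learned Q/K/V projections and the convolutional components are omitted, the patch sizes are chosen arbitrarily, and each head simply attends over tokens obtained from its own patch partition of the feature map, coarse heads seeing few large tokens and fine heads seeing many small ones.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ms_mhsa(x, patch_sizes):
    """Multi-scale multi-head self-attention over one frame (sketch).

    x: (H, W, C) feature map.  One head per entry of patch_sizes; head i
    tokenises its channel slice with patches of size p_i x p_i, so the
    heads cover the same frame at different granularities.  Learned
    Q/K/V projections are omitted for brevity: each token attends with
    its raw features.
    """
    H, W, C = x.shape
    d = C // len(patch_sizes)           # channels assigned to each head
    head_outs = []
    for i, p in enumerate(patch_sizes):
        assert H % p == 0 and W % p == 0, "patch size must divide the map"
        xh = x[..., i * d:(i + 1) * d]  # this head's channel slice
        # partition into (H//p)*(W//p) patches, flatten each to one token
        t = xh.reshape(H // p, p, W // p, p, d).transpose(0, 2, 1, 3, 4)
        tokens = t.reshape(-1, p * p * d)            # (N, p*p*d)
        # scaled dot-product self-attention among this head's tokens
        attn = softmax(tokens @ tokens.T / np.sqrt(tokens.shape[1]))
        out = attn @ tokens                          # (N, p*p*d)
        # fold the attended tokens back into an (H, W, d) map
        out = out.reshape(H // p, W // p, p, p, d).transpose(0, 2, 1, 3, 4)
        head_outs.append(out.reshape(H, W, d))
    return np.concatenate(head_outs, axis=-1)        # (H, W, C)

feat = np.random.default_rng(0).standard_normal((16, 16, 8))
out = ms_mhsa(feat, patch_sizes=[8, 4, 2, 1])        # coarse-to-fine heads
print(out.shape)                                     # (16, 16, 8)
```

The coarse-to-fine head assignment is what lets a single transformer layer combine context at several scales: the `p = 8` head mixes information across whole regions, while the `p = 1` head performs the pixel-level discrimination the abstract refers to.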
IEEE International Conference on Image Processing
|Pages:||4248 - 4252|
2022 IEEE International Conference on Image Processing: Proceedings
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences|
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.