B. Chen, X. Liu, Y. Zheng, G. Zhao and Y.-Q. Shi, "A Robust GAN-Generated Face Detection Method Based on Dual-Color Spaces and an Improved Xception," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 6, pp. 3527-3538, June 2022, doi: 10.1109/TCSVT.2021.3116679.
A robust GAN-generated face detection method based on dual-color spaces and an improved Xception
|Author:||Chen, Beijing1,2,3; Liu, Xin1,2,3; Zheng, Yuhui1,2,3; Zhao, Guoying4; Shi, Yun-Qing5|
1Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing 210044, China
2School of Computer, Nanjing University of Information Science and Technology, Nanjing 210044, China
3Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing 210044, China
4Center for Machine Vision and Signal Analysis, University of Oulu, 90014 Oulu, Finland
5Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102 USA
|Online Access:||PDF Full Text (PDF, 1.4 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe2022090257046|
|Publisher:||Institute of Electrical and Electronics Engineers (IEEE)|
|Publish Date:||2022-09-02|
In recent years, generative adversarial networks (GANs) have been widely used to generate realistic fake face images, which can easily deceive human beings. Several methods have been proposed to detect these images; however, their detection performance degrades greatly when the test samples are post-processed. In this paper, experimental studies on detecting post-processed GAN-generated face images reveal that (a) both the luminance component and the chrominance components play an important role in detection, and (b) the RGB and YCbCr color spaces achieve better performance than the HSV and Lab color spaces. Therefore, to enhance robustness, both the luminance and chrominance components of dual-color spaces (RGB and YCbCr) are considered so as to utilize color information effectively. In addition, the convolutional block attention module and a multilayer feature aggregation module are introduced into the Xception model to enhance its feature representation power and to aggregate multilayer features, respectively. Finally, a robust dual-stream network is designed by integrating the dual-color spaces RGB and YCbCr with the improved Xception model. Experimental results demonstrate that our method outperforms some existing methods, especially in its robustness against different types of post-processing operations, such as JPEG compression, Gaussian blurring, gamma correction, and median filtering.
|Journal:||IEEE Transactions on Circuits and Systems for Video Technology|
|Pages:||3527 - 3538|
|Type of Publication:||A1 Journal article – refereed|
|Field of Science:||113 Computer and information sciences|
This work was supported in part by the National Natural Science Foundation of China under Grant 62072251, Grant U20B2065, and Grant 61972206; in part by the Natural Science Foundation of Jiangsu Province under Grant BK20211539; in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.