Volume 31, Issue 1, Jan. 2022
Citation: CHEN Beijing, TAN Weijin, WANG Yiting, et al., “Distinguishing Between Natural and GAN-Generated Face Images by Combining Global and Local Features,” Chinese Journal of Electronics, vol. 31, no. 1, pp. 59-67, 2022, doi: 10.1049/cje.2020.00.372

Distinguishing Between Natural and GAN-Generated Face Images by Combining Global and Local Features

doi: 10.1049/cje.2020.00.372
Funds:  This work was supported by the National Natural Science Foundation of China (62072251), NUIST Students’ Platform for Innovation and Entrepreneurship Training Program (202110300022Z), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) Fund
More Information
  • Author Bios:

    (corresponding author) received the Ph.D. degree in computer science from Southeast University, Nanjing, China, in 2011. He is now a Professor in the School of Computer, Nanjing University of Information Science and Technology, China. His research interests include color image processing, image forensics, image watermarking, and pattern recognition. He serves as an Editorial Board Member of the Journal of Mathematical Imaging and Vision. (Email: nbutimage@126.com)

    received the M.S. degree in computer science and technology from Nanjing University of Information Science and Technology, Nanjing, China, in 2011. His research interests include image forensics and image processing.

    received the B.S. degree in safety engineering from Nanjing University of Information Science and Technology, Nanjing, China, in 2019. She is now pursuing the Ph.D. degree with the Warwick Manufacturing Group, University of Warwick, UK. Her research interests include machine learning and image processing.

    received the Ph.D. degree in computer science from the Chinese Academy of Sciences, Beijing, China, in 2005. She is currently a Professor with the Center for Machine Vision and Signal Analysis, University of Oulu, Finland. She is a Fellow of the IAPR and has authored or coauthored more than 240 papers in journals and conferences. Her current research interests include image and video descriptors, facial expression and micro-expression recognition, and person identification.

  • Received Date: 2020-11-06
  • Accepted Date: 2021-07-05
  • Available Online: 2021-08-19
  • Publish Date: 2022-01-05
  • Abstract: With the development of face image synthesis and generation technology based on generative adversarial networks (GANs), determining whether a given face image is natural or GAN-generated has become a research hotspot. However, the generalization capability of existing algorithms still needs improvement, so this paper proposes a more general detection algorithm. First, learning on important local areas that contain many facial key-points is strengthened by combining global and local features. Second, metric learning based on the ArcFace loss is applied to extract common and discriminative features. Finally, the extracted features are fed into a classification module to detect GAN-generated faces. Experiments are conducted on two publicly available natural-face datasets (CelebA and FFHQ) and seven GAN-generated datasets. The results demonstrate that the proposed algorithm achieves better generalization than state-of-the-art algorithms, with an average detection accuracy above 0.99. Moreover, the proposed algorithm is robust against additional attacks such as Gaussian blur and Gaussian noise addition.
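    The following is a minimal PyTorch-style sketch of the two ideas named in the abstract: global-local feature fusion and the ArcFace additive angular margin loss. It is illustrative only and is not the authors' released code; the 512-dimensional embedding, the scale of 30 and margin of 0.5, and all class, function, and variable names (ArcFaceLoss, GlobalLocalDetector, global_cnn, local_cnn) are assumptions made for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArcFaceLoss(nn.Module):
        # Additive angular margin (ArcFace) loss for the two-class
        # natural vs. GAN-generated decision; scale/margin values are assumed.
        def __init__(self, feat_dim=512, num_classes=2, scale=30.0, margin=0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
            nn.init.xavier_uniform_(self.weight)
            self.s, self.m = scale, margin

        def forward(self, features, labels):
            # cosine similarity between L2-normalised embeddings and class centres
            cosine = F.linear(F.normalize(features), F.normalize(self.weight))
            theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
            # add the angular margin m only to the angle of the true class
            one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
            logits = self.s * torch.cos(theta + self.m * one_hot)
            return F.cross_entropy(logits, labels)

    class GlobalLocalDetector(nn.Module):
        # Two-branch feature extractor: one CNN on the whole face (global)
        # and one on a crop around the facial key-points (local); the two
        # embeddings are fused before the ArcFace / classification head.
        def __init__(self, global_cnn, local_cnn, feat_dim=512):
            super().__init__()
            self.global_cnn, self.local_cnn = global_cnn, local_cnn
            self.fuse = nn.Linear(2 * feat_dim, feat_dim)

        def forward(self, full_face, keypoint_crop):
            g = self.global_cnn(full_face)      # global embedding
            l = self.local_cnn(keypoint_crop)   # local (key-point area) embedding
            return self.fuse(torch.cat([g, l], dim=1))

    In training, the fused embedding from GlobalLocalDetector would be passed to ArcFaceLoss together with the natural/GAN label; at test time the same embedding feeds the classification module described in the abstract.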