Transferability of CNN models for GAN-generated face detection

Bibliographic details
Published in: Multimedia Tools and Applications, 2024-03, Vol. 83 (33), p. 79815-79831
Authors: Aieprasert, Thanapat; Mahdlang, Yada; Pansiri, Chadaya; Sae-Bae, Napa; Khomkham, Banphatree
Format: Article
Language: English
Online access: Full text
Description

Abstract: With the advancement of Generative Adversarial Networks (GANs), face images generated by models such as AttGAN have become more realistic, making it increasingly difficult to distinguish fake faces from real ones. In this paper, we explore the transferability of pretrained Convolutional Neural Networks (CNNs) for the task of detecting fake face images generated by the AttGAN model. Specifically, we investigate the effectiveness of pretrained ResNet-50 and VGG-19 models, trained on the ImageNet and VGGFace datasets, at extracting useful features for classifying a given face image as genuine or fake. The performance of the pretrained models is evaluated in terms of accuracy, precision, and recall. Our experimental results demonstrate the potential of the pretrained ResNet-50 model with ImageNet weights for detecting fake face images generated by AttGAN and highlight its transferability for this challenging task. That is, a unified model based on the pretrained ResNet-50 with ImageNet weights, designed to detect fake face images with various attribute modifications, achieved an average precision of 96.9% and a recall of 97.2%. In comparison, the model developed specifically for detecting fake face images with a single attribute modification achieved a precision of 96.7% and a recall of 96.9% on the LFW dataset. The study demonstrates that employing a unified model for attribute detection in facial analysis tasks yields promising results, showcasing its potential as a simpler and more efficient alternative to developing separate models for individual attributes.
ISSN: 1380-7501
eISSN: 1573-7721
DOI: 10.1007/s11042-024-18664-4