A single-image GAN model using self-attention mechanism and DenseNets

Bibliographic details
Published in: Neurocomputing (Amsterdam) 2024-09, Vol. 596, p. 127873, Article 127873
Main authors: Yildiz, Eyyup; Yuksel, Mehmet Erkan; Sevgen, Selcuk
Format: Article
Language: English
Online access: Full text
Description
Summary: Image generation from a single natural image using generative adversarial networks (GANs) has recently attracted extensive attention due to GANs' practical ability to produce photo-realistic images and their potential applications in computer vision. However, learning a powerful generative model that generates realistic, high-quality images from only a single natural image remains a challenging problem. Training GANs in limited-data regimes often causes issues such as overfitting, memorization, training divergence, poor image quality, and long training times. In this study, we investigated state-of-the-art GAN models for computer vision tasks and conducted several experiments to understand in depth the challenges of learning a powerful generative model. We introduced a novel unconditional GAN model that produces realistic, high-quality, diverse images based on a single training image. In our model, we employed a self-attention mechanism (SAM), a densely connected convolutional network (DenseNet) architecture, and a relativistic average least-squares GAN with gradient penalty (RaLSGAN-GP) for both the generator and discriminator networks to perform image generation tasks better. SAM controls the level of global contextual information; it is complementary to convolutions for large feature maps, gives the generator and discriminator more capacity to capture long-range dependencies in feature maps, and mitigates the long training time and low image quality issues. DenseNet connects each layer to every other layer in a feed-forward manner to ensure maximum information flow between layers in the network; it is highly parameter-efficient, requires less computation to achieve high performance, provides improved information and gradient flow throughout the network for easy training, and has a regularizing effect that reduces overfitting in image generation. RaLSGAN-GP further improves data generation quality and the stability of our model at no extra computational cost and provides much more stable training. Thanks to the appropriate combination of SAM, DenseNet, and RaLSGAN-GP, our model successfully generates realistic, high-quality, diverse images while maintaining the global context of the training image. We conducted experiments, user studies, and quantitative model evaluations to test our model's performance and compared it with previous well-known models on three datasets (Places, LSUN, ImageNet). We demonstrated our model's capability in im…
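The RaLSGAN-GP objective named in the summary is a known combination from the literature (relativistic average GANs, Jolicoeur-Martineau 2018; gradient penalty, Gulrajani et al. 2017). As a reader's reference, and not necessarily the exact formulation used in the paper, the standard discriminator and generator losses are, with C the discriminator output, x_r real samples, and x_f generated samples:

\mathcal{L}_D = \mathbb{E}_{x_r}\!\left[(C(x_r) - \mathbb{E}_{x_f}[C(x_f)] - 1)^2\right]
             + \mathbb{E}_{x_f}\!\left[(C(x_f) - \mathbb{E}_{x_r}[C(x_r)] + 1)^2\right]
             + \lambda\, \mathbb{E}_{\hat{x}}\!\left[(\lVert \nabla_{\hat{x}} C(\hat{x}) \rVert_2 - 1)^2\right]

\mathcal{L}_G = \mathbb{E}_{x_f}\!\left[(C(x_f) - \mathbb{E}_{x_r}[C(x_r)] - 1)^2\right]
             + \mathbb{E}_{x_r}\!\left[(C(x_r) - \mathbb{E}_{x_f}[C(x_f)] + 1)^2\right]

where \hat{x} is sampled uniformly on lines between real and generated images and \lambda weights the penalty. The DenseNet connectivity the summary describes is likewise the standard concatenation rule x_\ell = H_\ell([x_0, x_1, \ldots, x_{\ell-1}]) of Huang et al. (2017).

The self-attention mechanism described (complementary to convolutions, capturing long-range dependencies in feature maps) matches the SAGAN-style attention block; the following PyTorch sketch is a minimal illustration under that assumption, not the authors' implementation (the module name and the channels // 8 query/key reduction are choices made here, not taken from the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over a 2-D feature map (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned gate, zero at start

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = F.softmax(torch.bmm(q, k), dim=-1)     # (b, h*w, h*w): each position attends to all others
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                   # residual blend of global context and input

Because gamma is initialized to zero, the block behaves as a plain convolutional path early in training and only gradually mixes in global context, which is consistent with the summary's claim that attention complements convolutions on large feature maps.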
ISSN: 0925-2312
eISSN: 1872-8286
DOI: 10.1016/j.neucom.2024.127873