Rethinking cross-domain semantic relation for few-shot image generation
Saved in:

| Published in: | Applied Intelligence (Dordrecht, Netherlands), 2023-10, Vol. 53 (19), pp. 22391-22404 |
|---|---|
| Main authors: | , , , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
Abstract: | Training well-performing Generative Adversarial Networks (GANs) with limited data remains challenging. Existing methods either require substantial data (over 100 training images) or generate images of low quality and low diversity. To address this problem, we propose a new Cross-domain Semantic Relation (CSR) loss. The CSR loss improves the performance of the generative model by preserving the relationships between instances in the source domain and the generated images. In addition, a perceptual similarity loss and a discriminative contrastive loss are designed to further enrich the diversity of the generated images and stabilize model training. Experiments on nine publicly available few-shot datasets, with comparisons against nine current methods, show that our approach outperforms all baseline methods. Finally, ablation studies on the three proposed loss functions confirm that each is essential for few-shot image generation. Code is available at https://github.com/gouayao/CSR. |
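The record does not reproduce the paper's loss definitions, but the abstract's description of the CSR loss (maintaining the relationships between source-domain instances and generated images) suggests a relation-consistency objective. The sketch below is only an illustrative assumption of that idea in PyTorch: the names relation_logits and csr_like_loss, the cosine-similarity relation measure, the temperature tau, and the KL matching term are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a relation-consistency loss in the spirit of the
# CSR loss described in the abstract. The exact formulation is not given in
# this record; the similarity measure, temperature, and KL matching are assumptions.
import torch
import torch.nn.functional as F


def relation_logits(features: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Scaled pairwise cosine similarities among a batch of feature vectors."""
    f = F.normalize(features, dim=1)   # (N, D) unit-norm features
    return f @ f.t() / tau             # (N, N) relation logits


def csr_like_loss(src_feats: torch.Tensor, gen_feats: torch.Tensor) -> torch.Tensor:
    """Match the generated-image relation distribution to the source-domain one."""
    p_src = F.softmax(relation_logits(src_feats), dim=1)          # target relations
    log_p_gen = F.log_softmax(relation_logits(gen_feats), dim=1)  # predicted relations
    return F.kl_div(log_p_gen, p_src, reduction="batchmean")


# Example usage: features would typically come from a fixed feature extractor
# applied to the source generator's outputs and to the adapted generator's
# outputs for the same batch of latent codes.
src_features = torch.randn(8, 256)
gen_features = torch.randn(8, 256, requires_grad=True)
loss = csr_like_loss(src_features, gen_features)
loss.backward()
```

In a typical few-shot adaptation setup, such a term would be added, with a weighting coefficient, to the usual adversarial objective; the details here are an assumption, not the paper's method.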
ISSN: | 0924-669X (print), 1573-7497 (electronic) |
DOI: | 10.1007/s10489-023-04602-8 |