OMGD-StarGAN: improvements to boost StarGAN v2 performance


Bibliographic Details
Published in: Evolving Systems, 2024-04, Vol. 15 (2), pp. 455-467
Authors: Li, Rui; Gu, Jintao
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: A good image editing model should learn the mappings between styles from different domains, produce generated images that are both high-quality and diverse, and scale well across domains. At the same time, given the importance of multi-device deployment, especially of models on lightweight devices, lightweight optimization of models is an essential and critical task. Motivated by these requirements, a new approach to optimizing and improving an existing base model, named OMGD-StarGAN, is proposed. Building on StarGAN v2 and an online multi-granularity knowledge distillation (OMGD) algorithm, it combines a PatchGAN discriminator, the DynamicD dynamic training strategy, a ResNet-style generator, and modulated convolutions. Comparative experiments show that the proposed model reduces computational cost while improving the quality and diversity of the generated images.
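The record gives no implementation details, but the "modulated convolution" the abstract mentions is the StyleGAN2-style weight modulation/demodulation operation. A minimal NumPy sketch of that operation follows; all shapes and names here are illustrative assumptions, not taken from the paper (a 1x1 kernel is used so the convolution reduces to a channel-mixing matrix multiply):

```python
import numpy as np

def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    """StyleGAN2-style modulated 1x1 convolution (illustrative sketch).

    x:      input feature map, shape (in_ch, H, W)
    weight: conv weight, shape (out_ch, in_ch) -- 1x1 kernel for simplicity
    style:  per-input-channel scales from the style vector, shape (in_ch,)
    """
    # Modulate: scale each input channel of the weight by the style.
    w = weight * style[np.newaxis, :]                    # (out_ch, in_ch)
    if demodulate:
        # Demodulate: renormalize each output filter so the expected
        # output activation variance stays at unit scale.
        d = 1.0 / np.sqrt(np.sum(w ** 2, axis=1) + eps)  # (out_ch,)
        w = w * d[:, np.newaxis]
    # Apply the 1x1 convolution as a matrix multiply over channels.
    in_ch, H, W = x.shape
    out = w @ x.reshape(in_ch, H * W)
    return out.reshape(-1, H, W)

# Hypothetical shapes for illustration.
x = np.random.randn(8, 4, 4)
weight = np.random.randn(16, 8)
style = np.random.randn(8)
y = modulated_conv2d(x, weight, style)
print(y.shape)  # → (16, 4, 4)
```

In a full generator this would replace the usual normalization-plus-affine style injection, folding the style directly into the convolution weights, which is part of what makes the operation attractive for a distilled, lightweight student model.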
ISSN: 1868-6478
EISSN: 1868-6486
DOI: 10.1007/s12530-023-09521-0