OSAGGAN: one-shot unsupervised image-to-image translation using attention-guided generative adversarial networks
Published in: International Journal of Machine Learning and Cybernetics, 2023-10, Vol. 14 (10), pp. 3471-3482
Authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: This paper proposes a single-image translation method based on attention guidance to address the poor image quality produced by current single-image translation approaches. The model uses a multi-scale pyramid architecture: the input image is first downsampled, and the downsampled image is then fed into an attention-guided generator that translates it from the source domain X to the target domain Y. We introduce an attention module and a Scale-Add (SA) module, which stabilize GAN training and effectively improve image quality. The attention module preserves object contours and details, while the SA module adjusts the image style and injects low-scale detail information. Through extensive experiments and comparisons against several baseline methods on benchmark datasets, we verify the effectiveness of the proposed framework.
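The abstract describes two architectural components: an attention module that gates which regions of the translated image keep source detail, and a Scale-Add (SA) module that blends upsampled coarse-scale output into the finer scale. The paper does not give formulas here, so the following NumPy sketch only illustrates one plausible reading: a sigmoid attention mask blending translated and source features, and a fixed-weight scale-add blend (the real modules are learned and operate on feature maps inside the generator; the function names, the nearest-neighbour upsampling, and the scalar `alpha` are assumptions for illustration).

```python
import numpy as np

def attention_gate(source, translated, attention_logits):
    """Blend translated features with the source using a soft attention
    mask, so object contours/details from the source can be retained.
    All three arguments are arrays of the same shape."""
    mask = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid in (0, 1)
    return mask * translated + (1.0 - mask) * source

def scale_add(coarse, fine, alpha=0.5):
    """Scale-Add (SA) sketch: 2x nearest-neighbour upsample of the
    coarse-scale output, blended into the fine-scale features with a
    scalar weight alpha (a learned parameter in the actual model)."""
    up = coarse.repeat(2, axis=0).repeat(2, axis=1)  # 2x upsample
    return alpha * up + (1.0 - alpha) * fine
```

With strongly positive logits the mask saturates near 1 and the output follows the translated features; strongly negative logits fall back to the source, which is one way an attention module can protect contours the translation would otherwise destroy.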
ISSN: 1868-8071, 1868-808X
DOI: 10.1007/s13042-023-01844-3