Enhanced Unpaired Image-to-Image Translation via Transformation in Saliency Domain

Bibliographic details
Published in: IEEE Access, 2023, Vol. 11, pp. 137495-137505
Main authors: Shibasaki, Kei; Ikehara, Masaaki
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Unpaired image-to-image translation is the task of converting images between domains using unpaired datasets. The primary goal is to translate a source image into an image aligned with the target domain while preserving its fundamental content. Existing studies have introduced effective techniques for translating images with unpaired datasets, focusing on preserving this fundamental content. However, these techniques have limitations in handling significant shape changes and in preserving backgrounds that should not be transformed. The proposed method addresses these problems by utilizing the saliency domain for translation and by simultaneously learning the translation in the saliency domain as well as in the image domain. The saliency domain represents the shape and position of the main object. Explicitly learning transformations within the saliency domain improves the network's ability to transform shapes while maintaining the background. Experimental results show that the proposed method successfully addresses these problems of unpaired image-to-image translation and achieves metrics competitive with existing methods.
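Based only on the abstract, a minimal conceptual sketch of the joint image-domain and saliency-domain training idea might look like the PyTorch-style code below. All names (DualDomainGenerator, training_losses), the toy architectures, and the loss terms are illustrative assumptions, not the authors' implementation.

# Conceptual sketch (not the paper's code): a generator trained jointly in the
# image domain and in the saliency domain, with a background-preservation term.
import torch
import torch.nn as nn

class DualDomainGenerator(nn.Module):
    """Translates a source image and its saliency map toward the target domain."""
    def __init__(self, channels=64):
        super().__init__()
        # Image branch: RGB image -> translated RGB image.
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh(),
        )
        # Saliency branch: 1-channel saliency map -> translated saliency map.
        self.saliency_branch = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, saliency):
        return self.image_branch(image), self.saliency_branch(saliency)

def training_losses(gen, disc_img, disc_sal, src_img, src_sal):
    """Adversarial losses in both domains plus a background-preservation term."""
    fake_img, fake_sal = gen(src_img, src_sal)
    adv = nn.BCEWithLogitsLoss()
    # Image-domain adversarial loss: discriminator judges translated images.
    pred_img = disc_img(fake_img)
    loss_img = adv(pred_img, torch.ones_like(pred_img))
    # Saliency-domain adversarial loss: discriminator judges translated saliency maps.
    pred_sal = disc_sal(fake_sal)
    loss_sal = adv(pred_sal, torch.ones_like(pred_sal))
    # Keep non-salient (background) regions of the output close to the input.
    background = 1.0 - src_sal
    loss_bg = ((fake_img - src_img).abs() * background).mean()
    return loss_img + loss_sal + loss_bg

In this reading, the explicit saliency-domain loss is what pushes the network to change the object's shape where the saliency map says the object is, while the masked background term keeps untransformed regions intact.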
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3338629