Unsupervised Image to Image Translation With Additional Mask

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, p. 110522-110529
Main Authors: Choi, Hyun-Tae, Sohn, Bong-Soo, Hong, Byung-Woo
Format: Article
Language: English
Description
Abstract: With the development of deep learning, the performance of image-to-image translation has steadily improved. However, most image-to-image translation models rely on implicit methods that do not explain why the models alter specific parts of the original input images. In this work, we assume that we can control the extent to which a model translates the input images through an explicit method. We explicitly create masks that are added to the input images, aiming to highlight the difference between the inputs and the translated images. Since limiting the area of the masks directly affects the shape of the translated images, we can adjust the model through a simple regularization parameter. Our proposed method demonstrates that a simple regularization parameter, which regularizes the generated masks, can control which regions the model changes and which it preserves. Furthermore, by adjusting the degree of this regularization parameter, we can generate diverse translated images from a single original image.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3322146
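As a rough illustration of the idea summarized in the abstract above, the sketch below shows one way a mask-regularized generator could be set up in PyTorch. It is a hypothetical reconstruction, not the authors' code: the class name MaskedGenerator, the mask blending scheme, and the weight lambda_mask are assumptions; only the general idea, penalizing the generated mask so that a single regularization parameter controls how much of the input image is changed, comes from the abstract.

```python
# Hypothetical sketch (not the paper's released code): a generator that predicts
# both translated content and a per-pixel mask, with an L1 penalty on the mask
# so a single weight controls how much of the input image is allowed to change.
import torch
import torch.nn as nn


class MaskedGenerator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        # Toy backbone; a real model would be a full encoder-decoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels + 1, kernel_size=3, padding=1),  # +1 mask channel
        )

    def forward(self, x):
        out = self.backbone(x)
        content = torch.tanh(out[:, :-1])   # candidate translated image
        mask = torch.sigmoid(out[:, -1:])   # per-pixel mask in [0, 1]
        # Only masked regions take the translated content; the rest keeps the input.
        translated = mask * content + (1.0 - mask) * x
        return translated, mask


def generator_loss(adversarial_term, mask, lambda_mask=0.1):
    # lambda_mask is the assumed regularization weight: a larger value shrinks
    # the mask area, so fewer pixels of the original image are altered.
    return adversarial_term + lambda_mask * mask.abs().mean()


# Usage sketch: varying lambda_mask yields differently translated images from
# the same input, mirroring the diversity claim in the abstract.
if __name__ == "__main__":
    g = MaskedGenerator()
    x = torch.rand(1, 3, 64, 64) * 2 - 1    # dummy input in [-1, 1]
    y, m = g(x)
    loss = generator_loss(adversarial_term=torch.tensor(0.0), mask=m, lambda_mask=0.1)
    print(y.shape, m.shape, float(loss))
```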