Implicit Pairs for Boosting Unpaired Image-to-Image Translation

Bibliographic Details
Published in: arXiv.org, 2020-12
Authors: Yiftach Ginger, Dov Danon, Hadar Averbuch-Elor, Daniel Cohen-Or
Format: Article
Language: English
Online access: Full text
Description
Abstract: In image-to-image translation the goal is to learn a mapping from one image domain to another. In supervised approaches the mapping is learned from paired samples, but collecting large sets of image pairs is often prohibitively expensive or simply infeasible. As a result, in recent years more attention has been given to techniques that learn the mapping from unpaired sets. In our work, we show that injecting implicit pairs into unpaired sets strengthens the mapping between the two domains, improves the compatibility of their distributions, and boosts the performance of unsupervised techniques by over 14% across several measures. The effectiveness of implicit pairs is further demonstrated with pseudo-pairs, i.e., paired samples that only approximate a real pair. We demonstrate the effect of these approximate implicit samples on image-to-image translation problems where such pseudo-pairs can be synthesized in one direction but not in the other. We further show that pseudo-pairs are significantly more effective when used as implicit pairs in an unpaired setting than when used explicitly in a paired setting.
ISSN: 2331-8422
DOI: 10.48550/arxiv.1904.06913
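
To make the central idea of the abstract concrete, the sketch below shows one possible way pseudo-pairs could be mixed into an otherwise unpaired training stream. This is not the authors' pipeline: the function name build_training_stream, the helper make_pseudo_target (standing in for a one-directional synthesis operator such as desaturation or downscaling), and the pseudo_ratio parameter are all illustrative assumptions.

import random
from typing import Callable, List, Tuple

def build_training_stream(
    domain_a: List[str],
    domain_b: List[str],
    make_pseudo_target: Callable[[str], str],  # hypothetical one-directional synthesis operator
    pseudo_ratio: float = 0.3,                 # assumed fraction of implicit pairs to inject
    seed: int = 0,
) -> List[Tuple[str, str, bool]]:
    """Assemble (source, target, is_implicit_pair) triples for unpaired training.

    Most triples pair an A-domain sample with a random, unrelated B-domain
    sample, as in the usual unpaired setting. With probability `pseudo_ratio`,
    the target is instead synthesized from the source itself, yielding a
    pseudo-pair that only approximates a real correspondence.
    """
    rng = random.Random(seed)
    stream = []
    for a in domain_a:
        if rng.random() < pseudo_ratio:
            stream.append((a, make_pseudo_target(a), True))   # implicit (pseudo) pair
        else:
            stream.append((a, rng.choice(domain_b), False))   # ordinary unpaired sample
    rng.shuffle(stream)
    return stream

if __name__ == "__main__":
    # Toy usage with file-name stand-ins for images.
    photos = [f"photo_{i}.png" for i in range(10)]
    sketches = [f"sketch_{i}.png" for i in range(10)]
    stream = build_training_stream(
        photos, sketches, lambda a: a.replace("photo", "pseudo_sketch")
    )
    for src, tgt, implicit in stream[:5]:
        print(src, "->", tgt, "(implicit pair)" if implicit else "(unpaired)")

In an actual system the A and B entries would be image tensors and the resulting stream would feed a standard unpaired translation objective; the sketch only illustrates the mixing step, under the stated assumptions.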