The Synthesis of Unpaired Underwater Images Using a Multistyle Generative Adversarial Network


Detailed Description

Bibliographic Details
Published in: IEEE Access, 2018-01, Vol. 6, pp. 54241-54257
Authors: Li, Na; Zheng, Ziqiang; Zhang, Shaoyong; Yu, Zhibin; Zheng, Haiyong; Zheng, Bing
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Underwater image datasets are crucial in underwater vision research. Because of the strong absorption and scattering effects that occur underwater, ground truth such as depth maps, which can easily be collected in air, is difficult to obtain in underwater environments. To address the lack of underwater ground truth, we propose a trainable end-to-end underwater multistyle generative adversarial network (UMGAN) that takes advantage of a cycle-consistent adversarial network (CycleGAN) and conditional generative adversarial networks. This system can generate multiple realistic underwater images from in-air images using a hybrid adversarial system and an unpaired method. Moreover, through a style classifier and a conditional vector, our model can translate in-air images to underwater images that retain the main content and structural information of the in-air images under specified turbidities or water styles. Furthermore, we define a color loss and include a structural similarity index measure (SSIM) loss so that the system preserves the content and structure of the original in-air images while transferring the image backgrounds from air to water. Using UMGAN, we can take advantage of in-air ground truth and convert the corresponding in-air images into an underwater dataset with multiple water color styles. Our experiments demonstrate that our synthesized underwater images achieve higher image-assessment scores than those of CycleGAN, WaterGAN, StarGAN, AdaIN, and other state-of-the-art methods. We also show that our synthesized underwater images with in-air depths can be applied to depth map estimation for real underwater images.
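The SSIM loss mentioned in the abstract can be illustrated with a minimal sketch. Note this is a simplified global (single-window) SSIM over a whole grayscale image, not the windowed SSIM used in practice, and it is not the authors' implementation; the constants `c1` and `c2` follow the standard SSIM defaults for images scaled to [0, 1]:

```python
import numpy as np

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM loss between two images in [0, 1].

    Returns 1 - SSIM, so identical images give a loss of 0 and the
    loss grows as structural similarity decreases. Real SSIM is
    computed over local windows; this single-window version only
    illustrates the structural term the loss penalizes.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return 1.0 - ssim

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(ssim_loss(img, img))                          # identical images: loss is 0
print(ssim_loss(img, np.clip(img + 0.3, 0.0, 1.0)))  # shifted copy: loss is positive
```

During training, a term like this would be added to the adversarial and color losses so the generator is penalized for distorting the structure of the source in-air image.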
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2870854