Towards domain adaptation underwater image enhancement and restoration


Full description

Bibliographic details
Published in: Multimedia Systems, 2024-04, Vol. 30 (2), Article 62
Main authors: Yang, Chao; Jiang, Longyu; Li, Zhicheng; Huang, Jianxing
Format: Article
Language: English
Online access: Full text
Description
Abstract: Currently, deep convolutional neural networks have made significant research progress in the field of underwater image enhancement and restoration. However, most existing methods use fixed-scale convolutional kernels, which easily overfit in practice and therefore adapt poorly across domains. In this paper, we propose an underwater image enhancement and restoration network based on an encoder–decoder framework that focuses on extracting generic features of degraded underwater images, yielding better restoration performance with domain adaptation. We first propose an Atrous spatial attention module, which uses Atrous convolution to expand the image receptive field and cooperates with a spatial attention mechanism to accurately localize foggy regions in the image. Then, a feature aggregation method called Cross-Scale Skip connection is used to effectively fuse global features rich in spatial location information with local features, integrating them into the decoder to ensure that recovered areas are consistent with the surrounding pixels. Finally, to bring the recovered image closer to the ground-truth image, a novel weighted Euclidean color distance replaces the L1 distance as the reconstruction loss. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance and is highly adaptable across settings.
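The Atrous (dilated) convolution mentioned in the abstract enlarges the receptive field without adding parameters: a k×k kernel with dilation rate r covers a (k−1)·r+1 window. As a rough illustration only (not the paper's implementation, whose details the record does not give), a naive NumPy sketch:

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """Naive 2-D atrous (dilated) cross-correlation.

    A k x k kernel with dilation `rate` samples the input on a grid
    with spacing `rate`, so it covers a (k-1)*rate + 1 receptive
    field while keeping only k*k weights.
    x: (H, W) array, kernel: (k, k) array; 'valid' padding.
    """
    k = kernel.shape[0]
    span = (k - 1) * rate + 1          # effective receptive field
    H, W = x.shape
    out = np.zeros((H - span + 1, W - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the dilated grid and weight it by the kernel.
            patch = x[i:i + span:rate, j:j + span:rate]
            out[i, j] = (patch * kernel).sum()
    return out
```

With rate=1 this reduces to an ordinary valid-mode cross-correlation; in a real network one would use `torch.nn.Conv2d(..., dilation=rate)` instead of an explicit loop.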
ISSN: 0942-4962, 1432-1882
DOI: 10.1007/s00530-023-01246-z