Semantic-Aware Dehazing Network With Adaptive Feature Fusion

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2023-01, Vol. 53, No. 1, pp. 454-467
Authors: Zhang, Shengdong; Ren, Wenqi; Tan, Xin; Wang, Zhi-Jie; Liu, Yong; Zhang, Jingang; Zhang, Xiaoqin; Cao, Xiaochun
Format: Article
Language: English
Description
Abstract: Although convolutional neural networks (CNNs) have achieved high-quality reconstruction for single image dehazing, recovering natural and realistic dehazed results remains challenging due to semantic confusion in the hazy scene. In this article, we show that textures can be recovered faithfully by incorporating a semantic prior into the dehazing network, since objects in haze-free images tend to exhibit characteristic shapes, textures, and colors. We propose a semantic-aware dehazing network (SDNet) in which the semantic prior serves as a color constraint for dehazing, helping the network recover a reasonable scene configuration. In addition, we design a densely connected block to capture both global and local information for dehazing and semantic prior estimation. To eliminate the unnatural appearance of some objects, we adaptively fuse features from shallow and deep layers. Experimental results demonstrate that the proposed model performs favorably against state-of-the-art single image dehazing approaches.
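
To make the two architectural ideas named in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of a densely connected block and an adaptive shallow/deep feature-fusion module. It is not the authors' published implementation: the module names (DenseBlock, AdaptiveFusion) and all hyperparameters (growth rate, layer count, gating design) are illustrative assumptions.

```python
# Hypothetical sketch only; not the SDNet code from the paper.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Densely connected block: each conv sees all earlier feature maps,
    mixing local detail with progressively larger receptive fields."""

    def __init__(self, channels: int, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                )
            )
            in_ch += growth_rate
        # 1x1 conv compresses the concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


class AdaptiveFusion(nn.Module):
    """Adaptively fuses shallow (detail-rich) and deep (semantic) features
    with per-pixel, per-channel weights predicted from both inputs."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # fusion weight in [0, 1]
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([shallow, deep], dim=1))
        return w * shallow + (1.0 - w) * deep


if __name__ == "__main__":
    x_shallow = torch.randn(1, 64, 128, 128)
    x_deep = torch.randn(1, 64, 128, 128)
    fused = AdaptiveFusion(64)(x_shallow, DenseBlock(64)(x_deep))
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

The gating design above is one common way to realize "adaptive" fusion (a learned convex combination of the two branches); the paper may use a different weighting or concatenation scheme.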
ISSN: 2168-2267 (print), 2168-2275 (electronic)
DOI: 10.1109/TCYB.2021.3124231