A bi‐stream transformer for single‐image dehazing

Bibliographic Details
Published in: ETRI Journal, 2024-11
Authors: Wang, Mingrui; Yan, Jinqiang; Wan, Chaoying; Yang, Guowei; Yu, Teng
Format: Article
Language: English
Online access: Full text
Description
Abstract: Deep-learning methods, such as encoder–decoder networks, have achieved impressive results in image dehazing. However, these methods often rely solely on synthesized training data, which limits their generalizability to real-world hazy images. To leverage prior knowledge of haze properties, we propose a bi-encoder structure that integrates a prior-based encoder into a traditional encoder–decoder network. The features from both encoders are fused using a feature enhancement module, and transformer blocks are adopted instead of convolutions to model local feature associations. Experimental results demonstrate that our method surpasses state-of-the-art methods on both synthesized and real hazy scenes. We therefore believe that our method will be a useful supplement to the collection of current artificial intelligence models and will benefit engineering applications in computer vision.
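To make the described architecture concrete, the following is a minimal sketch of a bi-encoder dehazing network of the kind the abstract outlines. It is our own illustrative reconstruction, not the authors' code: the prior input (a dark-channel estimate), the channel sizes, the attention-gated fusion module, and all class and function names here are assumptions introduced only for illustration.

```python
# Minimal sketch (illustrative assumption, not the paper's implementation):
# one encoder takes the hazy RGB image, a second "prior" encoder takes a
# haze-related prior map (here, a dark-channel estimate), their features are
# fused by a hypothetical feature-enhancement module, and transformer blocks
# process the fused features before a convolutional decoder reconstructs the image.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dark_channel(x, patch=15):
    # Dark channel prior: per-pixel minimum over RGB, then a local minimum filter.
    dc = x.min(dim=1, keepdim=True).values
    return -F.max_pool2d(-dc, kernel_size=patch, stride=1, padding=patch // 2)

class ConvEncoder(nn.Module):
    def __init__(self, in_ch, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class FeatureEnhancement(nn.Module):
    # Hypothetical fusion module: concatenate both encoders' features and
    # reweight the fused map with a channel-attention gate.
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
    def forward(self, f_img, f_prior):
        f = self.fuse(torch.cat([f_img, f_prior], dim=1))
        return f * self.gate(f)

class BiEncoderDehazer(nn.Module):
    def __init__(self, ch=32, heads=4, depth=2):
        super().__init__()
        self.enc_img = ConvEncoder(3, ch)      # image branch
        self.enc_prior = ConvEncoder(1, ch)    # prior branch
        self.fusion = FeatureEnhancement(2 * ch)
        layer = nn.TransformerEncoderLayer(d_model=2 * ch, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, hazy):
        f_img = self.enc_img(hazy)
        f_prior = self.enc_prior(dark_channel(hazy))
        f = self.fusion(f_img, f_prior)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)          # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(f)

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)                       # toy hazy image
    print(BiEncoderDehazer()(x).shape)                 # torch.Size([1, 3, 64, 64])
```

The sketch only illustrates the data flow implied by the abstract (prior-based second encoder, feature fusion, transformer blocks, decoder); the paper's actual modules, priors, and training losses are described in the full text.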
ISSN: 1225-6463; 2233-7326
DOI: 10.4218/etrij.2024-0037