Progressive deep video dehazing without explicit alignment estimation

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2023-05, Vol. 53 (10), p. 12437-12447
Authors: Li, Runde; Chen, Lei
Format: Article
Language: English
Description
Abstract: Video dehazing involves two main tasks: how to align adjacent frames to the reference frame, and how to restore the reference frame. Some papers adopt explicit approaches (e.g., Markov random fields, optical flow, deformable convolution, or 3D convolution) to align neighboring frames with the reference frame in feature space or image space, and then apply various restoration methods to obtain the final dehazing results. In this paper, we propose a progressive alignment and restoration method for video dehazing, which fuses the alignment and restoration processes and simplifies the complexity of the network model. The alignment process aligns consecutive neighboring frames stage by stage without using optical flow estimation or deformable convolution. The restoration process is not only carried out within the alignment process but also uses a refinement network to improve the dehazing performance of the whole network. The proposed architecture consists of four fusion networks and one refinement network. To reduce the number of parameters, the three fusion networks in the first fusion stage share the same parameters. Extensive experiments demonstrate that the proposed video dehazing method achieves outstanding performance against state-of-the-art methods.
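
The following is a minimal PyTorch sketch of the two-stage progressive fusion idea described in the abstract, not the authors' exact architecture: it assumes a sliding window of five input frames, three weight-shared fusion networks that merge overlapping frame triplets in the first stage, a fourth fusion network that merges the intermediate results in the second stage, and a refinement network that produces the final dehazed reference frame. The layer widths, depths, and frame grouping are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuses a concatenation of frames (or intermediate results) into one image."""
    def __init__(self, in_frames, feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 * in_frames, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, 3, 3, padding=1),
        )

    def forward(self, frames):            # frames: list of (B, 3, H, W) tensors
        return self.body(torch.cat(frames, dim=1))

class ProgressiveDehazer(nn.Module):
    def __init__(self):
        super().__init__()
        # One module reused three times = three stage-1 fusion networks
        # sharing the same parameters (as stated in the abstract).
        self.stage1 = FusionNet(in_frames=3)
        self.stage2 = FusionNet(in_frames=3)
        self.refine = FusionNet(in_frames=1)

    def forward(self, f):                 # f: list of 5 consecutive hazy frames
        # Stage 1: fuse overlapping triplets of neighboring frames.
        a = self.stage1([f[0], f[1], f[2]])
        b = self.stage1([f[1], f[2], f[3]])
        c = self.stage1([f[2], f[3], f[4]])
        # Stage 2: fuse the intermediate results toward the reference frame f[2].
        fused = self.stage2([a, b, c])
        # Refinement: polish the fused estimate into the final dehazed frame.
        return self.refine([fused])

# Usage (hypothetical shapes):
# frames = [torch.randn(1, 3, 128, 128) for _ in range(5)]
# out = ProgressiveDehazer()(frames)     # (1, 3, 128, 128) dehazed reference frame
```

In this sketch the alignment is implicit: no optical flow or deformable convolution is computed, and the fusion convolutions are simply trained to merge the misaligned neighbors stage by stage, which mirrors the progressive alignment-and-restoration idea of the paper.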
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-022-04158-z