Attention fusion network for multi-spectral semantic segmentation


Full Description

Bibliographic Details
Published in: Pattern Recognition Letters 2021-06, Vol. 146, p. 179-184
Main Authors: Xu, Jiangtao; Lu, Kaige; Wang, Han
Format: Article
Language: English
Online Access: Full text
Description
Abstract:
• A co-attention mechanism is proposed to guide the fusion of multi-spectral information.
• The co-attention mechanism improves the accuracy of feature classification and localization.
• The fusion of RGB and IR multi-spectral inputs improves semantic segmentation of night scenes.

To improve the accuracy of multi-spectral semantic segmentation, an attention fusion network (AFNet) based on deep learning is proposed. Unlike current methods, AFNet uses a co-attention mechanism: an attention fusion module calculates the spatial correlation between the red-green-blue (RGB) and infrared (IR) image feature maps to guide the fusion of features from the different spectra. This approach enhances the feature representation and makes full use of the complementary characteristics of multi-spectral sources. The proposed network is tested on RGB-IR datasets and compared with relevant state-of-the-art networks. The experimental analyses show that the proposed AFNet improves multi-spectral semantic segmentation results, with good visual definition and high accuracy in classification and localization.
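The abstract describes a co-attention module that computes spatial correlation between RGB and IR feature maps and uses it to guide fusion. The paper's exact attention fusion module is not reproduced here; the following is a minimal NumPy sketch of a generic spatial co-attention fusion under that description, where all function and variable names (`co_attention_fuse`, `f_rgb`, `f_ir`) are illustrative, not the authors' API.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_fuse(f_rgb, f_ir):
    """Illustrative co-attention fusion (NOT the paper's exact AFM).

    f_rgb, f_ir: feature maps of shape (C, H, W) from the RGB and IR
    branches. Returns a fused feature map of the same shape.
    """
    c, h, w = f_rgb.shape
    rgb = f_rgb.reshape(c, h * w)        # (C, HW)
    ir = f_ir.reshape(c, h * w)          # (C, HW)

    # Spatial correlation between every RGB position and every IR position.
    affinity = rgb.T @ ir                # (HW, HW)

    # Attention weights in both directions (rows sum to 1).
    attn_over_ir = softmax(affinity, axis=1)     # RGB positions attend to IR
    attn_over_rgb = softmax(affinity.T, axis=1)  # IR positions attend to RGB

    # Cross-attended features aligned to the other modality's grid.
    ir_to_rgb = ir @ attn_over_ir.T      # (C, HW)
    rgb_to_ir = rgb @ attn_over_rgb.T    # (C, HW)

    # Average the two attention-guided streams with the raw features.
    fused = (rgb + ir_to_rgb + ir + rgb_to_ir) / 2.0
    return fused.reshape(c, h, w)
```

In a real network the affinity would typically be computed on learned projections of the feature maps and the fusion would feed subsequent convolutional layers; this sketch only shows the spatial-correlation-guided exchange between the two spectra.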
ISSN:0167-8655
1872-7344
DOI:10.1016/j.patrec.2021.03.015