A Tri-Attention fusion guided multi-modal segmentation network


Full description

Bibliographic Details
Published in: Pattern Recognition 2022-04, Vol. 124, p. 108417, Article 108417
Main Authors: Zhou, Tongxue; Ruan, Su; Vera, Pierre; Canu, Stéphane
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract:
•A novel correlation description block is introduced to discover the latent multi-source correlation between modalities.
•A constraint based on the correlation, using KL divergence, is proposed to aid the segmentation network in extracting correlated feature representations for better segmentation.
•A tri-attention fusion strategy is proposed to recalibrate the feature representation along modality-attention, spatial-attention, and correlation-attention paths.
•The first 3D multi-modal brain tumor segmentation network guided by tri-attention fusion is proposed.

In multi-modal segmentation, the correlation between different modalities can be exploited to improve the segmentation results. Considering the correlation between different MR modalities, in this paper we propose a multi-modality segmentation network guided by a novel tri-attention fusion. Our network includes N model-independent encoding paths for N image sources, a tri-attention fusion block, a dual-attention fusion block, and a decoding path. The model-independent encoding paths capture modality-specific features from the N modalities. Because not all features extracted by the encoders are useful for segmentation, we propose a dual-attention-based fusion to re-weight the features along the modality and spatial paths, which suppresses less informative features and emphasizes the useful ones for each modality at different positions. Since a strong correlation exists between modalities, we extend the dual-attention fusion block with a correlation attention module to form the tri-attention fusion block. In the correlation attention module, a correlation description block first learns the correlation between modalities, and a constraint based on this correlation then guides the network to learn the latent correlated features that are more relevant for segmentation.
Finally, the fused feature representation is projected by the decoder to obtain the segmentation results. Experimental results on the BraTS 2018 dataset for brain tumor segmentation demonstrate the effectiveness of the proposed method.
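The abstract's two key mechanisms, re-weighting per-modality features along modality and spatial paths, and a KL-divergence term that couples modality streams, can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the shapes, the softmax/sigmoid gating choices, and all function names are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention_fusion(features):
    """Re-weight per-modality feature maps along modality and spatial paths.

    features: array of shape (N, C, H, W), one feature map per modality.
    Returns a fused map of shape (C, H, W).
    """
    # Modality attention: one scalar weight per modality via global pooling,
    # so less informative modalities are suppressed as a whole.
    modality_w = softmax(features.mean(axis=(1, 2, 3)))                      # (N,)
    # Spatial attention: a sigmoid gate per position via channel pooling,
    # emphasizing useful positions within each modality.
    spatial_w = 1.0 / (1.0 + np.exp(-features.mean(axis=1, keepdims=True)))  # (N, 1, H, W)
    reweighted = features * modality_w[:, None, None, None] * spatial_w
    return reweighted.sum(axis=0)                                            # (C, H, W)

def kl_correlation_loss(p_feat, q_feat, eps=1e-8):
    """KL divergence between two feature maps normalized to distributions,
    usable as a correlation constraint between two modality streams."""
    p = p_feat.ravel().astype(float)
    q = q_feat.ravel().astype(float)
    # Shift to positive and normalize so both are valid distributions.
    p = (p - p.min()) + eps
    q = (q - q.min()) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```

In the paper's pipeline the attention weights are learned rather than derived from pooling as above, and the KL constraint is applied between the learned correlated representations; the sketch only shows where each term would act.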
ISSN: 0031-3203; 1873-5142
DOI: 10.1016/j.patcog.2021.108417