GroupTransNet: Group transformer network for RGB-D salient object detection

Bibliographic details
Published in: Neurocomputing (Amsterdam), 2024-08, Vol. 594, p. 127865, Article 127865
Authors: Fang, Xian; Jiang, Mingfeng; Zhu, Jinchao; Shao, Xiuli; Wang, Hongpeng
Format: Article
Language: English
Online access: Full text
Abstract: As an active topic in computer vision, RGB-D salient object detection has witnessed substantial progress. Although existing methods achieve appreciable performance, several challenges remain. The locality of convolutional neural networks demands a sufficiently deep model to obtain a global receptive field, while transformers, despite their strong globality, often fail to capture local characteristics adequately. In addition, the information shared among contextual features is usually overlooked. To address these bottlenecks, we propose a novel group transformer network (GroupTransNet) that learns long-range dependencies across cross-layer features, promoting richer feature expression between high-level and low-level features. Importantly, we softly group the features of the middle three and latter three levels so that each group absorbs the semantic information of the slightly former level. First, the input features are adaptively purified by element-wise operations and a sequential attention mechanism. Then, the intermediate features of different layers are uniformly fused and processed by several transformers in multiple groups. Finally, the output features are clustered within different classes and combined with the underlying features. Extensive experiments demonstrate that GroupTransNet outperforms its competitors and achieves new state-of-the-art performance.
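To make the grouping idea in the abstract concrete, the following is a minimal NumPy sketch, written under our own assumptions rather than taken from the paper: random projection matrices stand in for learned attention weights, the "soft" grouping is approximated as overlapping pairs of adjacent feature levels (each group containing a level and its slightly former neighbour), and all names (`self_attention`, `soft_group_transform`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, rng):
    # single-head scaled dot-product self-attention over a token matrix
    # of shape (num_tokens, channels); random weights are placeholders
    # for what would be learned parameters in the real network
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))          # (n, n) attention map
    return attn @ v                                # (n, d) attended features

def soft_group_transform(levels, rng):
    # overlapping ("soft") groups of adjacent feature levels: tokens from
    # a level and its slightly former neighbour attend to each other, so
    # each group absorbs semantics from the preceding level
    groups = [(i - 1, i) for i in range(1, len(levels))]
    outputs = []
    for g in groups:
        tokens = np.concatenate([levels[i] for i in g], axis=0)
        outputs.append(self_attention(tokens, rng))
    return outputs

rng = np.random.default_rng(0)
# three feature levels, each flattened to (num_tokens, channels)
levels = [rng.standard_normal((4, 8)) for _ in range(3)]
outs = soft_group_transform(levels, rng)
```

In this toy setup the three levels yield two overlapping groups, and each group's output mixes information from both of its member levels through cross-level attention, which is the qualitative effect the soft grouping is meant to achieve.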
ISSN: 0925-2312, 1872-8286 (online)
DOI: 10.1016/j.neucom.2024.127865