TCGNet: Type-Correlation Guidance for Salient Object Detection

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, July 2024, Vol. 25, No. 7, pp. 6633-6644
Authors: Liu, Yi; Zhou, Ling; Wu, Gengshen; Xu, Shoukun; Han, Jungong
Format: Article
Language: English
Abstract: Contrast and part-whole relations induced by deep neural networks such as Convolutional Neural Networks (CNNs) and Capsule Networks (CapsNets) are known as two types of semantic cues for deep salient object detection. However, few works pay attention to their complementary properties in the context of saliency prediction. In this paper, we probe into this issue and propose a Type-Correlation Guidance Network (TCGNet) for salient object detection. Specifically, a Multi-Type Cue Correlation (MTCC) module covering CNNs and CapsNets is designed to extract contrast and part-whole relational semantics, respectively. Using MTCC, two correlation matrices containing complementary information are computed from these two types of semantics. In turn, these correlation matrices are used to guide the learning of the above semantics so as to generate better saliency cues. In addition, a Type Interaction Attention (TIA) module is developed to let the semantics from CNNs and CapsNets interact for saliency prediction. Experiments and analysis on five benchmarks show the superiority of the proposed approach. Code has been released at https://github.com/liuyi1989/TCGNet.
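The linked repository is the authoritative implementation. Purely as an illustrative sketch of the two ideas the abstract names, correlation matrices computed across the two semantic types that guide each branch, and an attention-based interaction of the two types for prediction, a minimal PyTorch mock-up could look as follows. All class names, feature shapes, and the specific gating scheme below are assumptions made for illustration, not the paper's actual modules.

import torch
import torch.nn as nn


class CrossTypeCorrelationGuidance(nn.Module):
    """Toy stand-in for MTCC: correlate CNN (contrast) features with
    CapsNet (part-whole) features and use the resulting correlation
    matrices to guide both branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj_cnn = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_caps = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_cnn: torch.Tensor, f_caps: torch.Tensor):
        b, c, h, w = f_cnn.shape
        q = self.proj_cnn(f_cnn).flatten(2)    # (B, C, HW)
        k = self.proj_caps(f_caps).flatten(2)  # (B, C, HW)
        # Spatial correlation matrices between the two types, (B, HW, HW),
        # row-normalized with softmax.
        corr_cnn2caps = torch.softmax(q.transpose(1, 2) @ k, dim=-1)
        corr_caps2cnn = torch.softmax(k.transpose(1, 2) @ q, dim=-1)
        # Re-aggregate each branch's features under the cross-type
        # correlations, then add them back as residual guidance.
        g_cnn = (f_cnn.flatten(2) @ corr_caps2cnn).view(b, c, h, w)
        g_caps = (f_caps.flatten(2) @ corr_cnn2caps).view(b, c, h, w)
        return f_cnn + g_cnn, f_caps + g_caps


class TypeInteractionAttention(nn.Module):
    """Toy stand-in for TIA: a channel gate decides, per channel, how much
    of each feature type feeds the saliency head."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f_cnn: torch.Tensor, f_caps: torch.Tensor):
        a = self.gate(torch.cat([f_cnn, f_caps], dim=1))  # (B, C, 1, 1)
        fused = a * f_cnn + (1 - a) * f_caps
        return self.head(fused)  # single-channel saliency logits


if __name__ == "__main__":
    f_cnn = torch.randn(2, 64, 32, 32)   # pretend CNN contrast features
    f_caps = torch.randn(2, 64, 32, 32)  # pretend CapsNet part-whole features
    g_cnn, g_caps = CrossTypeCorrelationGuidance(64)(f_cnn, f_caps)
    saliency = torch.sigmoid(TypeInteractionAttention(64)(g_cnn, g_caps))
    print(saliency.shape)  # torch.Size([2, 1, 32, 32])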
ISSN: 1524-9050; 1558-0016
DOI: 10.1109/TITS.2023.3342811