TAGNet: Learning Configurable Context Pathways for Semantic Segmentation


Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-02, Vol. 45 (2), p. 2475-2491
Main Authors: Lin, Di; Shen, Dingguo; Ji, Yuanfeng; Shen, Siting; Xie, Mingrui; Feng, Wei; Huang, Hui
Format: Article
Language: English
Online Access: Order full text
Description
Summary: State-of-the-art semantic segmentation methods capture the relationships between pixels to facilitate contextual information exchange. Advanced methods use fixed pathways for context exchange, lacking the flexibility to harness the most relevant context for each pixel. In this paper, we present Configurable Context Pathways (CCPs), a novel model for establishing pathways that augment contextual information. In contrast to previous pathway models, CCPs are learned, leveraging configurable regions to form information flows between pairs of pixels. We propose TAGNet to adaptively configure the regions, which span the entire image space, driven by the relationships between remote pixels. The information flows along the pathways are then updated gradually with the information provided by sequences of configurable regions, forming more powerful contextual information. We extensively evaluate the traveling, adaption, and gathering (TAG) stages of our network on public benchmarks, demonstrating that all of the stages improve segmentation accuracy and help surpass state-of-the-art results. The code package is available at: https://github.com/dilincv/TAGNet.
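
The abstract describes context being gathered along learned pathways of configurable regions that connect pairs of pixels. As a rough illustration of that general idea only (not the authors' TAGNet implementation, which is available at the GitHub link above), the PyTorch-style sketch below predicts, for each pixel, a short sequence of sampling locations spanning the image and gradually folds the features gathered at those locations into the pixel's representation. All layer choices, names, and hyper-parameters here are assumptions made for illustration.

# Hypothetical sketch of a "configurable context pathway"-style module.
# Not the authors' TAGNet code; it only mirrors the data flow sketched in the
# abstract: per-pixel pathways of sampling locations, gradually accumulated context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextPathwaySketch(nn.Module):
    def __init__(self, channels: int, num_steps: int = 4):
        super().__init__()
        self.num_steps = num_steps
        # Predict (x, y) offsets in normalized [-1, 1] coordinates for each pathway step.
        self.offset_head = nn.Conv2d(channels, 2 * num_steps, kernel_size=1)
        # Gradually update the accumulated context, one pathway step at a time.
        self.update = nn.GRUCell(channels, channels)
        # Fuse the accumulated context back into the per-pixel features.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Base grid of normalized pixel coordinates, shape (1, h, w, 2).
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        base = torch.stack((xs, ys), dim=-1).unsqueeze(0)

        # Offsets for every step, shape (b, num_steps, h, w, 2), bounded by tanh.
        offsets = torch.tanh(self.offset_head(x)).view(b, self.num_steps, 2, h, w)
        offsets = offsets.permute(0, 1, 3, 4, 2)

        # Accumulated context, one state vector per pixel.
        state = x.permute(0, 2, 3, 1).reshape(b * h * w, c)
        for k in range(self.num_steps):
            grid = (base + offsets[:, k]).clamp(-1, 1)            # (b, h, w, 2)
            sampled = F.grid_sample(x, grid, align_corners=True)  # (b, c, h, w)
            sampled = sampled.permute(0, 2, 3, 1).reshape(b * h * w, c)
            state = self.update(sampled, state)                   # gradual update

        context = state.view(b, h, w, c).permute(0, 3, 1, 2)
        return self.fuse(torch.cat((x, context), dim=1))

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    out = ContextPathwaySketch(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])

The GRU-style cell stands in for the gradual update of the information flow along a pathway; the actual method configures regions rather than single sampling points and couples its traveling, adaption, and gathering stages, so this sketch conveys only the overall data flow.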
ISSN: 0162-8828, 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2022.3165034