Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation
Abstract: This paper presents a novel cost aggregation network, called Volumetric
Aggregation with Transformers (VAT), for few-shot segmentation. The use of
transformers can benefit correlation map aggregation through self-attention
over a global receptive field. However, the tokenization of a correlation map
for transformer processing can be detrimental, because the discontinuity at
token boundaries reduces the local context available near the token edges and
decreases inductive bias. To address this problem, we propose a 4D
Convolutional Swin Transformer, where a high-dimensional Swin Transformer is
preceded by a series of small-kernel convolutions that impart local context to
all pixels and introduce convolutional inductive bias. We additionally boost
aggregation performance by applying transformers within a pyramidal structure,
where aggregation at a coarser level guides aggregation at a finer level. Noise
in the transformer output is then filtered in the subsequent decoder with the
help of the query's appearance embedding. With this model, a new
state-of-the-art is set for all the standard benchmarks in few-shot
segmentation. It is shown that VAT attains state-of-the-art performance for
semantic correspondence as well, where cost aggregation also plays a central
role.
DOI: 10.48550/arxiv.2207.10866
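To make the aggregation idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch (not the authors' implementation) of a single block in that spirit: a small-kernel convolution over the 4D correlation map, approximated here by two separable 2D convolutions, followed by Swin-style self-attention over non-overlapping 4D windows. The module names, the separable-convolution approximation, the window size, and the tensor shapes are assumptions made for brevity.

```python
import torch
import torch.nn as nn


class Separable4DConv(nn.Module):
    """Approximates a small-kernel 4D convolution over a correlation map of shape
    (B, C, Hq, Wq, Hs, Ws) with two 2D convolutions: one over the query spatial
    dimensions and one over the support spatial dimensions (an assumption for
    illustration; the paper describes true high-dimensional convolutions)."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_query = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.conv_support = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, corr):
        b, c, hq, wq, hs, ws = corr.shape
        # Convolve over query dims: fold support dims into the batch dimension.
        x = corr.permute(0, 4, 5, 1, 2, 3).reshape(b * hs * ws, c, hq, wq)
        x = self.conv_query(x)
        x = x.reshape(b, hs, ws, c, hq, wq).permute(0, 3, 4, 5, 1, 2)
        # Convolve over support dims: fold query dims into the batch dimension.
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b * hq * wq, c, hs, ws)
        x = self.conv_support(x)
        return x.reshape(b, hq, wq, c, hs, ws).permute(0, 3, 1, 2, 4, 5)


class WindowedCorrelationAttention(nn.Module):
    """Swin-style self-attention over non-overlapping 4D windows of the correlation
    map; each (w, w, w, w) window is flattened into a token sequence so attention
    stays local while the preceding convolutions provide context across windows."""

    def __init__(self, channels, window=4, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, corr):
        b, c, hq, wq, hs, ws = corr.shape
        w = self.window
        # Partition the 4D volume into windows and flatten each window into tokens.
        x = corr.reshape(b, c, hq // w, w, wq // w, w, hs // w, w, ws // w, w)
        x = x.permute(0, 2, 4, 6, 8, 3, 5, 7, 9, 1)
        x = x.reshape(-1, w ** 4, c)
        y = self.norm(x)
        y, _ = self.attn(y, y, y)
        x = x + y  # residual connection
        # Reverse the window partition back to (B, C, Hq, Wq, Hs, Ws).
        x = x.reshape(b, hq // w, wq // w, hs // w, ws // w, w, w, w, w, c)
        x = x.permute(0, 9, 1, 5, 2, 6, 3, 7, 4, 8)
        return x.reshape(b, c, hq, wq, hs, ws)


if __name__ == "__main__":
    corr = torch.randn(1, 16, 8, 8, 8, 8)  # toy 4D correlation map
    block = nn.Sequential(Separable4DConv(16), WindowedCorrelationAttention(16))
    print(block(corr).shape)  # torch.Size([1, 16, 8, 8, 8, 8])
```

In the full model described by the abstract, such blocks would be applied at several pyramid levels, with coarse-level outputs guiding finer levels and a decoder filtering the result using the query's appearance embedding; those components are omitted here.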