Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
Abstract: We propose a simple yet effective approach for few-shot action recognition, emphasizing the disentanglement of motion and appearance representations. By harnessing recent progress in tracking, specifically point trajectories and self-supervised representation learning, we build trajectory-aligned tokens (TATs) that capture motion and appearance information. This approach significantly reduces the data requirements while retaining essential information. To process these representations, we use a Masked Space-time Transformer that effectively learns to aggregate information to facilitate few-shot action recognition. We demonstrate state-of-the-art results on few-shot action recognition across multiple datasets. Our project page is available at https://www.cs.umd.edu/~pulkit/tats
DOI: 10.48550/arxiv.2407.18249
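
To make the idea in the abstract concrete, below is a minimal sketch (not the authors' released implementation) of how trajectory-aligned tokens could be formed: per-frame feature maps from a self-supervised backbone are sampled along point trajectories, and the resulting tokens are aggregated by a standard transformer encoder with padding masks standing in for the masking scheme. All tensor shapes, the sampling scheme, and the encoder configuration here are assumptions for illustration.

```python
# Illustrative sketch: trajectory-aligned tokens + a masked space-time transformer.
# Shapes, sampling, and encoder settings are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

def trajectory_aligned_tokens(feats, tracks):
    """Sample appearance features along point trajectories.

    feats:  (T, C, H, W)  per-frame feature maps (e.g. from a self-supervised backbone)
    tracks: (N, T, 2)     per-point (x, y) trajectories, normalized to [-1, 1]
    returns (N, T, C)     one token per tracked point per frame
    """
    # grid_sample expects a sampling grid of shape (T, N, 1, 2)
    grid = tracks.permute(1, 0, 2).unsqueeze(2)                 # (T, N, 1, 2)
    sampled = F.grid_sample(feats, grid, align_corners=False)   # (T, C, N, 1)
    return sampled.squeeze(-1).permute(2, 0, 1)                 # (N, T, C)

class SpaceTimeEncoder(nn.Module):
    """Minimal transformer over the (points x time) token sequence."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens, key_padding_mask=None):
        # tokens: (B, N*T, dim); the mask can hide padded or occluded trajectory points
        x = self.encoder(tokens, src_key_padding_mask=key_padding_mask)
        return x.mean(dim=1)  # clip-level embedding for few-shot matching

# Usage with random tensors standing in for real features and tracks
feats = torch.randn(8, 256, 14, 14)        # 8 frames of 256-d feature maps
tracks = torch.rand(32, 8, 2) * 2 - 1      # 32 tracked points, coords in [-1, 1]
tats = trajectory_aligned_tokens(feats, tracks)           # (32, 8, 256)
clip_emb = SpaceTimeEncoder()(tats.reshape(1, -1, 256))   # (1, 256)
```

In a few-shot setup, clip-level embeddings like `clip_emb` would then be compared between support and query videos (e.g. by cosine similarity) to classify the query action.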