MASA: Motion-aware Masked Autoencoder with Semantic Alignment for Sign Language Recognition
| Field | Value |
| --- | --- |
| Main authors | |
| Format | Article |
| Language | eng |
| Subjects | |
| Online access | Order full text |
Abstract:

Sign language recognition (SLR) has long been plagued by insufficient model
representation capabilities. Although current pre-training approaches have
alleviated this dilemma to some extent and yielded promising performance by
employing various pretext tasks on sign pose data, these methods still suffer
from two primary limitations: 1) Explicit motion information is usually
disregarded in previous pretext tasks, leading to partial information loss and
limited representation capability. 2) Previous methods focus on the local
context of a sign pose sequence, without incorporating the guidance of the
global meaning of lexical signs. To this end, we propose a Motion-Aware masked
autoencoder with Semantic Alignment (MASA) that integrates rich motion cues and
global semantic information in a self-supervised learning paradigm for SLR. Our
framework contains two crucial components, i.e., a motion-aware masked
autoencoder (MA) and a momentum semantic alignment module (SA). Specifically,
in MA, we introduce an autoencoder architecture with a motion-aware masking
strategy to reconstruct the motion residuals of masked frames, thereby
explicitly exploring dynamic motion cues in sign pose sequences. Moreover, in
SA, we endow our framework with global semantic awareness by aligning the
embeddings of different augmented samples from the input sequence in a shared latent
space. In this way, our framework can simultaneously learn local motion cues
and global semantic features for comprehensive sign language representation.
Furthermore, we conduct extensive experiments to validate the effectiveness of
our method, achieving new state-of-the-art performance on four public
benchmarks.
DOI: 10.48550/arxiv.2405.20666
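To make the two pretext objectives described in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' released code: it regresses frame-to-frame motion residuals for masked frames (the MA idea) and aligns pooled embeddings of two augmented views against a momentum (EMA) encoder (the SA idea). All module names, hyperparameters (mask ratio, momentum coefficient), the uniform random masking, and the toy temporal-flip augmentation are illustrative assumptions; the paper's actual masking is motion-aware rather than random, and its projection/augmentation details differ.

```python
# Sketch of the two self-supervised losses summarized in the abstract.
# Names and hyperparameters are illustrative, not the authors' implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseEncoder(nn.Module):
    """Placeholder sequence encoder over per-frame pose features."""
    def __init__(self, in_dim, dim=256):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                       # x: (B, T, in_dim)
        return self.backbone(self.proj(x))      # (B, T, dim)


def motion_residual(pose):
    """Frame-to-frame pose difference; zero residual for the first frame."""
    res = pose[:, 1:] - pose[:, :-1]
    return F.pad(res, (0, 0, 1, 0))             # (B, T, in_dim)


def masked_motion_loss(encoder, decoder, pose, mask_ratio=0.5):
    """Mask frames and regress their motion residuals (MA branch, simplified
    to uniform random masking)."""
    B, T, _ = pose.shape
    mask = torch.rand(B, T, device=pose.device) < mask_ratio   # True = masked
    visible = pose * (~mask).unsqueeze(-1)                     # zero-out masked frames
    pred = decoder(encoder(visible))                           # (B, T, in_dim)
    target = motion_residual(pose)
    return F.mse_loss(pred[mask], target[mask])


@torch.no_grad()
def ema_update(online, momentum, m=0.99):
    """Momentum (EMA) update of the target encoder (SA branch)."""
    for p_o, p_m in zip(online.parameters(), momentum.parameters()):
        p_m.mul_(m).add_(p_o, alpha=1.0 - m)


def alignment_loss(online, momentum_enc, view1, view2):
    """Align pooled embeddings of two augmented views in a shared latent space."""
    z1 = F.normalize(online(view1).mean(dim=1), dim=-1)
    with torch.no_grad():
        z2 = F.normalize(momentum_enc(view2).mean(dim=1), dim=-1)
    return (2 - 2 * (z1 * z2).sum(dim=-1)).mean()              # cosine-style loss


if __name__ == "__main__":
    B, T, D = 2, 16, 66                         # e.g. 33 two-dimensional keypoints
    pose = torch.randn(B, T, D)
    enc = PoseEncoder(D)
    dec = nn.Linear(256, D)
    momentum_enc = copy.deepcopy(enc)

    loss = masked_motion_loss(enc, dec, pose) + \
           alignment_loss(enc, momentum_enc, pose, pose.flip(1))  # flip = toy augmentation
    loss.backward()
    ema_update(enc, momentum_enc)
    print(float(loss))
```

The key design point from the abstract is that the reconstruction target is the motion residual rather than the raw pose, so the masked-frame objective carries explicit dynamic cues, while the momentum-aligned branch injects sequence-level (global) semantics; combining both losses is what the framework refers to as learning local motion cues and global semantic features jointly.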