Structure-aware sign language recognition with spatial–temporal scene graph

Bibliographic Details
Published in: Information Processing & Management, 2024-11, Vol. 61 (6), p. 103850, Article 103850
Authors: Lin, Shiquan; Xiao, Zhengye; Wang, Lixin; Wan, Xiuan; Ni, Lan; Fang, Yuchun
Format: Article
Language: English
Description
Abstract: Continuous sign language recognition (CSLR) is essential for the social participation of deaf individuals. The structural information of sign language motion units plays a crucial role in semantic representation. However, most existing CSLR methods treat motion units as a single, undifferentiated appearance in the video sequence, neglecting to exploit or explain structural information in their models. This paper proposes a Structure-Aware Graph Convolutional Neural Network (SA-GNN) model for CSLR. The model constructs a spatial–temporal scene graph that explicitly captures the spatial structure and temporal variation of motion units. Furthermore, to train the SA-GNN effectively, we propose an adaptive bootstrap strategy that strengthens weak supervision with dense pseudo labels. This strategy incorporates a confidence cross-entropy loss to adaptively adjust the distribution of pseudo labels. Extensive experiments validate the effectiveness of the proposed method, which achieves competitive results on popular CSLR datasets.
ISSN: 0306-4573
DOI: 10.1016/j.ipm.2024.103850
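The abstract mentions a confidence cross-entropy loss applied to dense (frame-level) pseudo labels, but it does not give the exact formulation. The sketch below is purely illustrative of the general idea of confidence-weighted cross-entropy over pseudo labels; the function name, the fixed confidence threshold, and the down-weighting rule are assumptions made here, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def confidence_cross_entropy(logits, pseudo_labels, threshold=0.9):
    # Illustrative sketch (not the paper's formulation):
    # logits:        (T, C) frame-level class scores from the recognition model
    # pseudo_labels: (T,)   dense frame-level pseudo-label indices (long dtype)
    # threshold:     assumed cutoff; pseudo labels predicted with probability
    #                below it are down-weighted instead of trusted fully
    probs = F.softmax(logits, dim=-1)                                    # (T, C)
    conf = probs.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)        # (T,)
    weights = torch.where(conf >= threshold,
                          torch.ones_like(conf),
                          conf.detach())                                  # soft weight for uncertain labels
    per_frame = F.cross_entropy(logits, pseudo_labels, reduction="none")  # (T,)
    return (weights * per_frame).sum() / weights.sum().clamp(min=1e-6)

# Toy usage: 40 frames, 1200 gloss classes, random pseudo labels
loss = confidence_cross_entropy(torch.randn(40, 1200),
                                torch.randint(0, 1200, (40,)))
```

The intent of such a weighting is that frames whose pseudo labels the model itself finds implausible contribute less to the gradient, which is one plausible reading of "adjusting the distribution of pseudo labels adaptively"; the published method should be consulted for the actual mechanism.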