Multi-scale local-temporal similarity fusion for continuous sign language recognition



Bibliographic Details
Published in: Pattern Recognition, 2023-04, Vol. 136, p. 109233, Article 109233
Authors: Xie, Pan; Cui, Zhi; Du, Yao; Zhao, Mengyi; Cui, Jianwei; Wang, Bin; Hu, Xiaohui
Format: Article
Language: English
Abstract:
• We propose a content-aware feature selector (CFS) and a position-aware temporal convolver (PTC) to enhance feature learning.
• We fuse different scales of convolved features with a content-dependent multi-scale aggregator (CMA).
• Our proposed model, mLTSF-Net, achieves state-of-the-art accuracy compared with many competitive baseline models.

Continuous sign language recognition (cSLR) is a publicly significant task that transcribes a sign language video into an ordered gloss sequence. Capturing fine-grained gloss-level details is important, since there is no explicit alignment between sign video frames and the corresponding glosses. Among past works, one promising approach is to adopt a one-dimensional convolutional network (1D-CNN) to temporally fuse the sequential frames. However, CNNs are agnostic to similarity or dissimilarity, and thus cannot capture locally consistent semantics within temporally neighbouring frames. To address this issue, we propose to adaptively fuse local features via temporal similarity. Specifically, we devise a Multi-scale Local-Temporal Similarity Fusion Network (mLTSF-Net) as follows: 1) For a given video frame, we first select its similar neighbours with multi-scale receptive regions to accommodate glosses of different lengths. 2) To ensure temporal consistency, we then use position-aware convolution to temporally convolve each scale of selected frames. 3) To obtain a local-temporally enhanced frame-wise representation, we finally fuse the results of different scales using a content-dependent aggregator. We train our model in an end-to-end fashion, and experimental results on the RWTH-PHOENIX-Weather 2014 dataset (RWTH) demonstrate that our model achieves competitive performance compared with several state-of-the-art models.
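To make the three steps above concrete, the sketch below shows one way similarity-weighted, multi-scale local-temporal fusion can be implemented in PyTorch. It is a minimal illustration based only on the abstract: the cosine-similarity weighting (standing in for the CFS), the plain 1D convolutions (standing in for the position-aware PTC), the softmax scale gate (standing in for the CMA), and all window sizes and shapes are assumptions, not the authors' implementation.

```python
# Hedged sketch of multi-scale local-temporal similarity fusion (not the paper's exact modules).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalTemporalSimilarityFusion(nn.Module):
    def __init__(self, dim: int, scales=(3, 5, 7)):
        super().__init__()
        self.scales = scales
        # one temporal convolver per receptive-field scale (plain Conv1d here;
        # the paper uses a position-aware variant)
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=k, padding=k // 2) for k in scales
        )
        # content-dependent weights over scales (assumed: softmax over a linear projection)
        self.scale_gate = nn.Linear(dim, len(scales))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) frame-wise features
        outputs = []
        for k, conv in zip(self.scales, self.convs):
            pad = k // 2
            xp = F.pad(x, (0, 0, pad, pad))                 # (b, t + 2*pad, d)
            windows = xp.unfold(1, k, 1)                    # (b, t, d, k) local neighbourhoods
            # cosine similarity between each frame and its k neighbours
            centre = F.normalize(x, dim=-1).unsqueeze(-1)   # (b, t, d, 1)
            neigh = F.normalize(windows, dim=2)             # (b, t, d, k)
            sim = (centre * neigh).sum(dim=2)               # (b, t, k)
            weights = F.softmax(sim, dim=-1).unsqueeze(2)   # (b, t, 1, k)
            # soft "selection": similarity-weighted sum of neighbouring frames
            selected = (windows * weights).sum(dim=-1)      # (b, t, d)
            # temporal convolution over the selected features at this scale
            outputs.append(conv(selected.transpose(1, 2)).transpose(1, 2))
        stacked = torch.stack(outputs, dim=-1)              # (b, t, d, n_scales)
        gate = F.softmax(self.scale_gate(x), dim=-1)        # (b, t, n_scales)
        return (stacked * gate.unsqueeze(2)).sum(dim=-1)    # (b, t, d)

# Toy usage: a batch of 2 videos, 40 frames, 512-d frame features.
fusion = LocalTemporalSimilarityFusion(dim=512)
frames = torch.randn(2, 40, 512)
enhanced = fusion(frames)   # (2, 40, 512) locally enhanced frame representations
```

In an end-to-end cSLR pipeline such representations would typically feed a sequence model trained with a CTC-style loss against the gloss sequence; that surrounding pipeline is not shown here.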
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.109233