Improvement in Sign Language Translation Using Text CTC Alignment
Format: Article
Language: English
Online access: Order full text
Abstract: Current sign language translation (SLT) approaches often rely on
gloss-based supervision with Connectionist Temporal Classification (CTC),
limiting their ability to handle non-monotonic alignments between sign
language video and spoken text. In this work, we propose a novel method
combining joint CTC/Attention and transfer learning. The joint CTC/Attention
introduces hierarchical encoding and integrates CTC with the attention
mechanism during decoding, effectively managing both monotonic and
non-monotonic alignments. Meanwhile, transfer learning helps bridge the
modality gap between vision and language in SLT. Experimental results on two
widely adopted benchmarks, RWTH-PHOENIX-Weather 2014T and CSL-Daily, show
that our method achieves results comparable to the state of the art and
outperforms the pure-attention baseline. Additionally, this work opens a new
door for future research into gloss-free SLT using text-based CTC alignment.
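Joint CTC/Attention decoding of the kind described in the abstract is typically realized by interpolating the log-probabilities of the two branches when scoring hypotheses. The sketch below is an illustrative simplification, not the paper's implementation: the function names and the interpolation weight `ctc_weight` are hypothetical, and the actual decoder would apply this inside a beam search over partial hypotheses.

```python
import math


def joint_ctc_attention_score(log_p_ctc, log_p_att, ctc_weight=0.3):
    """Interpolate CTC and attention log-probabilities for one hypothesis.

    ctc_weight is a hypothetical tuning value (0 = pure attention,
    1 = pure CTC); the paper's actual setting is not given in the abstract.
    """
    return ctc_weight * log_p_ctc + (1.0 - ctc_weight) * log_p_att


def rescore_candidates(candidates, ctc_weight=0.3):
    """Rank candidate extensions by joint score, best first.

    candidates: list of (token, log_p_ctc, log_p_att) tuples, as a beam
    search step might produce them.
    """
    return sorted(
        candidates,
        key=lambda c: joint_ctc_attention_score(c[1], c[2], ctc_weight),
        reverse=True,
    )


# Example: the attention branch strongly prefers "a", the CTC branch
# prefers "b"; with a small ctc_weight the attention preference wins.
beam = [
    ("a", math.log(0.1), math.log(0.8)),
    ("b", math.log(0.6), math.log(0.3)),
]
ranking = rescore_candidates(beam, ctc_weight=0.3)
```

With `ctc_weight=0.3` the attention-favored token "a" ranks first, while `ctc_weight=1.0` would flip the ranking toward the CTC-favored "b"; this is the lever that lets the decoder trade monotonic (CTC) alignment evidence against non-monotonic (attention) evidence.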
DOI: 10.48550/arxiv.2412.09014