Efficient CTC Regularization via Coarse Labels for End-to-End Speech Translation
Format: Article
Language: English
Abstract: For end-to-end speech translation, regularizing the encoder with the
Connectionist Temporal Classification (CTC) objective, using the source
transcript or target translation as labels, can greatly improve quality
metrics. However, CTC demands an extra prediction layer over the vocabulary
space, introducing non-negligible model parameters and computational overhead,
even though this layer is typically not used at inference. In this paper, we
re-examine whether CTC regularization truly requires genuine vocabulary labels
and explore strategies to reduce the CTC label space, targeting improved
efficiency without quality degradation. We propose coarse labeling for CTC
(CoLaCTC), which merges vocabulary labels via simple heuristic rules, such as
truncation, division, or modulo (MOD) operations. Despite its simplicity, our
experiments on 4 source and 8 target languages show that CoLaCTC, particularly
with MOD, can compress the label space aggressively to 256 and even further,
gaining training efficiency (a 1.18x~1.77x speedup depending on the original
vocabulary size) while still delivering comparable or better performance than
the CTC baseline. We also show that CoLaCTC generalizes to CTC regularization
regardless of whether transcripts or translations are used for labeling.
DOI: 10.48550/arxiv.2302.10871
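
As a rough illustration (not the authors' code), the following PyTorch sketch
shows how a coarse label space could replace genuine vocabulary labels for CTC
regularization. The MOD rule follows the abstract; the exact definitions of the
truncation and division rules, the vocabulary size, and all names here
(coarse_size, ctc_head, cola_ctc_loss, ...) are assumptions for illustration.

```python
# Sketch of CoLaCTC-style coarse labeling for CTC regularization.
# Assumed: coarse space of 256 labels, a 32k vocabulary, 512-d encoder states.
import torch
import torch.nn.functional as F

coarse_size = 256            # compressed label space from the abstract
blank_id = coarse_size       # reserve the last index for the CTC blank
vocab_size = 32000           # hypothetical original vocabulary size
encoder_dim = 512            # hypothetical encoder hidden size

# CTC prediction head over the coarse space: 257 outputs instead of ~32k.
ctc_head = torch.nn.Linear(encoder_dim, coarse_size + 1)

def trunc_rule(ids):  # truncation: ids beyond the coarse space share a bucket (assumed)
    return torch.clamp(ids, max=coarse_size - 1)

def div_rule(ids):    # division: neighboring ids share a bucket (assumed)
    return ids // -(-vocab_size // coarse_size)   # ceil division

def mod_rule(ids):    # modulo (MOD): ids cycle through the coarse space
    return ids % coarse_size

def cola_ctc_loss(encoder_out, out_lengths, labels, label_lengths):
    """CTC regularization loss over coarse labels.

    encoder_out: (T, N, encoder_dim) encoder states
    labels:      (N, S) vocabulary ids of the transcript or translation
    """
    log_probs = F.log_softmax(ctc_head(encoder_out), dim=-1)  # (T, N, 257)
    targets = mod_rule(labels)                                # coarse labels
    return F.ctc_loss(log_probs, targets, out_lengths, label_lengths,
                      blank=blank_id, zero_infinity=True)

# Toy usage: 50 encoder frames, batch of 2, label sequences of length 10.
enc = torch.randn(50, 2, encoder_dim)
loss = cola_ctc_loss(enc,
                     torch.full((2,), 50, dtype=torch.long),
                     torch.randint(0, vocab_size, (2, 10)),
                     torch.full((2,), 10, dtype=torch.long))
```

Because the head projects to 257 classes rather than the full vocabulary, its
parameter count and softmax cost shrink accordingly, which is where the
reported training speedup would come from.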