Non-Autoregressive Transformer for Speech Recognition
Published in: IEEE Signal Processing Letters, 2021, Vol. 28, pp. 121-125
Format: Article
Language: English
Abstract: Very deep transformers outperform conventional bi-directional long short-term memory networks for automatic speech recognition (ASR) by a significant margin. However, being autoregressive models, their computational complexity is still a prohibitive factor for deployment in production systems. To address this problem, we study two non-autoregressive transformer structures for ASR: the Audio-Conditional Masked Language Model (A-CMLM) and the Audio-Factorized Masked Language Model (A-FMLM). During training, decoder input tokens are randomly replaced by special mask tokens, and the network is optimized to predict the masked tokens from both the unmasked context tokens and the input speech. During inference, decoding starts from all masked tokens, and the network iteratively predicts the missing tokens based on partial results. A new decoding strategy is proposed as an example, which proceeds from the most confident predictions to the rest. Results on the Mandarin (AISHELL), Japanese (CSJ), and English (LibriSpeech) benchmarks show that training such a non-autoregressive network for ASR is promising. On AISHELL in particular, the proposed method outperformed the Kaldi ASR system and matched the performance of a state-of-the-art autoregressive transformer with a 7× speedup. (A minimal sketch of the iterative decoding follows the record below.)
ISSN: 1070-9908 (print), 1558-2361 (electronic)
DOI: 10.1109/LSP.2020.3044547
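
The abstract's inference procedure (start fully masked, predict all positions in parallel, commit the most confident predictions, repeat) lends itself to a short illustration. The sketch below is not the authors' code: the `decoder` callable and its signature, `MASK_ID`, the fixed output length `out_len`, and the linear unmasking schedule are all assumptions made for illustration.

```python
import numpy as np

MASK_ID = 0  # hypothetical id of the special <mask> token

def mask_predict_decode(decoder, audio_feats, out_len, n_iter=3):
    """Iterative, confidence-first decoding (sketch).

    `decoder(audio_feats, tokens)` is assumed to return an
    (out_len, vocab_size) array of token probabilities conditioned on
    the speech input and the partially unmasked token sequence.
    """
    tokens = np.full(out_len, MASK_ID, dtype=np.int64)  # start all-masked
    for it in range(n_iter):
        probs = decoder(audio_feats, tokens)
        pred = probs.argmax(axis=-1)   # best token at each position
        conf = probs.max(axis=-1)      # confidence of that token
        # linearly shrink the number of positions left masked,
        # reaching zero on the final pass
        n_masked = out_len * (n_iter - 1 - it) // n_iter
        tokens = pred.copy()           # commit every prediction, then
        if n_masked > 0:               # re-mask the least confident ones
            tokens[np.argsort(conf)[:n_masked]] = MASK_ID
    return tokens
```

Because each pass predicts all positions in parallel, the cost grows with the small fixed number of passes rather than with the output length, which is where the reported speedup over autoregressive decoding comes from.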