Decoder-only Architecture for Streaming End-to-end Speech Recognition
Format: | Article |
---|---|
Language: | eng |
Abstract: | Decoder-only language models (LMs) have been successfully adopted for
speech-processing tasks, including automatic speech recognition (ASR). These LMs
offer ample expressiveness and run efficiently, which makes them well suited to
streaming ASR applications. In this work, we propose to use a decoder-only
architecture for blockwise streaming ASR. In our approach, speech features are
compressed using the CTC output and context embedding produced by a blockwise
speech subnetwork, and are sequentially provided as prompts to the decoder. The
decoder estimates the output tokens promptly at each block. To this end, we also
propose a novel training scheme that uses random-length prefix prompts to make
the model robust to the truncated prompts caused by blockwise processing. An
experimental comparison shows that our proposed decoder-only streaming ASR
achieves an 8% relative word error rate reduction on the LibriSpeech test-other
set while being twice as fast as the baseline model. |
---|---|
DOI: | 10.48550/arxiv.2406.16107 |
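
The abstract describes two mechanisms: compressed speech features supplied blockwise as prompts to a decoder-only LM, and training with random-length prefix prompts so the model tolerates the truncated prompts that arise in streaming. The sketch below is a minimal, hypothetical PyTorch illustration of the second mechanism under assumed shapes and a toy stand-in decoder; it is not the paper's implementation, and the `ToyDecoderOnlyASR` and `random_prefix_prompt` names are invented for this example.

```python
# Minimal, hypothetical sketch of the random-length prefix-prompt training idea
# described in the abstract. It is NOT the authors' implementation: the toy
# decoder, tensor shapes, and randomly generated "compressed speech prompts"
# (standing in for CTC-compressed blockwise encoder outputs) are all assumptions.
import random

import torch
import torch.nn as nn


class ToyDecoderOnlyASR(nn.Module):
    """Stand-in decoder-only LM: speech-prompt embeddings + text tokens -> logits."""

    def __init__(self, vocab_size: int = 100, d_model: int = 64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, prompt_emb: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # The (possibly truncated) speech prompt is prepended to the token
        # embeddings, and the whole sequence is processed with a causal mask,
        # as a decoder-only model would consume it left to right.
        x = torch.cat([prompt_emb, self.token_emb(token_ids)], dim=1)
        seq_len = x.size(1)
        causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=causal_mask)
        # Return logits only for the text positions (after the prompt).
        return self.lm_head(h[:, prompt_emb.size(1):])


def random_prefix_prompt(prompt_emb: torch.Tensor, min_len: int = 1) -> torch.Tensor:
    """Keep a random-length prefix of the prompt, simulating the truncated
    blockwise prompts seen during streaming inference."""
    keep = random.randint(min_len, prompt_emb.size(1))
    return prompt_emb[:, :keep]


# Illustrative training step with dummy data (a real system would obtain the
# prompt from CTC-compressed blockwise encoder outputs and use proper BOS/EOS
# handling; both are omitted here for brevity).
model = ToyDecoderOnlyASR()
prompt = torch.randn(2, 20, 64)           # compressed speech features as prompts
tokens = torch.randint(0, 100, (2, 8))    # target transcript token ids
logits = model(random_prefix_prompt(prompt), tokens[:, :-1])  # teacher forcing
loss = nn.functional.cross_entropy(logits.reshape(-1, 100), tokens[:, 1:].reshape(-1))
loss.backward()
```

At inference time, the same decoder would instead receive prompts that grow block by block, emitting output tokens as each block arrives.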