Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We present a novel conversational-context aware end-to-end speech recognizer based on a gated neural network that incorporates conversational-context/word/speech embeddings. Unlike conventional speech recognition models, our model learns longer conversational-context information that spans across sentences and is consequently better at recognizing long conversations. Specifically, we propose to use text-based external word and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end framework, yielding a significant improvement in word error rate with better conversational-context representation. We evaluate the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models.
DOI: 10.48550/arxiv.1906.11604
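
The abstract describes a gated mechanism that fuses external conversational-context embeddings (fastText or BERT sentence vectors) with the end-to-end recognizer's internal representations. Below is a minimal PyTorch sketch of one plausible form of such a gated fusion layer; the class name, dimensions, and the exact gating equation are illustrative assumptions rather than the authors' published architecture.

```python
# Minimal sketch (PyTorch) of a gated fusion layer that combines a decoder
# hidden state with an external conversational-context embedding (e.g., a
# fastText or BERT sentence vector). Names, dimensions, and the gating form
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn


class GatedContextFusion(nn.Module):
    def __init__(self, hidden_dim: int, context_dim: int):
        super().__init__()
        # Project the external context embedding into the decoder's space.
        self.context_proj = nn.Linear(context_dim, hidden_dim)
        # Gate computed from the concatenated decoder state and projected context.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, decoder_state: torch.Tensor, context_emb: torch.Tensor) -> torch.Tensor:
        # decoder_state: (batch, hidden_dim); context_emb: (batch, context_dim)
        context = torch.tanh(self.context_proj(context_emb))
        g = torch.sigmoid(self.gate(torch.cat([decoder_state, context], dim=-1)))
        # Element-wise gate controls how much conversational context is mixed in.
        return g * decoder_state + (1.0 - g) * context


# Example usage with a BERT-sized context embedding (768) and a 512-dim decoder state.
fusion = GatedContextFusion(hidden_dim=512, context_dim=768)
h = torch.randn(4, 512)   # decoder states for a batch of 4 decoding steps
c = torch.randn(4, 768)   # sentence embeddings of preceding utterances
fused = fusion(h, c)      # (4, 512), passed on to the output/softmax layer
```

The gated sum lets the model learn, per dimension and per step, whether to rely on the acoustic/decoder state or on the longer conversational context, which is the intuition behind the fusion described in the abstract.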