Large-scale Language Model Rescoring on Long-form Data
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Saved in:
Main authors: | , , , , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | ICASSP 2023 - 2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP). In this work, we study the impact of Large-scale Language Models (LLMs) on
Automated Speech Recognition (ASR) of YouTube videos, which we use as a source
for long-form ASR. We demonstrate up to an 8% relative reduction in Word Error
Rate (WER) on US English (en-us) and code-switched Indian English (en-in)
long-form ASR test sets, and a reduction of up to 30% relative in Salient Term
Error Rate (STER), over a strong first-pass baseline that uses a maximum-entropy-based
language model. Improved lattice processing that yields a lattice
with a proper (non-tree) digraph topology and carries context from the 1-best
hypothesis of the previous segment(s) results in significant wins in rescoring
with LLMs. We also find that the gains in performance from combining
LLMs trained on vast quantities of available data (such as C4) with conventional
neural LMs are additive and significantly outperform a strong first-pass
baseline with a maximum-entropy LM.
Copyright 2023 IEEE. Personal use of this material is permitted. Permission
from IEEE must be obtained for all other uses, in any current or future media,
including reprinting/republishing this material for advertising or promotional
purposes, creating new collective works, for resale or redistribution to
servers or lists, or reuse of any copyrighted component of this work in other
works. |
DOI: | 10.48550/arxiv.2306.08133 |
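The abstract reports results as *relative* reductions in Word Error Rate (WER) and Salient Term Error Rate. As a minimal sketch of what those metrics mean (the sentences and scores below are hypothetical, not from the paper):

```python
# Hypothetical illustration of WER and relative error-rate reduction,
# the metrics used in the abstract. Not code from the paper.

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def relative_reduction(baseline: float, improved: float) -> float:
    """Relative reduction: e.g. a baseline WER of 10.0 improved to 9.2
    is an 8% relative reduction, matching the phrasing in the abstract."""
    return (baseline - improved) / baseline

# One hypothesis with 1 substitution out of 4 reference words -> WER 0.25.
print(wer("play the next video", "play the nest video"))
# Hypothetical baseline vs. rescored WER, approximately 0.08 (8% relative).
print(relative_reduction(10.0, 9.2))
```

The same `relative_reduction` arithmetic applies to the 30% relative STER figure; only the underlying error counts differ.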