Visual Information Matters for ASR Error Correction
Saved in:
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Aiming to improve Automatic Speech Recognition (ASR) outputs with a post-processing step, ASR error correction (EC) techniques have been widely developed because they make efficient use of parallel text data. Previous work focuses mainly on text and/or speech data, which limits the performance gain in cases where not only text and speech but other modalities, such as visual information, are critical for EC. The challenges are twofold: first, previous work has not emphasized visual information, so it remains rarely explored; second, the community lacks a high-quality benchmark where visual information matters for EC models. This paper therefore provides 1) simple yet effective methods, namely gated fusion and image captions as prompts, to incorporate visual information into EC; and 2) a large-scale benchmark dataset, Visual-ASR-EC, in which each training item consists of visual, speech, and text information, and the test data are carefully selected by human annotators to ensure that even humans make mistakes when visual information is missing. Experimental results show that using captions as prompts exploits the visual information effectively and surpasses state-of-the-art methods by up to 1.2% in Word Error Rate (WER), which also indicates that visual information is critical in our proposed Visual-ASR-EC dataset.
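The gated fusion mentioned in the summary can be pictured as a learned gate that mixes text features with visual features. Below is a minimal PyTorch sketch of one such layer; the class name, projections, and dimensions are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of a gated-fusion layer for combining text and visual
# features. Names and shapes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, text_dim: int, visual_dim: int, hidden_dim: int):
        super().__init__()
        # Project both modalities into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # The gate decides, per dimension, how much visual evidence to admit.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, text_feat: torch.Tensor, visual_feat: torch.Tensor) -> torch.Tensor:
        # Assumes the caller has expanded visual_feat to (batch, seq, visual_dim).
        h_t = self.text_proj(text_feat)      # (batch, seq, hidden)
        h_v = self.visual_proj(visual_feat)  # (batch, seq, hidden)
        g = torch.sigmoid(self.gate(torch.cat([h_t, h_v], dim=-1)))
        # Convex combination: g near 1 keeps the textual reading,
        # g near 0 defers to the image, e.g., to disambiguate homophones.
        return g * h_t + (1.0 - g) * h_v
```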
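The captions-as-prompts method reduces the visual modality to text: a caption is generated from the image and prepended to the ASR hypothesis before a text-to-text model rewrites it. The sketch below illustrates the idea with off-the-shelf Hugging Face pipelines; the specific models and the prompt template are assumptions for illustration, and a real corrector would be fine-tuned on parallel ASR error data rather than used zero-shot.

```python
# Illustrative sketch of captions-as-prompts for ASR error correction.
# Model choices and the prompt template are assumptions, not the paper's setup.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
corrector = pipeline("text2text-generation", model="google/flan-t5-base")

def correct_with_caption(image_path: str, asr_hypothesis: str) -> str:
    # Describe the image, then inject the caption as textual context.
    caption = captioner(image_path)[0]["generated_text"]
    prompt = f"image: {caption} | fix the transcript: {asr_hypothesis}"
    return corrector(prompt, max_new_tokens=64)[0]["generated_text"]

print(correct_with_caption("scene.jpg", "the cat sat on the nat"))
```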
DOI: 10.48550/arxiv.2303.10160