SlideSpeech: A Large-Scale Slide-Enriched Audio-Visual Corpus
Format: Article
Language: English
Online access: Order full text
Abstract: Multi-modal automatic speech recognition (ASR) techniques aim to leverage
additional modalities to improve the performance of speech recognition systems.
While existing approaches primarily focus on video or contextual information,
the use of supplementary textual information has been overlooked. Recognizing
the abundance of online conference videos with slides, which provide rich
domain-specific information in the form of text and images, we release
SlideSpeech, a large-scale audio-visual corpus enriched with slides. The corpus
contains 1,705 videos totaling over 1,000 hours, with 473 hours of high-quality
transcribed speech, and includes a significant amount of real-time synchronized
slides. In this work, we present the pipeline for constructing the corpus and
propose baseline methods for utilizing textual information in the visual slide
context. By applying keyword extraction and contextual ASR methods in the
benchmark system, we demonstrate the potential to improve speech recognition
performance by incorporating textual information from supplementary video
slides.
DOI: 10.48550/arxiv.2309.05396