I see what you hear: a vision-inspired method to localize words
Format: Article
Language: English
Abstract: This paper explores the possibility of using visual object detection techniques for word localization in speech data. Object detection has been thoroughly studied in the contemporary literature for visual data. Noting that an audio signal can be interpreted as a 1-dimensional image, object localization techniques can be fundamentally useful for word localization. Building upon this idea, we propose a lightweight solution for word detection and localization. We use bounding box regression for word localization, which enables our model to detect the occurrence, offset, and duration of keywords in a given audio stream. We experiment with LibriSpeech and train a model to localize 1000 words. Compared to existing work, our method reduces model size by 94% and improves the F1 score by 6.5%.
DOI: 10.48550/arxiv.2210.13567
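
To illustrate the core idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of bounding box regression applied to audio treated as a 1-dimensional image: a small 1-D convolutional backbone over mel-spectrogram frames, a classification head that scores each of the 1000 keywords (plus background) at every feature-map position, and a regression head that predicts an (offset, duration) pair per position. The class name `Word1DDetector`, the layer sizes, and the input shapes are assumptions made for this sketch, not details from the paper.

```python
# Illustrative sketch only: 1-D "object detection" over audio features,
# predicting keyword scores and (offset, duration) word boxes per time step.
import torch
import torch.nn as nn


class Word1DDetector(nn.Module):
    def __init__(self, n_mels=80, n_keywords=1000, hidden=128):
        super().__init__()
        # Lightweight 1-D convolutional backbone over mel-spectrogram frames.
        self.backbone = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Classification head: one score per keyword, plus a background class.
        self.cls_head = nn.Conv1d(hidden, n_keywords + 1, kernel_size=1)
        # Regression head: (offset, duration) of the word "box" per position.
        self.box_head = nn.Conv1d(hidden, 2, kernel_size=1)

    def forward(self, mel):                  # mel: (batch, n_mels, frames)
        feats = self.backbone(mel)           # (batch, hidden, frames // 4)
        cls_logits = self.cls_head(feats)    # (batch, n_keywords + 1, T')
        boxes = self.box_head(feats)         # (batch, 2, T'): offset, duration
        return cls_logits, boxes


# Usage example with random features standing in for an 80-bin mel-spectrogram
# of roughly 4 seconds of audio at a 10 ms hop size.
model = Word1DDetector()
mel = torch.randn(1, 80, 400)
cls_logits, boxes = model(mel)
print(cls_logits.shape, boxes.shape)  # (1, 1001, 100) and (1, 2, 100)
```

In a detector of this kind, training would attach a classification loss to the keyword scores and a regression loss (e.g. L1) to the (offset, duration) predictions at positions matched to ground-truth word intervals; the details of matching and losses here are left open, as the abstract does not specify them.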