M-BEST-RQ: A Multi-Channel Speech Foundation Model for Smart Glasses
Format: Article
Language: English
Abstract: The growing popularity of multi-channel wearable devices, such as smart glasses, has led to a surge of applications such as targeted speech recognition and enhanced hearing. However, current approaches to solve these tasks use independently trained models, which may not benefit from large amounts of unlabeled data. In this paper, we propose M-BEST-RQ, the first multi-channel speech foundation model for smart glasses, which is designed to leverage large-scale self-supervised learning (SSL) in an array-geometry agnostic approach. While prior work on multi-channel speech SSL only evaluated on simulated settings, we curate a suite of real downstream tasks to evaluate our model, namely (i) conversational automatic speech recognition (ASR), (ii) spherical active source localization, and (iii) glasses wearer voice activity detection, which are sourced from the MMCSG and EasyCom datasets. We show that a general-purpose M-BEST-RQ encoder is able to match or surpass supervised models across all tasks. For the conversational ASR task in particular, using only 8 hours of labeled speech, our model outperforms a supervised ASR baseline that is trained on 2000 hours of labeled data, which demonstrates the effectiveness of our approach.
DOI: 10.48550/arxiv.2409.11494
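The abstract builds on BEST-RQ-style self-supervised pretraining, in which a frozen random projection and a frozen random codebook turn speech features into discrete targets for masked prediction. The sketch below illustrates only that target-generation step; the shapes, codebook size, variable names, and the single-channel simplification are illustrative assumptions and do not reflect the paper's actual multi-channel implementation.

```python
# Minimal sketch of a BEST-RQ-style random-projection quantizer (illustrative only;
# M-BEST-RQ extends this idea to multi-channel input in an array-geometry agnostic way).
import numpy as np

rng = np.random.default_rng(0)

feat_dim, codebook_size, code_dim = 80, 8192, 16  # assumed sizes, not from the paper

# Frozen (never trained) random projection and codebook, as in BEST-RQ.
projection = rng.standard_normal((feat_dim, code_dim))
codebook = rng.standard_normal((codebook_size, code_dim))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

def quantize(features: np.ndarray) -> np.ndarray:
    """Map (num_frames, feat_dim) speech frames to discrete masked-prediction targets."""
    proj = features @ projection                         # (num_frames, code_dim)
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)
    # Nearest codebook entry (by cosine similarity) becomes the SSL target label.
    return np.argmax(proj @ codebook.T, axis=1)          # (num_frames,)

# Example: 100 frames of 80-dim features -> 100 discrete SSL targets.
targets = quantize(rng.standard_normal((100, feat_dim)))
print(targets.shape, targets[:5])
```

During pretraining, the encoder sees masked input frames and is trained to predict these frozen quantizer labels at the masked positions; the quantizer itself never receives gradients.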