Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Speech activity detection (SAD), which often rests on the assumption that noise is "more" stationary than speech, is particularly challenging in non-stationary environments, because the time variance of the acoustic scene makes it difficult to discriminate speech from noise. We propose two approaches to SAD: one based on statistical signal processing and one using neural networks. The former employs sophisticated signal processing to track the noise and speech energies and is meant to support the case for a resource-efficient, unsupervised signal processing approach. The latter introduces a recurrent network layer that operates on short segments of the input speech to perform temporal smoothing in the presence of non-stationary noise. The systems are tested on the Fearless Steps Challenge, which consists of transmission data from the Apollo-11 space mission. The statistical SAD achieves detection performance comparable to earlier proposed neural-network-based SADs, while the neural-network-based approach achieves a decision cost function of 1.07% on the evaluation set of the 2020 Fearless Steps Challenge, setting a new state of the art.
DOI: 10.48550/arxiv.2005.09913
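
To make the neural approach in the abstract more concrete, the sketch below shows one plausible form of a recurrent layer that operates on short segments of frame-level features to temporally smooth speech/non-speech decisions. This is not the authors' implementation: the use of PyTorch, a GRU, the 50-frame segment length, the 40-dimensional input features, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's architecture) of segment-wise
# recurrent smoothing for speech activity detection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegmentwiseRecurrentSAD(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, segment_len=50):
        super().__init__()
        self.segment_len = segment_len            # frames per segment (assumed)
        self.frontend = nn.Linear(feat_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # per-frame speech posterior

    def forward(self, feats):                     # feats: (batch, frames, feat_dim)
        b, t, d = feats.shape
        pad = (-t) % self.segment_len             # pad so frames split evenly
        if pad:
            feats = F.pad(feats, (0, 0, 0, pad))
        t_padded = feats.shape[1]
        # Reshape into short segments and run the recurrent layer per segment,
        # so smoothing is local and can adapt to a changing acoustic scene.
        segs = feats.reshape(b * t_padded // self.segment_len, self.segment_len, d)
        h = torch.relu(self.frontend(segs))
        h, _ = self.rnn(h)                        # temporal smoothing within segment
        p = torch.sigmoid(self.head(h)).reshape(b, t_padded)
        return p[:, :t]                           # per-frame speech probabilities


# Usage example with random tensors standing in for frame-level features.
if __name__ == "__main__":
    model = SegmentwiseRecurrentSAD()
    x = torch.randn(2, 123, 40)                   # 2 utterances, 123 frames, 40 dims
    probs = model(x)
    print(probs.shape)                            # torch.Size([2, 123])
```

Thresholding the per-frame probabilities would yield speech/non-speech labels; the segment-wise recurrence is the part that corresponds to the abstract's idea of smoothing decisions over short spans rather than over the whole recording.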