Speech Separation using Neural Audio Codecs with Embedding Loss
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Neural audio codecs have revolutionized audio processing by enabling speech tasks to be performed on highly compressed representations. Recent work has shown that speech separation can be achieved within these compressed domains, offering faster training and reduced inference costs. However, current approaches still rely on waveform-based loss functions, which require unnecessary decoding steps during training. We propose a novel embedding loss for neural audio codec-based speech separation that operates directly on compressed audio representations, eliminating the need for decoding during training. To validate our approach, we conduct comprehensive evaluations using both objective metrics and perceptual assessment techniques, including intrusive and non-intrusive methods. Our results demonstrate that the embedding loss can be used to train codec-based speech separation models with a 2x improvement in training speed and computational cost, while achieving better DNSMOS and STOI performance on the WSJ0-2mix dataset across three different pre-trained codecs.
DOI: 10.48550/arxiv.2411.17998
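
The record above only carries the abstract, but the training objective it describes (a loss computed directly between codec embeddings, so the decoder is never run during training) can be illustrated with a short sketch. Everything below is an assumption made for illustration: the `codec.encode` and `separator` interfaces, the tensor shapes, and the plain MSE objective are placeholders, not the paper's actual models, loss, or API.

```python
# Minimal PyTorch sketch of an embedding-domain separation loss.
# Assumptions: a frozen pre-trained codec exposing `encode(waveform) -> (B, T, D)`
# embeddings, and a separator network mapping mixture embeddings to one
# embedding sequence per source, shape (B, S, T, D).
import torch
import torch.nn.functional as F


def embedding_loss(pred_emb: torch.Tensor, target_emb: torch.Tensor) -> torch.Tensor:
    """MSE between predicted and target codec embeddings, (B, S, T, D)."""
    return F.mse_loss(pred_emb, target_emb)


def training_step(codec, separator, mixture_wav, source_wavs, optimizer):
    # Encode once with the frozen codec; no waveform decoding is needed,
    # which is the point of training in the embedding domain.
    with torch.no_grad():
        mix_emb = codec.encode(mixture_wav)                      # (B, T, D)
        tgt_emb = torch.stack(
            [codec.encode(s) for s in source_wavs], dim=1        # (B, S, T, D)
        )

    pred_emb = separator(mix_emb)                                # (B, S, T, D)
    loss = embedding_loss(pred_emb, tgt_emb)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, speaker separation losses are usually wrapped in permutation-invariant training (PIT) so that the predicted sources can match the targets in any order; that wrapper is omitted here for brevity, and whether the paper uses it is not stated in this record.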