Universal Source Separation with Weakly Labelled Data
Format: Article
Language: English
Abstract: Universal source separation (USS) is a fundamental research task for
computational auditory scene analysis, which aims to separate mono recordings
into individual source tracks. Three challenges stand in the way of a solution
to the audio source separation task. First, previous audio source
separation systems mainly focus on separating one or a limited number of
specific sources. There is a lack of research on building a unified system that
can separate arbitrary sources via a single model. Second, most previous
systems require clean source data to train a separator, while clean source data
are scarce. Third, there is a lack of USS systems that can automatically detect
and separate active sound classes at a hierarchical level. To use large-scale
weakly labelled/unlabelled audio data for audio source separation, we propose a
universal audio source separation framework containing: 1) an audio tagging
model trained on weakly labeled data as a query net; and 2) a conditional
source separation model that takes query net outputs as conditions to separate
arbitrary sound sources. We investigate various query nets, source separation
models, and training strategies and propose a hierarchical USS strategy to
automatically detect and separate sound classes from the AudioSet ontology. By
solely leveraging the weakly labelled AudioSet, our USS system is successful in
separating a wide variety of sound classes, including sound event separation,
music source separation, and speech enhancement. The USS system achieves an
average signal-to-distortion ratio improvement (SDRi) of 5.57 dB over 527 sound
classes of AudioSet; 10.57 dB on the DCASE 2018 Task 2 dataset; 8.12 dB on the
MUSDB18 dataset; an SDRi of 7.28 dB on the Slakh2100 dataset; and an SSNR of
9.00 dB on the voicebank-demand dataset. We release the source code at
https://github.com/bytedance/uss
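The abstract describes a two-part architecture: an audio tagging model acts as a query net, and its output embedding conditions a separation model. A minimal numpy sketch of that data flow is below; all function names are hypothetical, the networks are untrained toy projections, and the FiLM-style scale-and-shift modulation is only one plausible way a condition embedding could steer a separator, not necessarily the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def query_net(mixture, embed_dim=8):
    """Hypothetical audio-tagging query net: maps a mixture to a
    condition embedding (stand-in for a trained tagger such as a
    PANNs-style CNN trained on weakly labelled AudioSet)."""
    # Toy random projection; a real system would use learned weights.
    proj = rng.standard_normal((mixture.shape[-1], embed_dim))
    return np.tanh(mixture @ proj)

def conditional_separator(mixture, condition):
    """Hypothetical conditional separator: the condition embedding
    modulates the mixture via a FiLM-style scale and shift."""
    gamma = condition.mean()  # toy scale derived from the condition
    beta = condition.std()    # toy shift derived from the condition
    # A real separator would be a deep network (e.g. a UNet) whose
    # internal features are modulated; here we modulate the input.
    return gamma * mixture + beta

# Toy mono mixture (100 samples standing in for a real waveform).
mixture = rng.standard_normal(100)
cond = query_net(mixture[None, :])          # (1, 8) condition embedding
separated = conditional_separator(mixture, cond)
print(separated.shape)                      # -> (100,)
```

The key design point the abstract highlights is that the same separator serves arbitrary sources: only the query-net condition changes per target class, so one model covers all 527 AudioSet classes.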
DOI: 10.48550/arxiv.2305.07447