CatNet: music source separation system with mix-audio augmentation
Format: Article
Language: English
Abstract: Music source separation (MSS) is the task of separating a music piece into individual sources, such as vocals and accompaniment. Recently, neural-network-based methods have been applied to the MSS problem; they can be categorized into spectrogram-based and time-domain methods. However, there is a lack of research on using the complementary information of spectrogram and time-domain inputs for MSS. In this article, we propose a CatNet framework that concatenates a UNet separation branch using the spectrogram as input and a WavUNet separation branch using the time-domain waveform as input. We propose an end-to-end, fully differentiable system that incorporates the spectrogram calculation into CatNet. In addition, we propose a novel mix-audio data augmentation method that randomly mixes audio segments from the same source to create augmented audio segments for training. Our proposed CatNet MSS system achieves a state-of-the-art vocals separation source-to-distortion ratio (SDR) of 7.54 dB, outperforming MMDenseNet (6.57 dB) on the MUSDB18 dataset.
DOI: 10.48550/arxiv.2102.09966