Class-conditional embeddings for music source separation
Saved in:
Main authors: Seetharaman, Prem; Wichern, Gordon; Venkataramani, Shrikant; Le Roux, Jonathan
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Sound; Statistics - Machine Learning
Online access: Order full text
creator | Seetharaman, Prem; Wichern, Gordon; Venkataramani, Shrikant; Le Roux, Jonathan |
description | Isolating individual instruments in a musical mixture has a myriad of
potential applications, and seems imminently achievable given the levels of
performance reached by recent deep learning methods. While most musical source
separation techniques learn an independent model for each instrument, we
propose using a common embedding space for the time-frequency bins of all
instruments in a mixture inspired by deep clustering and deep attractor
networks. Additionally, an auxiliary network is used to generate parameters of
a Gaussian mixture model (GMM) where the posterior distribution over GMM
components in the embedding space can be used to create a mask that separates
individual sources from a mixture. In addition to outperforming a
mask-inference baseline on the MUSDB-18 dataset, our embedding space is easily
interpretable and can be used for query-based separation. |
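The abstract above describes using the posterior distribution over GMM components in the embedding space as a soft separation mask. The following is a rough, hypothetical sketch of just that masking step — not the paper's actual implementation (the auxiliary network that generates the GMM parameters is omitted entirely, and diagonal covariances are assumed for simplicity):

```python
import numpy as np

def gmm_posterior_masks(embeddings, means, variances, priors):
    """Soft separation masks as GMM posteriors over T-F bin embeddings.

    embeddings: (N, D) array, one D-dim embedding per time-frequency bin
    means:      (K, D) component means, one per source class
    variances:  (K, D) diagonal covariances
    priors:     (K,)   mixture weights
    Returns an (N, K) array of responsibilities; column k is the mask
    for source k.
    """
    N, _ = embeddings.shape
    K = means.shape[0]
    log_probs = np.empty((N, K))
    for k in range(K):
        diff = embeddings - means[k]  # (N, D)
        log_probs[:, k] = (
            np.log(priors[k])
            - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
            - 0.5 * np.sum(diff ** 2 / variances[k], axis=1)
        )
    # Normalize in log space for numerical stability (softmax over components).
    log_probs -= log_probs.max(axis=1, keepdims=True)
    probs = np.exp(log_probs)
    return probs / probs.sum(axis=1, keepdims=True)
```

Each column of the returned responsibilities, reshaped back to the spectrogram's (time, frequency) shape, acts as a soft mask that multiplies the mixture magnitude spectrogram to estimate one source.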
doi_str_mv | 10.48550/arxiv.1811.03076 |
format | Article |
creationdate | 2018-11-07 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1811.03076 |
language | eng |
recordid | cdi_arxiv_primary_1811_03076 |
source | arXiv.org |
subjects | Computer Science - Learning; Computer Science - Sound; Statistics - Machine Learning |
title | Class-conditional embeddings for music source separation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T19%3A05%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Class-conditional%20embeddings%20for%20music%20source%20separation&rft.au=Seetharaman,%20Prem&rft.date=2018-11-07&rft_id=info:doi/10.48550/arxiv.1811.03076&rft_dat=%3Carxiv_GOX%3E1811_03076%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |