Sound Event Detection Utilizing Graph Laplacian Regularization with Event Co-occurrence
A limited number of types of sound event occur in an acoustic scene, and some sound events tend to co-occur in the scene; for example, the sound events "dishes" and "glass jingling" are likely to co-occur in the acoustic scene "cooking". In this paper, we propose a method of sound event detection using graph Laplacian regularization with sound event co-occurrence taken into account. In the proposed method, the occurrences of sound events are expressed as a graph whose nodes indicate the frequencies of event occurrence and whose edges indicate the sound event co-occurrences. This graph representation is then utilized for model training of sound event detection, which is optimized under an objective function with a regularization term considering the graph structure of sound event occurrence and co-occurrence. Evaluation experiments using the TUT Sound Events 2016 and 2017 datasets and the TUT Acoustic Scenes 2016 dataset show that the proposed method improves the performance of sound event detection by 7.9 percentage points compared with the conventional CNN-BiGRU-based detection method in terms of the segment-based F1 score. In particular, the experimental results indicate that the proposed method detects co-occurring sound events more accurately than the conventional method.
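The graph Laplacian regularization described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' exact formulation: the co-occurrence counts `A`, the class names, and the helper `laplacian_penalty` are all hypothetical, and the penalty shown is the standard smoothness term Tr(YᵀLY) with L = D - A, which encourages strongly co-occurring event classes to receive similar activations.

```python
import numpy as np

# Hypothetical co-occurrence counts among 4 sound event classes
# (e.g. "dishes", "glass jingling", "water tap", "cutlery") -- illustrative numbers only.
A = np.array([
    [0, 5, 2, 4],
    [5, 0, 1, 3],
    [2, 1, 0, 2],
    [4, 3, 2, 0],
], dtype=float)

# Graph Laplacian: L = D - A, with D the diagonal degree matrix.
D = np.diag(A.sum(axis=1))
L = D - A

def laplacian_penalty(Y, L):
    """Tr(Y^T L Y) = 0.5 * sum_ij A_ij * ||y_i - y_j||^2.

    Y has one row per event class (here: columns are time frames).
    The penalty is small when classes connected by heavy co-occurrence
    edges have similar output rows, i.e. tend to be active together.
    """
    return np.trace(Y.T @ L @ Y)

# Y: per-class model outputs (rows = classes, columns = time frames).
rng = np.random.default_rng(0)
Y = rng.random((4, 10))
reg = laplacian_penalty(Y, L)
# In training one would minimize: total_loss = detection_loss + lambda_reg * reg,
# where lambda_reg is a tuning weight balancing detection accuracy and smoothness.
```

A sanity check on the sketch: if all classes produce identical outputs, the penalty is exactly zero, since every pairwise difference in the sum vanishes.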
Published in: | arXiv.org 2020-04 |
---|---|
Main authors: | Imoto, Keisuke; Kyochi, Seisuke |
Format: | Article |
Language: | English |
Subjects: | Acoustics; Computer Science - Sound; Regularization |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Imoto, Keisuke; Kyochi, Seisuke |
description | A limited number of types of sound event occur in an acoustic scene and some sound events tend to co-occur in the scene; for example, the sound events "dishes" and "glass jingling" are likely to co-occur in the acoustic scene "cooking". In this paper, we propose a method of sound event detection using graph Laplacian regularization with sound event co-occurrence taken into account. In the proposed method, the occurrences of sound events are expressed as a graph whose nodes indicate the frequencies of event occurrence and whose edges indicate the sound event co-occurrences. This graph representation is then utilized for the model training of sound event detection, which is optimized under an objective function with a regularization term considering the graph structure of sound event occurrence and co-occurrence. Evaluation experiments using the TUT Sound Events 2016 and 2017 datasets, and the TUT Acoustic Scenes 2016 dataset show that the proposed method improves the performance of sound event detection by 7.9 percentage points compared with the conventional CNN-BiGRU-based detection method in terms of the segment-based F1 score. In particular, the experimental results indicate that the proposed method enables the detection of co-occurring sound events more accurately than the conventional method. |
doi_str_mv | 10.48550/arxiv.2004.12046 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2004_12046 |
source | arXiv.org; Free E-Journals |
subjects | Acoustics; Computer Science - Sound; Cooking; Graph representations; Graph theory; Graphical representations; Performance enhancement; Regularization; Sound |
title | Sound Event Detection Utilizing Graph Laplacian Regularization with Event Co-occurrence |