CtrSVDD: A Benchmark Dataset and Baseline Analysis for Controlled Singing Voice Deepfake Detection


Bibliographic Details
Published in: arXiv.org, 2024-06
Main Authors: Zang, Yongyi; Shi, Jiatong; Zhang, You; Yamamoto, Ryuichi; Han, Jionghao; Tang, Yuxun; Xu, Shengyuan; Zhao, Wenxiao; Guo, Jing; Toda, Tomoki; Duan, Zhiyao
Format: Article
Language: English
Online Access: Full text
Description: Recent singing voice synthesis and conversion advancements necessitate robust singing voice deepfake detection (SVDD) models. Current SVDD datasets face challenges due to limited controllability, diversity in deepfake methods, and licensing restrictions. Addressing these gaps, we introduce CtrSVDD, a large-scale, diverse collection of bonafide and deepfake singing vocals. These vocals are synthesized using state-of-the-art methods from publicly accessible singing voice datasets. CtrSVDD includes 47.64 hours of bonafide and 260.34 hours of deepfake singing vocals, spanning 14 deepfake methods and involving 164 singer identities. We also present a baseline system with flexible front-end features, evaluated against a structured train/dev/eval split. The experiments show the importance of feature selection and highlight a need for generalization towards deepfake methods that deviate further from training distribution. The CtrSVDD dataset and baselines are publicly accessible.
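The abstract describes a baseline detector evaluated on a structured train/dev/eval split. Deepfake-detection baselines of this kind are conventionally scored with the Equal Error Rate (EER); this record does not name the paper's metric, so the following is only an illustrative sketch of how an EER would be computed from a detector's per-clip scores, using made-up scores rather than any results from the paper.

```python
import numpy as np

def compute_eer(bonafide_scores, deepfake_scores):
    """Equal Error Rate: the operating point where the false-acceptance
    rate (deepfakes accepted as bonafide) equals the false-rejection rate
    (bonafide rejected as deepfake). Higher score = more bonafide-like."""
    thresholds = np.sort(np.concatenate([bonafide_scores, deepfake_scores]))
    # At each threshold t, a clip is accepted as bonafide when score >= t.
    far = np.array([(deepfake_scores >= t).mean() for t in thresholds])
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))  # closest crossing of the two rates
    return float((far[idx] + frr[idx]) / 2)

# Toy scores (hypothetical, not from the paper): one deepfake clip
# overlaps the bonafide score range, so the EER is nonzero.
bona = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
fake = np.array([0.1, 0.3, 0.2, 0.75, 0.05])
print(compute_eer(bona, fake))
```

A lower EER means the score distributions of bonafide and deepfake vocals are better separated; perfectly separated scores give an EER of 0.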
DOI: 10.48550/arxiv.2406.02438
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects: Accessibility
Computer Science - Multimedia
Computer Science - Sound
Datasets
Deception
Singing
Voice recognition