VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-shot Voice Conversion

Bibliographic details

Main authors: Wang, Disong; Deng, Liqun; Yeung, Yu Ting; Chen, Xiao; Liu, Xunying; Meng, Helen
Format: Article
Language: English
Published: 2021-06-18
DOI: 10.48550/arxiv.2106.10132
Source: arXiv.org
Subjects: Computer Science - Computation and Language; Computer Science - Multimedia; Computer Science - Sound
Online access: Order full text
Description:

One-shot voice conversion (VC), which performs conversion across arbitrary speakers with only a single target-speaker utterance for reference, can be effectively achieved by speech representation disentanglement. Existing work generally ignores the correlation between different speech representations during training, which causes leakage of content information into the speaker representation and thus degrades VC performance. To alleviate this issue, we employ vector quantization (VQ) for content encoding and introduce mutual information (MI) as the correlation metric during training, to achieve proper disentanglement of content, speaker and pitch representations, by reducing their inter-dependencies in an unsupervised manner. Experimental results reflect the superiority of the proposed method in learning effective disentangled speech representations for retaining source linguistic content and intonation variations, while capturing target speaker characteristics. In doing so, the proposed approach achieves higher speech naturalness and speaker similarity than current state-of-the-art one-shot VC systems. Our code, pre-trained models and demo are available at https://github.com/Wendison/VQMIVC.
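
The abstract names two concrete mechanisms: vector quantization (VQ) of the content encoding, and a mutual information (MI) penalty that reduces the inter-dependencies between content, speaker and pitch representations. The sketch below illustrates both ideas in PyTorch. It is not the authors' implementation (that lives at the GitHub link above): every class name, dimension and loss weight here is an assumption chosen for illustration, and the MI term follows a sampled CLUB-style variational upper bound (Cheng et al., 2020), one common way to obtain an MI penalty that the main model can minimize.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VectorQuantizer(nn.Module):
        """Quantize each content frame to its nearest codebook entry,
        using a straight-through gradient estimator."""

        def __init__(self, num_codes=512, dim=64, beta=0.25):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)
            self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
            self.beta = beta  # weight of the encoder commitment term

        def forward(self, z):                        # z: (batch, frames, dim)
            flat = z.reshape(-1, z.size(-1))
            dist = torch.cdist(flat, self.codebook.weight)  # L2 to every code
            idx = dist.argmin(dim=-1)                # nearest code per frame
            q = self.codebook(idx).view_as(z)
            # Codebook term pulls codes toward encoder outputs; commitment
            # term keeps encoder outputs close to their assigned codes.
            vq_loss = (F.mse_loss(q, z.detach())
                       + self.beta * F.mse_loss(z, q.detach()))
            q = z + (q - z).detach()                 # straight-through gradients
            return q, vq_loss

    class CLUBEstimator(nn.Module):
        """Sampled variational upper bound on MI(x; y), with a diagonal
        Gaussian q(y|x), in the style of CLUB (Cheng et al., 2020)."""

        def __init__(self, x_dim, y_dim, hidden=128):
            super().__init__()
            self.mu = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, y_dim))
            self.logvar = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, y_dim))

        def loglik(self, x, y):
            # Maximized when training the estimator itself, so that
            # q(y|x) tracks the current conditional distribution.
            mu, logvar = self.mu(x), self.logvar(x)
            return (-((y - mu) ** 2) / logvar.exp() - logvar).sum(-1).mean()

        def mi_upper_bound(self, x, y):
            # Minimized by the conversion model: matched pairs (x_i, y_i)
            # should score no better under q(y|x) than shuffled pairs.
            mu, logvar = self.mu(x), self.logvar(x)
            positive = (-((y - mu) ** 2) / logvar.exp()).sum(-1)
            y_neg = y[torch.randperm(y.size(0))]     # sampled marginal
            negative = (-((y_neg - mu) ** 2) / logvar.exp()).sum(-1)
            return ((positive - negative) / 2.0).mean()

    # Illustrative wiring with random stand-ins for encoder outputs.
    vq = VectorQuantizer()
    mi_cs = CLUBEstimator(x_dim=64, y_dim=256)       # content vs. speaker
    z_content = torch.randn(8, 100, 64)              # (batch, frames, dim)
    z_speaker = torch.randn(8, 256)                  # one embedding per utterance

    q_content, vq_loss = vq(z_content)
    mi_penalty = mi_cs.mi_upper_bound(q_content.mean(dim=1), z_speaker)
    loss = vq_loss + 0.1 * mi_penalty                # 0.1 is an arbitrary weight

In a full training loop the two parts would alternate: the CLUBEstimator maximizes loglik on the current representations, while the conversion model minimizes its reconstruction loss plus vq_loss and the MI upper bounds between the representation pairs (content-speaker, content-pitch, speaker-pitch).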