Any-to-Many Voice Conversion With Location-Relative Sequence-to-Sequence Modeling

This paper proposes an any-to-many location-relative, sequence-to-sequence (seq2seq), non-parallel voice conversion approach, which utilizes text supervision during training. In this approach, we combine a bottle-neck feature extractor (BNE) with a seq2seq synthesis module. During the training stage...

Detailed Description

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, Vol. 29, pp. 1717-1728
Main Authors: Liu, Songxiang, Cao, Yuewen, Wang, Disong, Wu, Xixin, Liu, Xunying, Meng, Helen
Format: Article
Language: English
Subjects:
container_end_page 1728
container_issue
container_start_page 1717
container_title IEEE/ACM transactions on audio, speech, and language processing
container_volume 29
creator Liu, Songxiang
Cao, Yuewen
Wang, Disong
Wu, Xixin
Liu, Xunying
Meng, Helen
description This paper proposes an any-to-many, location-relative, sequence-to-sequence (seq2seq), non-parallel voice conversion approach, which utilizes text supervision during training. In this approach, we combine a bottle-neck feature extractor (BNE) with a seq2seq synthesis module. During the training stage, an encoder-decoder-based hybrid connectionist-temporal-classification-attention (CTC-attention) phoneme recognizer is trained, whose encoder has a bottle-neck layer. A BNE is obtained from the phoneme recognizer and is utilized to extract speaker-independent, dense and rich spoken content representations from spectral features. Then a multi-speaker, location-relative-attention-based seq2seq synthesis model is trained to reconstruct spectral features from the bottle-neck features, conditioning on speaker representations for speaker identity control in the generated speech. To mitigate the difficulty of aligning long sequences with seq2seq models, we down-sample the input spectral features along the temporal dimension and equip the synthesis model with a discretized mixture-of-logistics (MoL) attention mechanism. Since the phoneme recognizer is trained on a large speech recognition corpus, the proposed approach can conduct any-to-many voice conversion. Objective and subjective evaluations show that the proposed any-to-many approach achieves superior voice conversion performance in terms of both naturalness and speaker similarity. Ablation studies are conducted to confirm the effectiveness of the feature selection and model design strategies in the proposed approach. The proposed VC approach can readily be extended to support any-to-any VC (also known as one/few-shot VC), and achieves high performance according to objective and subjective evaluations.
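As a rough illustration of the discretized mixture-of-logistics attention named in the abstract, the sketch below shows a single location-relative attention step in PyTorch. It is a minimal sketch, assuming a standard formulation in which each mixture component's mean can only move forward and the attention weight for an encoder position is the logistic probability mass over a unit-width bin around it; the class name, tensor shapes, and hyperparameters are illustrative assumptions and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoLAttention(nn.Module):
    """Sketch of one discretized mixture-of-logistics (MoL) attention step.

    The decoder query predicts, per mixture component, a non-negative shift
    of the mean, a positive scale, and a mixture weight. Because the means
    only move forward, the alignment over the (down-sampled) bottle-neck
    feature sequence stays monotonic.
    """

    def __init__(self, query_dim: int, n_mixtures: int = 5):
        super().__init__()
        self.n_mixtures = n_mixtures
        # Three parameters per component: mean shift, log-scale, weight logit.
        self.param_net = nn.Linear(query_dim, 3 * n_mixtures)

    def forward(self, query, prev_means, memory_lengths):
        """
        query:          (batch, query_dim)   decoder state at this step
        prev_means:     (batch, n_mixtures)  component means from previous step
        memory_lengths: (batch,)             encoder sequence lengths
        Returns attention weights (batch, max_len) and the updated means.
        """
        batch = query.size(0)
        max_len = int(memory_lengths.max().item())

        params = self.param_net(query).view(batch, self.n_mixtures, 3)
        delta = F.softplus(params[..., 0])           # forward-only mean shift
        scale = F.softplus(params[..., 1]) + 1e-4    # strictly positive scale
        weight = torch.softmax(params[..., 2], dim=-1)

        means = prev_means + delta                   # (batch, n_mixtures)

        # Discretize: mass of each logistic over the bin [j - 0.5, j + 0.5].
        positions = torch.arange(max_len, device=query.device).float()
        pos = positions.view(1, 1, -1)               # (1, 1, max_len)
        mu = means.unsqueeze(-1)                     # (batch, K, 1)
        s = scale.unsqueeze(-1)
        cdf_plus = torch.sigmoid((pos + 0.5 - mu) / s)
        cdf_minus = torch.sigmoid((pos - 0.5 - mu) / s)
        alignment = (weight.unsqueeze(-1) * (cdf_plus - cdf_minus)).sum(dim=1)

        # Zero out padded encoder positions.
        mask = positions.view(1, -1) >= memory_lengths.view(-1, 1).float()
        alignment = alignment.masked_fill(mask, 0.0)
        return alignment, means
```

In a sketch like this, `prev_means` would typically be initialized to zeros at the first decoder step so the alignment starts at the beginning of the bottle-neck feature sequence and then advances monotonically as decoding proceeds.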
doi_str_mv 10.1109/TASLP.2021.3076867
format Article
identifier ISSN: 2329-9290
ispartof IEEE/ACM transactions on audio, speech, and language processing, 2021, Vol.29, p.1717-1728
issn 2329-9290
2329-9304
language eng
recordid cdi_proquest_journals_2530114479
source IEEE Electronic Library (IEL)
subjects Ablation
Acoustics
Any-to-many
Coders
Computational modeling
Conversion
Decoding
Encoders-Decoders
Feature extraction
Hidden Markov models
location relative attention
Phonemes
Pipelines
Representations
sequence-to-sequence modeling
Spectra
Speech recognition
Synthesis
Training
voice conversion
Voice recognition
title Any-to-Many Voice Conversion With Location-Relative Sequence-to-Sequence Modeling