Effective parameter estimation methods for an ExcitNet model in generative text-to-speech systems

In this paper, we propose a high-quality generative text-to-speech (TTS) system using an effective spectrum and excitation estimation method. Our previous research verified the effectiveness of the ExcitNet-based speech generation model in a parametric TTS framework. However, the challenge remains to build a high-quality speech synthesis system because auxiliary conditional features estimated by a simple deep neural network often contain large prediction errors, and the errors are inevitably propagated throughout the autoregressive generation process of the ExcitNet vocoder. To generate more natural speech signals, we exploited a sequence-to-sequence (seq2seq) acoustic model with an attention-based generative network (e.g., Tacotron 2) to estimate the condition parameters of the ExcitNet vocoder. Because the seq2seq acoustic model accurately estimates spectral parameters, and because the ExcitNet model effectively generates the corresponding time-domain excitation signals, combining these two models can synthesize natural speech signals. Furthermore, we verified the merit of the proposed method in producing expressive speech segments by adopting a global style token-based emotion embedding method. The experimental results confirmed that the proposed system significantly outperforms the systems with a similarly configured conventional WaveNet vocoder and our best prior parametric TTS counterpart.

Detailed description

Saved in:
Bibliographic details
Main authors: Kwon, Ohsung, Song, Eunwoo, Kim, Jae-Min, Kang, Hong-Goo
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Kwon, Ohsung
Song, Eunwoo
Kim, Jae-Min
Kang, Hong-Goo
description In this paper, we propose a high-quality generative text-to-speech (TTS) system using an effective spectrum and excitation estimation method. Our previous research verified the effectiveness of the ExcitNet-based speech generation model in a parametric TTS framework. However, the challenge remains to build a high-quality speech synthesis system because auxiliary conditional features estimated by a simple deep neural network often contain large prediction errors, and the errors are inevitably propagated throughout the autoregressive generation process of the ExcitNet vocoder. To generate more natural speech signals, we exploited a sequence-to-sequence (seq2seq) acoustic model with an attention-based generative network (e.g., Tacotron 2) to estimate the condition parameters of the ExcitNet vocoder. Because the seq2seq acoustic model accurately estimates spectral parameters, and because the ExcitNet model effectively generates the corresponding time-domain excitation signals, combining these two models can synthesize natural speech signals. Furthermore, we verified the merit of the proposed method in producing expressive speech segments by adopting a global style token-based emotion embedding method. The experimental results confirmed that the proposed system significantly outperforms the systems with a similarly configured conventional WaveNet vocoder and our best prior parametric TTS counterpart.
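The two-stage pipeline the abstract describes (a seq2seq acoustic model predicting conditional features, followed by an autoregressive vocoder conditioned on them) can be illustrated with a minimal sketch. All function names below are hypothetical stand-ins, not the authors' actual implementation; the point is only to show how prediction errors in the conditional features feed into the autoregressive recursion.

```python
import math

def predict_acoustic_features(text, n_frames=4, n_dims=3):
    """Stand-in for the seq2seq (Tacotron 2-style) acoustic model:
    maps input text to a sequence of spectral parameter frames."""
    seed = sum(ord(c) for c in text)
    return [[math.sin(seed + f + d) for d in range(n_dims)]
            for f in range(n_frames)]

def generate_waveform(features, samples_per_frame=5):
    """Stand-in for the ExcitNet-style autoregressive vocoder:
    each sample depends on the previous sample and on the current
    frame's conditional features."""
    waveform = [0.0]
    for frame in features:
        # Toy "excitation" derived from the frame's features; any error
        # in `frame` propagates through the recursion below, which is why
        # accurate conditional-feature estimation matters in the paper.
        excitation = sum(frame) / len(frame)
        for _ in range(samples_per_frame):
            waveform.append(0.9 * waveform[-1] + 0.1 * excitation)
    return waveform[1:]

features = predict_acoustic_features("hello world")
audio = generate_waveform(features)
print(len(audio))  # 4 frames x 5 samples = 20
```

The design point being sketched: because the vocoder is autoregressive (each output sample feeds the next), improving the upstream feature estimator, as the paper does by replacing a simple DNN with a seq2seq model, reduces error accumulation over the whole waveform.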
doi_str_mv 10.48550/arxiv.1905.08486
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1905.08486
language eng
recordid cdi_arxiv_primary_1905_08486
source arXiv.org
subjects Computer Science - Learning
Computer Science - Sound
title Effective parameter estimation methods for an ExcitNet model in generative text-to-speech systems