Speaker Conditional WaveRNN: Towards Universal Neural Vocoder for Unseen Speaker and Recording Conditions

Recent advancements in deep learning have led to human-level performance in single-speaker speech synthesis. However, such systems still show limitations in speech quality when generalized to multi-speaker models, especially for unseen speakers and unseen recording conditions. For instance, conventional neural vocoders are tuned to the training speaker and generalize poorly to unseen speakers. In this work, we propose a variant of WaveRNN, referred to as speaker conditional WaveRNN (SC-WaveRNN), targeting an efficient universal vocoder that works even for unseen speakers and recording conditions. In contrast to standard WaveRNN, SC-WaveRNN exploits additional information provided in the form of speaker embeddings. Trained on publicly available data, SC-WaveRNN significantly outperforms baseline WaveRNN on both subjective and objective metrics: in MOS, it improves by about 23% for seen speakers under seen recording conditions and by up to 95% for unseen speakers under unseen conditions. Finally, we extend this work by implementing multi-speaker text-to-speech (TTS) synthesis similar to zero-shot speaker adaptation. In listener preference tests, our system was preferred over the baseline TTS system by 60% vs. 15.5% for seen speakers and by 60.9% vs. 32.6% for unseen speakers.
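The core idea stated in the abstract — conditioning a WaveRNN-style recurrent vocoder on a speaker embedding at every time step, so that a single model can serve many (including unseen) speakers — can be illustrated with a minimal PyTorch-style sketch. This is a hypothetical illustration based only on the abstract; the class, dimensions, and variable names are our own assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): a recurrent vocoder that
# conditions on a speaker embedding, as described in the abstract.
# All names and dimensions here are hypothetical.
import torch
import torch.nn as nn

class SpeakerConditionalVocoder(nn.Module):
    def __init__(self, mel_dim=80, spk_dim=256, hidden=512, quant_levels=256):
        super().__init__()
        # The speaker embedding is concatenated with the acoustic features
        # and the previous sample at every step, so one model can adapt
        # its output to the identity encoded in the embedding.
        self.rnn = nn.GRU(mel_dim + spk_dim + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, quant_levels)  # distribution over quantized samples

    def forward(self, mels, spk_emb, prev_samples):
        # mels: (B, T, mel_dim); spk_emb: (B, spk_dim); prev_samples: (B, T, 1)
        spk = spk_emb.unsqueeze(1).expand(-1, mels.size(1), -1)
        x = torch.cat([mels, spk, prev_samples], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)  # logits; train with cross-entropy over sample bins

# Usage: in a zero-shot setting the speaker embedding would come from a
# pretrained speaker encoder applied to a reference utterance; here it is
# just a random placeholder.
model = SpeakerConditionalVocoder()
mels = torch.randn(2, 100, 80)
spk_emb = torch.randn(2, 256)
prev = torch.randn(2, 100, 1)
logits = model(mels, spk_emb, prev)  # shape: (2, 100, 256)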

Bibliographic Details
Main Authors: Paul, Dipjyoti; Pantazis, Yannis; Stylianou, Yannis
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Sound
Online Access: https://arxiv.org/abs/2008.05289
DOI: 10.48550/arxiv.2008.05289
Published: 2020-08-09 (arXiv)
Source: arXiv.org