SAR: Self-Supervised Anti-Distortion Representation for End-To-End Speech Model
In recent Text-to-Speech (TTS) systems, a neural vocoder often generates speech samples by conditioning solely on acoustic features predicted by an acoustic model. However, the predicted acoustic features always contain distortions relative to the ground truth, especially in the common case of poor acoustic modeling due to low-quality training data. To overcome this limitation, we propose a Self-supervised learning framework that learns an Anti-distortion acoustic Representation (SAR) to replace hand-crafted acoustic features, by introducing a distortion prior into the auto-encoder pre-training process. Both objective and subjective evaluations show that the acoustic representation learned by the proposed framework is more robust to distortion than the most commonly used mel-spectrogram.
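This record contains only the abstract, not the authors' implementation. Purely as an illustration of the idea the abstract describes, the sketch below pre-trains a denoising-style auto-encoder whose bottleneck serves as a distortion-robust acoustic representation. The network sizes, the additive-noise-plus-temporal-smoothing distortion model, and all names (`SARAutoEncoder`, `distort`) are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a denoising auto-encoder whose
# bottleneck acts as an anti-distortion acoustic representation.
# Sizes and the distortion model are illustrative assumptions.
import torch
import torch.nn as nn

N_MELS = 80      # assumed mel-spectrogram dimensionality
REPR_DIM = 256   # assumed size of the learned representation

class SARAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder maps (possibly distorted) mel frames to the representation.
        self.encoder = nn.Sequential(
            nn.Linear(N_MELS, 512), nn.ReLU(), nn.Linear(512, REPR_DIM))
        # Decoder reconstructs the clean mel frames from the representation.
        self.decoder = nn.Sequential(
            nn.Linear(REPR_DIM, 512), nn.ReLU(), nn.Linear(512, N_MELS))

    def forward(self, mel):
        z = self.encoder(mel)          # anti-distortion representation
        return self.decoder(z), z

def distort(mel, noise_std=0.1):
    """Distortion prior (assumed): additive Gaussian noise plus light
    temporal smoothing, mimicking over-smoothed acoustic-model outputs."""
    noisy = mel + noise_std * torch.randn_like(mel)
    kernel = torch.ones(1, 1, 3) / 3.0
    # Smooth along time: (batch, time, mels) -> conv over time per mel bin.
    x = noisy.transpose(1, 2).reshape(-1, 1, noisy.size(1))
    smoothed = nn.functional.conv1d(x, kernel, padding=1)
    return smoothed.reshape(noisy.size(0), N_MELS, -1).transpose(1, 2)

model = SARAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mel = torch.randn(8, 100, N_MELS)          # stand-in for ground-truth mels
recon, _ = model(distort(mel))             # encode a distorted input
loss = nn.functional.mse_loss(recon, mel)  # reconstruct the clean target
loss.backward()
opt.step()
```

In a full TTS pipeline one would presumably train the acoustic model to predict the bottleneck representation and the vocoder to synthesize waveforms from it, so that prediction errors land in a region the encoder has already learned to map to clean reconstructions; the abstract does not spell out these training details.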
Saved in:
Main authors: | Wang, Jianzong; Zhang, Xulong; Tang, Haobin; Sun, Aolan; Cheng, Ning; Xiao, Jing |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Sound |
Online access: | https://arxiv.org/abs/2304.11547 |
creator | Wang, Jianzong; Zhang, Xulong; Tang, Haobin; Sun, Aolan; Cheng, Ning; Xiao, Jing |
description | In recent Text-to-Speech (TTS) systems, a neural vocoder often generates speech samples by conditioning solely on acoustic features predicted by an acoustic model. However, the predicted acoustic features always contain distortions relative to the ground truth, especially in the common case of poor acoustic modeling due to low-quality training data. To overcome this limitation, we propose a Self-supervised learning framework that learns an Anti-distortion acoustic Representation (SAR) to replace hand-crafted acoustic features, by introducing a distortion prior into the auto-encoder pre-training process. Both objective and subjective evaluations show that the acoustic representation learned by the proposed framework is more robust to distortion than the most commonly used mel-spectrogram. |
doi_str_mv | 10.48550/arxiv.2304.11547 |
format | Article |
creationdate | 2023-04-23 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
identifier | DOI: 10.48550/arxiv.2304.11547 |
language | eng |
recordid | cdi_arxiv_primary_2304_11547 |
source | arXiv.org |
subjects | Computer Science - Sound |
title | SAR: Self-Supervised Anti-Distortion Representation for End-To-End Speech Model |
url | https://arxiv.org/abs/2304.11547 |