Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages
We introduce Wav2Seq, the first self-supervised approach to pre-train both parts of encoder-decoder models for speech data. We induce a pseudo language as a compact discrete representation, and formulate a self-supervised pseudo speech recognition task -- transcribing audio inputs into pseudo subword sequences. This process stands on its own, or can be applied as low-cost second-stage pre-training. We experiment with automatic speech recognition (ASR), spoken named entity recognition, and speech-to-text translation. We set new state-of-the-art results for end-to-end spoken named entity recognition, and show consistent improvements on 20 language pairs for speech-to-text translation, even when competing methods use additional text data for training. Finally, on ASR, our approach enables encoder-decoder methods to benefit from pre-training for all parts of the network, and shows comparable performance to highly optimized recent methods.
creator | Wu, Felix; Kim, Kwangyoun; Watanabe, Shinji; Han, Kyu; McDonald, Ryan; Weinberger, Kilian Q; Artzi, Yoav |
description | We introduce Wav2Seq, the first self-supervised approach to pre-train both
parts of encoder-decoder models for speech data. We induce a pseudo language as
a compact discrete representation, and formulate a self-supervised pseudo
speech recognition task -- transcribing audio inputs into pseudo subword
sequences. This process stands on its own, or can be applied as low-cost
second-stage pre-training. We experiment with automatic speech recognition
(ASR), spoken named entity recognition, and speech-to-text translation. We set
new state-of-the-art results for end-to-end spoken named entity recognition,
and show consistent improvements on 20 language pairs for speech-to-text
translation, even when competing methods use additional text data for training.
Finally, on ASR, our approach enables encoder-decoder methods to benefit from
pre-training for all parts of the network, and shows comparable performance to
highly optimized recent methods. |
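The description above states the method only at a high level, so the following is a minimal, non-authoritative sketch of what "inducing a pseudo language" can look like in practice. It assumes a common recipe that the abstract does not confirm: cluster frame-level features from a pre-trained speech encoder with k-means, collapse consecutive repeated cluster IDs, and apply byte-pair encoding (BPE) over the resulting unit strings to obtain pseudo subwords. All function names, parameters, and the random stand-in features are hypothetical illustrations, not the authors' implementation.

```python
# Assumed pseudo-language induction pipeline: k-means units -> dedup -> BPE.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def fit_quantizer(feature_list, n_units=25, seed=0):
    """Fit k-means over pooled frame-level features to get a discrete unit inventory."""
    pooled = np.concatenate(feature_list, axis=0)
    return KMeans(n_clusters=n_units, n_init=10, random_state=seed).fit(pooled)


def deduplicate(units):
    """Collapse consecutive repeats, e.g. [3, 3, 7, 7, 2] -> [3, 7, 2]."""
    return [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]


def learn_bpe_merges(corpus, n_merges):
    """Greedy BPE over unit sequences: repeatedly merge the most frequent
    adjacent pair, producing pseudo subwords (tuples of base units)."""
    corpus = [[(u,) for u in seq] for seq in corpus]
    merges = []
    for _ in range(n_merges):
        pairs = Counter(
            (seq[i], seq[i + 1]) for seq in corpus for i in range(len(seq) - 1)
        )
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        merged = a + b
        new_corpus = []
        for seq in corpus:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_corpus.append(out)
        corpus = new_corpus
    return merges, corpus


if __name__ == "__main__":
    # Random arrays stand in for frame-level speech features (e.g. the output
    # of a pre-trained speech encoder); shapes are (num_frames, feature_dim).
    rng = np.random.default_rng(0)
    utterances = [rng.normal(size=(200, 16)) for _ in range(8)]

    quantizer = fit_quantizer(utterances)
    unit_seqs = [deduplicate(quantizer.predict(f).tolist()) for f in utterances]
    merges, tokenized = learn_bpe_merges(unit_seqs, n_merges=50)
    print(f"{len(merges)} merges; first utterance has {len(tokenized[0])} pseudo subwords")
    # The self-supervised "pseudo ASR" task would then train an encoder-decoder
    # model to map each utterance's raw audio to its pseudo-subword sequence.
```

Deduplication and BPE both shorten the discrete target sequences, which matches the abstract's framing of the pseudo language as a "compact discrete representation"; in the pseudo speech recognition task, these pseudo-subword sequences would serve as decoder targets in place of real transcripts.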
doi_str_mv | 10.48550/arxiv.2205.01086 |
format | Article |
creationdate | 2022-05-02 |
rights | http://creativecommons.org/licenses/by/4.0 (free to read) |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2205.01086 |
language | eng |
recordid | cdi_arxiv_primary_2205_01086 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Sound |
title | Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages |
url | https://arxiv.org/abs/2205.01086 |