Rethinking Perturbations in Encoder-Decoders for Fast Training

We often use perturbations to regularize neural models. For neural encoder-decoders, previous studies applied scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations, but these methods require considerable computational time. Thus, this study addresses the question of whether these approaches are efficient enough with respect to training time. We compare several perturbations in sequence-to-sequence problems with respect to computational time. Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve scores comparable to (or better than) the recently proposed perturbations, even though these simple methods are faster. Our code is publicly available at https://github.com/takase/rethink_perturbations.
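For context, the two inexpensive perturbations the abstract highlights, word dropout and random replacement of input tokens, each amount to a few lines of tensor manipulation. The sketch below is a minimal PyTorch illustration, not the authors' reference implementation (see the repository linked in the abstract for that); the function names, the choice to apply word dropout at the embedding level, and the uniform sampling over the full vocabulary are assumptions made here for illustration.

```python
import torch

def word_dropout(embeddings: torch.Tensor, drop_prob: float) -> torch.Tensor:
    """Zero out whole token embeddings at random, in the spirit of word
    dropout (Gal and Ghahramani, 2016). embeddings: (batch, seq_len, dim)."""
    if drop_prob <= 0.0:
        return embeddings
    # Sample one keep/drop decision per token position, then broadcast over the embedding dim.
    keep = (torch.rand(embeddings.shape[:2], device=embeddings.device)
            >= drop_prob).to(embeddings.dtype)
    return embeddings * keep.unsqueeze(-1)

def random_token_replacement(tokens: torch.Tensor, replace_prob: float,
                             vocab_size: int) -> torch.Tensor:
    """Replace input token IDs with IDs sampled uniformly from the
    vocabulary. tokens: (batch, seq_len) integer IDs. (Hypothetical helper,
    not the paper's exact scheme.)"""
    mask = torch.rand(tokens.shape, device=tokens.device) < replace_prob
    random_ids = torch.randint(0, vocab_size, tokens.shape, device=tokens.device)
    return torch.where(mask, random_ids, tokens)

if __name__ == "__main__":
    # Toy example: a batch of 2 sequences of 8 token IDs from a 1000-word vocabulary.
    tokens = torch.randint(0, 1000, (2, 8))
    embeddings = torch.nn.Embedding(1000, 16)(tokens)
    print(random_token_replacement(tokens, replace_prob=0.1, vocab_size=1000))
    print(word_dropout(embeddings, drop_prob=0.1).shape)
```

Both perturbations act only on the input side and add no extra forward or backward passes, which is why they are cheap compared with scheduled sampling or adversarial perturbations.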

Bibliographic Details
Main Authors: Takase, Sho; Kiyono, Shun
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Learning
DOI: 10.48550/arxiv.2104.01853
Date: 2021-04-05
Source: arXiv.org
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0
Online Access: https://arxiv.org/abs/2104.01853