An Exploration into why Output Regularization Mitigates Label Noise

Label noise presents a real challenge for supervised learning algorithms. Consequently, mitigating label noise has attracted immense research in recent years. Noise-robust losses are one of the more promising approaches for dealing with label noise, as these methods only require changing the loss function and do not require changing the design of the classifier itself, which can be expensive in terms of development time. In this work we focus on losses that use output regularization (such as label smoothing and entropy). Although these losses perform well in practice, their ability to mitigate label noise lacks mathematical rigor. We aim to close this gap by showing that losses that incorporate an output regularization term become symmetric as the regularization coefficient goes to infinity. We argue that the regularization coefficient can be seen as a hyper-parameter controlling the symmetry, and thus the noise robustness, of the loss function.
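
To make the abstract's claim concrete, here is a minimal numerical sketch. It is not code from the paper; the particular loss form below (a label-smoothing-style regularizer added to cross-entropy and normalized by 1/(1+lambda)) is an illustrative assumption. It shows that as the regularization coefficient lambda grows, the loss values obtained for a fixed prediction under each possible label converge, i.e. the loss becomes increasingly insensitive to which label was supplied, which is the informal sense in which a large coefficient pushes the loss toward symmetry and hence robustness to corrupted labels.

```python
import numpy as np

def output_regularized_loss(p, y, lam):
    """Cross-entropy plus a label-independent output-regularization term.

    p   : predicted class probabilities, shape (K,)
    y   : (possibly noisy) integer label
    lam : regularization coefficient (lambda in the abstract)

    The regularizer used here, -(1/K) * sum_k log p_k, is a
    label-smoothing-style term; it does not depend on y.
    """
    ce = -np.log(p[y])                      # depends on the given label
    reg = -np.mean(np.log(p))               # label-independent
    return (ce + lam * reg) / (1.0 + lam)   # normalized so the scale stays comparable

# A fixed prediction over K = 4 classes.
p = np.array([0.7, 0.1, 0.1, 0.1])
K = len(p)

for lam in [0.0, 1.0, 10.0, 100.0]:
    losses = np.array([output_regularized_loss(p, y, lam) for y in range(K)])
    # As lambda grows, the per-label losses converge: which label was
    # supplied (clean or corrupted) matters less and less.
    spread = losses.max() - losses.min()
    print(f"lambda={lam:6.1f}  per-label losses={np.round(losses, 3)}  spread={spread:.3f}")
```

Running this, the spread between the per-label losses shrinks in proportion to 1/(1+lambda), so in the limit the loss value no longer depends on the label at all.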

Bibliographic Details
Published in: arXiv.org, 2021-04
Main authors: Shoham, Neta; Avidor, Tomer; Israel, Nadav
Format: Article
Language: English
Subjects: Algorithms; Machine learning; Mathematical analysis; Noise; Regularization; Robustness (mathematics)
Online access: Full text
EISSN: 2331-8422
Source: Free E-Journals