Learning Not to Learn in the Presence of Noisy Labels
creator | Ziyin, Liu; Chen, Blair; Wang, Ru; Liang, Paul Pu; Salakhutdinov, Ruslan; Morency, Louis-Philippe; Ueda, Masahito |
description | Learning in the presence of label noise is a challenging yet important task: it is crucial to design models that are robust in the presence of mislabeled datasets. In this paper, we discover that a new class of loss functions called the gambler's loss provides strong robustness to label noise across various levels of corruption. We show that training with this loss function encourages the model to "abstain" from learning on the data points with noisy labels, resulting in a simple and effective method to improve robustness and generalization. In addition, we propose two practical extensions of the method: 1) an analytical early stopping criterion to approximately stop training before the memorization of noisy labels, as well as 2) a heuristic for setting hyperparameters which does not require knowledge of the noise corruption rate. We demonstrate the effectiveness of our method by achieving strong results across three image and text classification tasks as compared to existing baselines. |
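A note on the method: the gambler's loss named in the description is not spelled out in this record. As a rough orientation, here is a minimal PyTorch sketch of the abstention mechanism the abstract describes, assuming the model emits K + 1 logits with the last one reserved for an explicit "abstain" class and a loss of the form -log(p_target + p_abstain / o) for a payoff hyperparameter o. The function name, the default payoff value, and the last-logit convention are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def gamblers_loss(logits, targets, payoff=6.0):
    """Sketch of a gambler's loss with an explicit abstention output.

    Assumes `logits` has shape (batch, K + 1): K real classes plus one
    reserved "abstain" class in the last position. The abstention
    probability is credited toward the target at rate 1 / payoff, so the
    loss stays bounded on points the model abstains on, even when the
    label is noisy. `payoff` is typically required to satisfy
    1 < payoff <= K; the default here is purely illustrative.
    """
    probs = F.softmax(logits, dim=1)                      # (batch, K + 1)
    p_target = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_abstain = probs[:, -1]                              # reserved last class
    return -torch.log(p_target + p_abstain / payoff).mean()

# Illustrative usage: 10 real classes, so the network emits 11 logits.
logits = torch.randn(32, 11)
targets = torch.randint(0, 10, (32,))
loss = gamblers_loss(logits, targets, payoff=8.0)
```

Larger payoff values discount the abstention term, pushing the objective toward ordinary cross-entropy; smaller values make abstaining cheaper. Presumably, the paper's hyperparameter heuristic (extension 2 in the abstract) concerns setting this payoff without knowledge of the corruption rate.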
doi | 10.48550/arxiv.2002.06541 |
date | 2020-02-16 |
format | Article |
language | eng |
source | arXiv.org |
subjects | Computer Science - Information Theory; Computer Science - Learning; Mathematics - Information Theory; Statistics - Machine Learning |
title | Learning Not to Learn in the Presence of Noisy Labels |
url | https://arxiv.org/abs/2002.06541 |