Label Confusion Learning to Enhance Text Classification Models

Representing a true label as a one-hot vector is a common practice in training text classification models. However, the one-hot representation may not adequately reflect the relation between instances and labels, as labels are often not completely independent and instances may relate to multiple labels in practice. Such inadequate one-hot representations tend to train the model to be over-confident, which can result in arbitrary predictions and model overfitting, especially on confused datasets (datasets with very similar labels) or noisy datasets (datasets with labeling errors). While training models with label smoothing (LS) can ease this problem to some degree, it still fails to capture the realistic relations among labels. In this paper, we propose a novel Label Confusion Model (LCM) as an enhancement component for current popular text classification models. LCM learns label confusion to capture semantic overlap among labels by calculating the similarity between instances and labels during training, and generates a better label distribution to replace the original one-hot label vector, thus improving the final classification performance. Extensive experiments on five text classification benchmark datasets demonstrate the effectiveness of LCM for several widely used deep learning classification models. Further experiments also verify that LCM is especially helpful for confused or noisy datasets and superior to the label smoothing method.
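In symbols (a sketch: C is the number of classes, y the true class, and the smoothing rate epsilon and mixing weight alpha are illustrative hyperparameters, not values taken from the paper), the three training targets contrasted above are:

    p_i^{onehot} = \mathbb{1}[i = y]
    p_i^{LS}     = (1 - \epsilon) \, \mathbb{1}[i = y] + \epsilon / C
    p^{LCM}      = \mathrm{softmax}(\alpha \cdot p^{onehot} + c)

Label smoothing redistributes a fixed epsilon of probability mass uniformly over all C labels, whereas LCM learns the confusion distribution c from instance-label similarity during training.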

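A minimal sketch of that mechanism in PyTorch, assuming a classifier that exposes a pooled text representation; all names, dimensions, and the default alpha are illustrative assumptions, not the authors' exact implementation:

import torch
import torch.nn.functional as F

class LabelConfusionModel(torch.nn.Module):
    """Illustrative LCM-style component: softens a one-hot target using
    learned instance-label similarity (a sketch, not the paper's code)."""

    def __init__(self, num_classes: int, hidden_dim: int, alpha: float = 4.0):
        super().__init__()
        # One learnable embedding per label, in the encoder's representation space.
        self.label_emb = torch.nn.Embedding(num_classes, hidden_dim)
        self.alpha = alpha  # how strongly the one-hot target dominates the mix

    def forward(self, instance_repr: torch.Tensor, onehot: torch.Tensor) -> torch.Tensor:
        # Similarity of each instance to every label: (batch, hidden) @ (hidden, C).
        sim = instance_repr @ self.label_emb.weight.T
        confusion = F.softmax(sim, dim=-1)  # learned label-confusion distribution
        # Mix with the one-hot vector and renormalize: this simulated label
        # distribution replaces the one-hot target during training.
        return F.softmax(self.alpha * onehot + confusion, dim=-1)

# Usage with any classifier that yields logits and a pooled representation:
#   soft_target = lcm(pooled_repr, F.one_hot(labels, num_classes).float())
#   loss = F.kl_div(F.log_softmax(logits, dim=-1), soft_target, reduction="batchmean")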

Bibliographic Details

Main Authors: Guo, Biyang; Han, Songqiao; Han, Xiao; Huang, Hailiang; Lu, Ting
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Online Access: Order full text
DOI: 10.48550/arxiv.2012.04987
Date: 2020-12-09
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (open access)
Full Text: https://arxiv.org/abs/2012.04987
Source: arXiv.org