UNM: A Universal Approach for Noisy Multi-Label Learning

Multi-label image classification relies on large-scale, well-maintained datasets, which are easily mislabeled for various subjective reasons. Existing methods for coping with label noise usually focus on improving model robustness under single-label noise. However, compared with noisy single-label learning, noisy multi-label learning is both more practical and more challenging. To reduce the negative impact of noisy multi-label annotations, we propose a universal approach for noisy multi-label learning (UNM). UNM employs a label-wise embedding network that exploits the semantic alignment between label embeddings and their corresponding output features to learn robust feature representations. In addition, the co-occurrence structure of multi-labels is mined to regularize the noisy network predictions. By cyclically changing the fitting status of the label-wise embedding network, UNM distinguishes noisy samples and generates pseudo labels for them. As a result, UNM provides an effective way to exploit label-wise features and semantic label embeddings in noisy scenarios. To verify its generalizability, we also evaluate the method on Partial Multi-label Learning (PML) and Multi-label Learning with Missing Labels (MLML). Extensive experiments on benchmark datasets including Microsoft COCO, Pascal VOC, and Visual Genome validate the proposed method.
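
The full text sits behind the publisher link, so the following is only a minimal, illustrative sketch of the kind of label-wise embedding head the abstract describes: learnable label embeddings attend over backbone features to produce one feature vector per label, and a cosine term encourages each label-wise feature to align with its embedding. All names, dimensions, and the attention/alignment details are assumptions made for illustration, not the authors' implementation.

```python
# Sketch of a label-wise embedding head (NOT the authors' code).
# Assumptions: flattened spatial features from a CNN backbone, one learnable
# embedding per label, attention pooling per label, plus a cosine-alignment term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelWiseHead(nn.Module):
    def __init__(self, num_labels: int, feat_dim: int, embed_dim: int = 256):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, embed_dim)  # one embedding per label
        self.proj = nn.Linear(feat_dim, embed_dim)               # project backbone features
        self.cls = nn.Linear(embed_dim, 1)                       # per-label binary logit

    def forward(self, feats: torch.Tensor):
        # feats: (B, HW, feat_dim) flattened spatial features
        f = self.proj(feats)                                      # (B, HW, E)
        q = self.label_embed.weight                               # (L, E)
        attn = torch.softmax(f @ q.t(), dim=1)                    # (B, HW, L) spatial attention per label
        label_feats = torch.einsum("bhl,bhe->ble", attn, f)       # (B, L, E) label-wise features
        logits = self.cls(label_feats).squeeze(-1)                # (B, L)
        # "semantic alignment": pull each label-wise feature toward its label embedding
        align = 1 - F.cosine_similarity(label_feats, q.unsqueeze(0), dim=-1)  # (B, L)
        return logits, align.mean()


# Usage: 80 COCO-style labels, a 7x7 feature map flattened to 49 tokens of dim 2048.
head = LabelWiseHead(num_labels=80, feat_dim=2048)
logits, align_loss = head(torch.randn(2, 49, 2048))
```

In the paper's setting, the per-label logits would presumably be trained with a noise-aware multi-label loss and the alignment term added as a regularizer; the co-occurrence regularization and the cyclic noisy-sample selection mentioned in the abstract are separate components not sketched here.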

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-09, Vol. 36 (9), pp. 4968-4980
Authors: Chen, Jia-Yao; Li, Shao-Yuan; Huang, Sheng-Jun; Chen, Songcan; Wang, Lei; Xie, Ming-Kun
Format: Article
Language: English
Subjects: Computational modeling; Correlation; Image classification; Label refinement; Multi-label classification; Noise measurement; Noisy labels; Semantics; Task analysis; Training
DOI: 10.1109/TKDE.2024.3373500
ISSN: 1041-4347
EISSN: 1558-2191
Source: IEEE Electronic Library (IEL)