Label smoothing and task-adaptive loss function based on prototype network for few-shot learning

To address two shortcomings of the prototype network, namely that label information is not sufficiently reliable and that the hyperparameters of the loss function cannot adapt to changes in image feature information, we propose a method that combines label smoothing with task-adaptive hyperparameters. First, the label information of each image is processed with label smoothing regularization. Then, according to the classification task, the distance matrix of the image features and a logarithmic operation are used to fuse that distance matrix with the hyperparameters of the loss function. Finally, the hyperparameters are combined with the smoothed labels and the distance matrix for predictive classification. The method is validated on the miniImageNet, FC100 and tieredImageNet datasets. The results show that, compared with unsmoothed labels and fixed hyperparameters, the flexible loss-function hyperparameters improve few-shot classification accuracy by 2%–3%. These results indicate that the proposed method suppresses the interference of false labels and that flexible hyperparameters improve classification accuracy.
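As a rough illustration of the two ingredients the abstract describes, the sketch below implements standard label smoothing and a prototypical-network loss whose softmax scale is derived from the query-prototype distance matrix through a logarithmic operation. The function names and the exact fusion rule (1 / log1p of the mean distance) are assumptions made for illustration, not the authors' published formulation.

```python
# Illustrative sketch only: hypothetical names and an assumed fusion rule,
# not the authors' released code.
import torch
import torch.nn.functional as F


def smooth_labels(labels: torch.Tensor, n_classes: int, eps: float = 0.1) -> torch.Tensor:
    """Standard label smoothing: mix the one-hot target with a uniform
    distribution so no class receives probability exactly 0 or 1."""
    one_hot = F.one_hot(labels, n_classes).float()
    return one_hot * (1.0 - eps) + eps / n_classes


def class_prototypes(support: torch.Tensor, support_labels: torch.Tensor,
                     n_classes: int) -> torch.Tensor:
    """Each class prototype is the mean embedding of its support examples."""
    return torch.stack([support[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])


def task_adaptive_loss(query: torch.Tensor, query_labels: torch.Tensor,
                       protos: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over distance-based logits with a per-task scale.
    The scale (a stand-in for the paper's flexible hyperparameter) is an
    assumed log-based function of the query-prototype distance matrix."""
    dists = torch.cdist(query, protos)  # (n_query, n_classes) distance matrix
    # Assumed fusion rule: the softmax sharpness tracks the magnitude of the
    # distances in this episode instead of staying fixed across tasks.
    scale = 1.0 / torch.log1p(dists.mean().detach())
    log_p = F.log_softmax(-scale * dists, dim=1)
    targets = smooth_labels(query_labels, protos.size(0))
    return -(targets * log_p).sum(dim=1).mean()
```

For a 5-way 5-shot episode with 64-dimensional embeddings, task_adaptive_loss would take support embeddings of shape (25, 64), query embeddings of shape (75, 64), and the matching integer label vectors; the per-episode scale then plays the role the abstract assigns to the flexible hyperparameters.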

Bibliographic details

Published in: Neural Networks, 2022-12, Vol. 156, pp. 39-48
Main authors: Gao, Farong; Luo, Xingsheng; Yang, Zhangyi; Zhang, Qizhong
Format: Article
Language: English
Subjects: Deep learning; Few-shot learning; Flexible hyperparameters; Image classification; Improved loss function
Online access: Full text
DOI: 10.1016/j.neunet.2022.09.018
ISSN: 0893-6080
EISSN: 1879-2782
Publisher: Elsevier Ltd