Multi-local feature relation network for few-shot learning
Recently, few-shot learning has received considerable attention from researchers. Compared to deep learning, which requires abundant data for training, few-shot learning only requires a few labeled samples. Therefore, few-shot learning has been extensively used in scenarios in which a large number of samples cannot be obtained…
Saved in:
Published in: | Neural computing & applications 2022-05, Vol.34 (10), p.7393-7403 |
---|---|
Main Authors: | Ren, Li; Duan, Guiduo; Huang, Tianxi; Kang, Zhao |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 7403 |
---|---|
container_issue | 10 |
container_start_page | 7393 |
container_title | Neural computing & applications |
container_volume | 34 |
creator | Ren, Li ; Duan, Guiduo ; Huang, Tianxi ; Kang, Zhao |
description | Recently, few-shot learning has received considerable attention from researchers. Compared to deep learning, which requires abundant data for training, few-shot learning only requires a few labeled samples. Therefore, few-shot learning has been extensively used in scenarios in which a large number of samples cannot be obtained. However, effectively extracting features from a limited number of samples is the most important problem in few-shot learning. To address this limitation, a multi-local feature relation network (MLFRNet) is proposed to improve the accuracy of few-shot image classification. First, we obtain local sub-images of each image by random cropping, which are then used to obtain local features. Second, we propose support-query local feature attention by exploring local feature relationships between the support and query sets. Using this local feature attention, the importance of the local features of each class prototype can be calculated to classify query data. Moreover, we explore local feature relationships within the support set and propose support-support local feature similarity. Using this local feature similarity, we can adaptively determine the margin loss of the local features, which further improves the network accuracy. Experiments on two benchmark datasets show that the proposed MLFRNet achieves state-of-the-art performance. In particular, on the miniImageNet dataset, the proposed method achieves 66.79% (1-shot) and 83.16% (5-shot) accuracy. |
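The abstract's pipeline (crop local features, weight them by support-query attention, classify against weighted class prototypes) can be sketched in a toy form. This is an illustrative reconstruction under assumed tensor shapes, not the authors' MLFRNet implementation; all function and variable names here are hypothetical, and the adaptive margin loss described in the abstract is omitted:

```python
import numpy as np

def l2norm(x):
    """Normalize rows to unit length (small epsilon avoids division by zero)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def classify_query(support, query):
    """Toy support-query local feature attention (illustrative only).

    support: (n_way, k_shot, n_local, d) local features per class,
             e.g. embeddings of random crops of each support image.
    query:   (n_local, d) local features of one query image.

    Each support local feature is weighted by its best cosine match to the
    query's local features; the weighted mean of a class's local features
    forms a query-conditioned prototype. Returns the predicted class index.
    """
    q = l2norm(query)                                    # (n_local, d)
    q_mean = l2norm(query.mean(axis=0, keepdims=True))   # (1, d)
    scores = []
    for cls in support:                                  # (k_shot, n_local, d)
        feats = l2norm(cls.reshape(-1, cls.shape[-1]))   # (k_shot*n_local, d)
        sim = feats @ q.T                                # pairwise cosine sims
        w = sim.max(axis=1)                              # importance per local feature
        w = np.exp(w) / np.exp(w).sum()                  # softmax over local features
        proto = l2norm((w[:, None] * feats).sum(axis=0, keepdims=True))
        scores.append((proto @ q_mean.T).item())         # cosine to query mean
    return int(np.argmax(scores))
```

In this sketch, a support local feature that matches none of the query's crops receives a low softmax weight and so contributes little to the prototype, which mirrors the abstract's claim that attention decides the importance of each class prototype's local features when classifying a query.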
doi_str_mv | 10.1007/s00521-021-06840-8 |
format | Article |
publisher | London: Springer London |
eissn | 1433-3058 |
rights | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2021 |
orcidid | https://orcid.org/0000-0001-6857-7744 |
fulltext | fulltext |
identifier | ISSN: 0941-0643 |
ispartof | Neural computing & applications, 2022-05, Vol.34 (10), p.7393-7403 |
issn | 0941-0643 1433-3058 |
language | eng |
recordid | cdi_proquest_journals_2651905781 |
source | SpringerNature Journals |
subjects | Accuracy ; Artificial Intelligence ; Computational Biology/Bioinformatics ; Computational Science and Engineering ; Computer Science ; Data Mining and Knowledge Discovery ; Datasets ; Deep learning ; Feature extraction ; Image classification ; Image Processing and Computer Vision ; Machine learning ; Original Article ; Probability and Statistics in Computer Science ; Queries ; Similarity |
title | Multi-local feature relation network for few-shot learning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T09%3A04%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-local%20feature%20relation%20network%20for%20few-shot%20learning&rft.jtitle=Neural%20computing%20&%20applications&rft.au=Ren,%20Li&rft.date=2022-05-01&rft.volume=34&rft.issue=10&rft.spage=7393&rft.epage=7403&rft.pages=7393-7403&rft.issn=0941-0643&rft.eissn=1433-3058&rft_id=info:doi/10.1007/s00521-021-06840-8&rft_dat=%3Cproquest_cross%3E2651905781%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2651905781&rft_id=info:pmid/&rfr_iscdi=true |