Multi-local feature relation network for few-shot learning
Published in: Neural Computing & Applications, 2022-05, Vol. 34 (10), pp. 7393-7403
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Recently, few-shot learning has received considerable attention from researchers. Compared to deep learning, which requires abundant data for training, few-shot learning requires only a few labeled samples. Therefore, few-shot learning has been extensively used in scenarios in which a large number of samples cannot be obtained. However, effectively extracting features from a limited number of samples is the most important problem in few-shot learning. To address this problem, a multi-local feature relation network (MLFRNet) is proposed to improve the accuracy of few-shot image classification. First, we obtain local sub-images of each image by random cropping and use them to extract local features. Second, we propose support-query local feature attention by exploring the local feature relationships between the support and query sets. Using this attention, the importance of each class prototype's local features can be calculated to classify query data. Moreover, we explore local feature relationships within the support set and propose support-support local feature similarity. Using this similarity, we can adaptively determine the margin loss of the local features, which further improves network accuracy. Experiments on two benchmark datasets show that the proposed MLFRNet achieves state-of-the-art performance. In particular, on the miniImageNet dataset, the proposed method achieves 66.79% (1-shot) and 83.16% (5-shot) accuracy.
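The abstract describes three mechanisms: random cropping into local sub-images, support-query local feature attention, and support-support local feature similarity driving an adaptive margin. A minimal PyTorch-style sketch of how these pieces might fit together is given below; the function names, tensor shapes, and the exact attention and margin forms are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract. All names,
# tensor shapes, and the exact attention/margin forms are assumptions.
import torch
import torch.nn.functional as F

def random_local_crops(images, num_crops=4, crop_size=64):
    """Randomly crop local sub-images from a batch of images (B, C, H, W)."""
    B, C, H, W = images.shape
    crops = []
    for _ in range(num_crops):
        top = torch.randint(0, H - crop_size + 1, (1,)).item()
        left = torch.randint(0, W - crop_size + 1, (1,)).item()
        crops.append(images[:, :, top:top + crop_size, left:left + crop_size])
    return torch.stack(crops, dim=1)  # (B, num_crops, C, crop_size, crop_size)

def support_query_attention(support_feats, query_feats):
    """One plausible form of support-query local feature attention.

    support_feats: (N_way, K_shot, L, D) local embeddings of the support set
    query_feats:   (Q, L, D) local embeddings of the query set
    Returns (Q, N_way) classification logits.
    """
    protos = F.normalize(support_feats.mean(dim=1), dim=-1)  # (N, L, D) class prototypes
    q = F.normalize(query_feats, dim=-1)
    # Cosine similarity of every query local feature with every prototype local feature
    sim = torch.einsum('qld,nmd->qnlm', q, protos)           # (Q, N, L, L)
    best = sim.max(dim=2).values                             # (Q, N, L): best query match per prototype local
    attn = best.softmax(dim=-1)                              # importance of each prototype local feature
    return (best * attn).sum(dim=-1)                         # attention-weighted class scores

def adaptive_margins(support_feats, base=0.1, scale=0.4):
    """Support-support similarity mapped to per-class-pair margins (assumed
    form): classes whose local features look alike get a larger margin."""
    protos = F.normalize(support_feats.mean(dim=(1, 2)), dim=-1)  # (N, D)
    sim = protos @ protos.t()                                     # (N, N)
    return base + scale * sim.clamp(min=0)

# Toy 5-way 1-shot check with 4 local features of dimension 32:
support = torch.randn(5, 1, 4, 32)
query = torch.randn(10, 4, 32)
print(support_query_attention(support, query).shape)  # torch.Size([10, 5])
print(adaptive_margins(support).shape)                # torch.Size([5, 5])
```

In this reading, the attention weights each prototype's local features by how well they match the query, so background crops contribute little to the class score, and the margin between two classes grows with their local-feature similarity, forcing harder separation of confusable classes. The actual forms used by the paper may differ.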
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-021-06840-8