Coarse-to-Fine Few-Shot Defect Recognition With Dynamic Weighting and Joint Metric
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, p. 1-10
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Deep-learning-based methods have been widely used in defect recognition and have achieved great success. However, these methods require large-scale datasets, whereas in real industrial scenarios the training samples are often insufficient because defect data acquisition is difficult and time-consuming. Therefore, in this article, few-shot learning theory is introduced to address this challenge. We propose to achieve few-shot defect recognition (FSDR) in a coarse-to-fine manner with dynamic weighting and a joint metric. In the coarse-grained phase, following feature embedding, we propose an affine dynamic weighting (ADW) module that controls the embedding output of all channels according to the global context. Through this dynamic weighting, the model can better extract discriminative features with learnable affine parameters. In the fine-grained phase, we propose a joint metric method that combines a Kullback-Leibler (K-L) divergence-based covariance metric module (KLCM) with a cosine classifier. In this method, KLCM exploits the covariance matrix of the local descriptors to represent the distribution of a specific defect class and then measures the similarity between support and query defect images. A novel FSDR dataset containing a variety of defects from four different surfaces is constructed to evaluate our method. The results show state-of-the-art performance compared with mainstream methods.
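The three components named in the abstract can be sketched in a minimal, illustrative form. Only the abstract is available here, so every function below is an assumption: the ADW module is guessed as sigmoid channel gating on a globally pooled context with learnable affine parameters, KLCM is stood in for by the closed-form K-L divergence between two multivariate Gaussians built from local descriptors, and the cosine classifier is plain normalized dot-product scoring. None of these is the paper's actual formulation.

```python
import numpy as np

def affine_dynamic_weighting(x, gamma, beta):
    """Hypothetical ADW sketch: gate each channel of a (C, H, W) feature
    map by a sigmoid of a learnable affine transform (gamma, beta) of its
    global-average-pooled context."""
    context = x.mean(axis=(1, 2))                              # (C,) global context
    weights = 1.0 / (1.0 + np.exp(-(gamma * context + beta)))  # sigmoid gate per channel
    return x * weights[:, None, None]

def kl_between_gaussians(mu1, cov1, mu2, cov2):
    """Stand-in for a covariance-based metric: closed-form K-L divergence
    between Gaussians fitted to support/query local descriptors."""
    d = mu1.shape[0]
    inv2 = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ cov1)
                  + diff @ inv2 @ diff
                  - d
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))

def cosine_score(query, prototype):
    """Cosine classifier score between a query embedding and a class prototype."""
    denom = np.linalg.norm(query) * np.linalg.norm(prototype) + 1e-8
    return query @ prototype / denom
```

In a joint metric of this kind, the K-L term compares second-order statistics (distribution shape) while the cosine term compares first-order direction, so combining them can separate classes that one measure alone confuses.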
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2022.3193204