Self-Adaptation Graph Attention Network via Meta-Learning for Machinery Fault Diagnosis With Few Labeled Data

Bibliographic details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-11
Authors: Long, Jianyu; Zhang, Rongxin; Yang, Zhe; Huang, Yunwei; Liu, Yu; Li, Chuan
Format: Article
Language: English
Online access: Order full text
Description
Summary: Effective application of fault diagnosis models requires that new fault types can be recognized rapidly after they occur only a few times, or even just once. To this end, a self-adaptation graph attention network via meta-learning (SGANM) is proposed. Specifically, based on a collected large-scale labeled dataset containing abundant disjoint categories (i.e., none of the categories in the target diagnosis task is included), meta-learning is used to train a meta-learner across abundant randomly generated meta-tasks, and the meta-learner can rapidly generalize to the target fault diagnosis task containing only a few labeled samples. To fully exploit the relationships among the samples in the support set and query set of each meta-task, an SGAN is designed to realize the meta-learner. Effective strategies, including spatial-temporal graph-based node initial embedding, 2-D edge embedding, and a multihead masked attention mechanism for embedding propagation, give the proposed meta-learner a powerful meta-knowledge learning ability. Experiments are conducted on a benchmark dataset and a dataset collected from a practical experimental platform, and the proposed SGANM achieves competitive performance compared with other few-shot learning (FSL) algorithms.
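The abstract's central mechanism, propagating node embeddings over a meta-task graph with multihead masked attention, can be sketched in a few lines. The following is a minimal NumPy illustration of that generic mechanism, not the paper's actual SGAN layer: the projection weights are random stand-ins for learned parameters, and the adjacency mask (which in the paper would connect support and query samples of a meta-task) is assumed to be a given binary matrix.

```python
import numpy as np

def multihead_masked_attention(node_emb, adj_mask, num_heads=4, seed=0):
    """One round of embedding propagation via multihead masked attention.

    node_emb: (N, D) array of node embeddings, D divisible by num_heads.
    adj_mask: (N, N) binary array; entry (i, j) = 1 lets node i attend to j.
    The per-head projections are randomly initialized stand-ins, not
    trained parameters.
    """
    rng = np.random.default_rng(seed)
    n_nodes, dim = node_emb.shape
    head_dim = dim // num_heads
    outputs = []
    for _ in range(num_heads):
        # Per-head query/key/value projections.
        w_q = rng.standard_normal((dim, head_dim)) / np.sqrt(dim)
        w_k = rng.standard_normal((dim, head_dim)) / np.sqrt(dim)
        w_v = rng.standard_normal((dim, head_dim)) / np.sqrt(dim)
        q, k, v = node_emb @ w_q, node_emb @ w_k, node_emb @ w_v
        scores = q @ k.T / np.sqrt(head_dim)           # (N, N) attention logits
        scores = np.where(adj_mask > 0, scores, -1e9)  # mask out non-edges
        scores -= scores.max(axis=1, keepdims=True)    # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)
        outputs.append(attn @ v)                       # aggregate neighbor values
    return np.concatenate(outputs, axis=1)             # (N, D)
```

For a 5-way 1-shot meta-task with one query sample, `adj_mask` would be a dense 6x6 matrix (including self-loops), so each node attends to every sample in the episode; masking becomes meaningful when only some support-query edges are allowed.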
ISSN:0018-9456
1557-9662
DOI:10.1109/TIM.2022.3181894