Lightweight fault diagnosis method in embedded system based on knowledge distillation


Bibliographic details
Published in: Journal of Mechanical Science and Technology, 2023, 37(11), pp. 5649-5660
Authors: Gong, Ran; Wang, Chenlin; Li, Jinxiao; Xu, Yi
Format: Article
Language: English
Abstract

Deep learning (DL) has garnered attention in mechanical device health management for its ability to accurately identify faults and predict component life. However, its high computational cost poses a significant challenge for resource-limited embedded devices. To address this issue, we propose a lightweight fault diagnosis model based on knowledge distillation. The model employs a complex residual network with high classification accuracy as the teacher and a simple combinatorial convolutional network as the student. The student model shares the teacher's overall structure but has fewer layers and replaces the original convolutions with pixel-wise and channel-wise convolutions. The student learns the probability distribution of the teacher model's output layer, which improves its fault classification accuracy while compressing the model; this process is called knowledge distillation. Combining the lightweight model structure with knowledge-distillation training yields a model that not only achieves higher classification accuracy than other small-sized classical models but also infers faster on embedded devices.
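The two ideas in the abstract can be illustrated with a short sketch: (1) the distillation objective, where the student matches the temperature-softened output distribution of the teacher, and (2) the parameter savings from replacing a standard convolution with a channel-wise (depthwise) plus pixel-wise (1x1) pair. This is a minimal, framework-free sketch under standard Hinton-style distillation assumptions; the temperature, layer sizes, and logits below are illustrative, not values from the paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student outputs.

    Scaled by T^2 so its gradient magnitude stays comparable to a
    hard-label loss (common practice in knowledge distillation).
    """
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return temperature ** 2 * kl.mean()

def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Channel-wise k x k conv followed by a pixel-wise 1 x 1 conv."""
    return c_in * k * k + c_in * c_out

# Toy example: 2 samples, 4 fault classes (hypothetical logits).
teacher = np.array([[5.0, 1.0, 0.5, 0.2],
                    [0.1, 4.0, 0.3, 0.2]])
student = np.array([[4.0, 1.5, 0.5, 0.1],
                    [0.2, 3.5, 0.4, 0.1]])
loss = distillation_loss(student, teacher)

# Example layer: 64 -> 128 channels, 3 x 3 kernel.
standard = conv_params(64, 128, 3)      # 73,728 weights
separable = separable_params(64, 128, 3)  # 8,768 weights
```

Minimizing this loss pulls the student's soft predictions toward the teacher's, while the separable convolution cuts the layer's weight count by roughly a factor of eight in this example, which is where the faster embedded inference comes from.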
ISSN: 1738-494X, 1976-3824
DOI: 10.1007/s12206-023-1007-3