Multiscale attentional residual neural network framework for remaining useful life prediction of bearings
Published in: Measurement : journal of the International Measurement Confederation, 2021-06, Vol. 177, p. 109310, Article 109310
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract:
• An attention mapping can resist noise and improve feature-extraction capability.
• A multiscale pooling method can extract multiscale features.
• The method combines EEMD, ResNet, attention mapping, and multiscale pooling.
• The ResNet-MA framework achieves high accuracy in predicting the RUL of rolling bearings.
• The ResNet-MA framework learns from raw data and needs no prior knowledge.
Vibration signals are widely used for remaining useful life (RUL) prediction, but traditional deep learning methods struggle to extract degradation features from them effectively while avoiding the gradient problem. This paper proposes a residual neural network framework with multiscale attention mapping (ResNet-MA), designed around the vibration signal's characteristics, to address these problems. The framework first takes as input the vibration signal decomposed by the ensemble empirical mode decomposition (EEMD) method. To improve the network's ability to extract degradation features, ResNet-MA employs channel attention mapping, time attention mapping, and multiscale pooling. Experimental verification on the available data sets shows that the proposed method's prediction accuracy is 14% higher than that of the residual network before the improvement, and 3–6% higher than state-of-the-art related algorithms.
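The multiscale pooling and channel-attention ideas mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: the paper's attention mapping is learned during training, whereas this sketch uses a fixed sigmoid gate, and the function names are illustrative only.

```python
import math


def channel_attention(channels):
    """Weight each channel by a sigmoid gate of its global average.

    Simplified squeeze-and-excitation-style gating: channels with a
    stronger average response are passed through nearly unchanged,
    while weak (noise-dominated) channels are attenuated.
    """
    out = []
    for ch in channels:
        avg = sum(ch) / len(ch)              # "squeeze": global average per channel
        gate = 1.0 / (1.0 + math.exp(-avg))  # "excite": sigmoid gate in (0, 1)
        out.append([gate * x for x in ch])
    return out


def multiscale_pool(signal, scales=(2, 4, 8)):
    """Average-pool one channel at several window sizes and concatenate.

    Pooling the same signal at multiple scales yields features that
    capture both short- and long-range degradation trends.
    """
    feats = []
    for s in scales:
        feats.extend(sum(signal[i:i + s]) / s
                     for i in range(0, len(signal) - s + 1, s))
    return feats
```

For example, `multiscale_pool([1, 2, 3, 4, 5, 6, 7, 8], scales=(2, 4))` concatenates the four window-size-2 averages with the two window-size-4 averages into a single six-element feature vector.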
ISSN: 0263-2241, 1873-412X
DOI: 10.1016/j.measurement.2021.109310