An Explainable Regression Framework for Predicting Remaining Useful Life of Machines
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Prediction of a machine's Remaining Useful Life (RUL) is one of the key tasks in predictive maintenance. The task is treated as a regression problem in which Machine Learning (ML) algorithms are used to predict the RUL of machine components. These ML algorithms are generally used as black boxes, with the focus entirely on performance and without identifying the potential causes behind the algorithms' decisions or their working mechanism. We believe that performance alone (in terms of Mean Squared Error (MSE), etc.) is not enough to build stakeholders' trust in ML predictions; rather, more insight into the causes behind the predictions is needed. To this end, in this paper we explore the potential of Explainable AI (XAI) techniques by proposing an explainable regression framework for the prediction of machines' RUL. We also evaluate several ML algorithms, including classical and Neural Network (NN) based solutions, for the task. For the explanations, we rely on two model-agnostic XAI methods, namely Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). We believe this work will provide a baseline for future research in the domain.
DOI: 10.48550/arxiv.2204.13574
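
To make the kind of pipeline described in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' framework: it trains a scikit-learn regressor on synthetic run-to-failure-style data, reports MSE, and explains individual RUL predictions with SHAP and LIME. The feature names, synthetic data, and choice of model are assumptions made purely for illustration.

```python
# Hypothetical illustration only -- not the paper's implementation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Assumed sensor features for a degrading machine component.
feature_names = ["temperature", "vibration", "pressure", "operating_hours"]
X = rng.normal(size=(1000, len(feature_names)))
# Synthetic RUL target: decreases with operating hours and vibration, plus noise.
y = 100 - 20 * X[:, 3] - 10 * X[:, 1] + rng.normal(scale=5, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A classical regression model standing in for the evaluated ML algorithms.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MSE:", mean_squared_error(y_test, model.predict(X_test)))

# SHAP: additive per-feature contributions to one predicted RUL value.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])
print(dict(zip(feature_names, shap_values[0])))

# LIME: a local linear surrogate fitted around the same test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="regression"
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict, num_features=len(feature_names)
)
print(explanation.as_list())
```

Both explainers are model agnostic in use here: SHAP attributes each prediction additively to the input features, while LIME fits a local surrogate around the instance, so either view can be reported alongside performance metrics such as MSE.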