Knowledge Driven Machine Learning Towards Interpretable Intelligent Prognostics and Health Management: Review and Case Study
Published in: | Chinese Journal of Mechanical Engineering, 2025-01, Vol. 38 (1), p. 5-31, Article 5 |
Main authors: | , , , , , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges related to limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enhancing the model with additional pattern information. In this paper, we review the latest developments in PHM, encapsulated under the concept of Knowledge Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, which includes scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap for their specific usage. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing. We analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM. |
ISSN: | 2192-8258, 1000-9345 |
DOI: | 10.1186/s10033-024-01173-8 |