Explainable Unsupervised Machine Learning for Cyber-Physical Systems

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 131824-131843
Main Authors: Wickramasinghe, Chathurika S.; Amarasinghe, Kasun; Marino, Daniel L.; Rieger, Craig; Manic, Milos
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract: Cyber-Physical Systems (CPSs) play a critical role in our modern infrastructure due to their capability to connect computing resources with physical systems. As such, topics such as reliability, performance, and security of CPSs continue to receive increased attention from the research community. CPSs produce massive amounts of data, creating opportunities to use predictive Machine Learning (ML) models for performance monitoring and optimization, preventive maintenance, and threat detection. However, the "black-box" nature of complex ML models is a drawback when used in safety-critical systems such as CPSs. While explainable ML has been an active research area in recent years, much of the work has been focused on supervised learning. As CPSs rapidly produce massive amounts of unlabeled data, relying on supervised learning alone is not sufficient for data-driven decision making in CPSs. Therefore, if we are to maximize the use of ML in CPSs, it is necessary to have explainable unsupervised ML models. In this paper, we outline how unsupervised explainable ML could be used within CPSs. We review the existing work in unsupervised ML, present initial desiderata of explainable unsupervised ML for CPS, and present a Self-Organizing Maps based explainable clustering methodology which generates global and local explanations. We evaluate the fidelity of the generated explanations using feature perturbation techniques. The results show that the proposed method identifies the most important features responsible for the decision-making process of Self-Organizing Maps. Further, we demonstrate that explainable Self-Organizing Maps are a strong candidate for explainable unsupervised machine learning by comparing their capabilities and limitations with current explainable unsupervised methods.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3112397
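
The abstract describes a Self-Organizing Map (SOM) based explainable clustering method that produces global and local explanations and checks their fidelity with feature perturbation. The paper itself defines the actual methodology; the fragment below is only a minimal NumPy sketch of the general idea, not the authors' implementation. Everything in it is an assumption made for illustration: the toy data, the grid size and training schedule, the local relevance heuristic (per-feature contribution to preferring the best-matching unit over the runner-up), and the perturbation-based fidelity check.

```python
# Minimal illustrative sketch (NOT the paper's method): a small SOM in NumPy,
# a simple local feature-relevance heuristic, and a perturbation fidelity check.
import numpy as np

rng = np.random.default_rng(0)


def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a rectangular SOM; returns prototypes of shape (gx, gy, n_features)."""
    gx, gy = grid
    weights = rng.random((gx, gy, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit (BMU): neuron whose prototype is closest to x.
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (gx, gy))
            # Linearly decay learning rate and neighborhood width.
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            # Gaussian neighborhood pulls the BMU and nearby neurons toward x.
            h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights


def best_two_units(weights, x):
    """Return the BMU and the runner-up unit for sample x."""
    gx, gy, _ = weights.shape
    order = np.argsort(((weights - x) ** 2).sum(-1).ravel())
    return np.unravel_index(order[0], (gx, gy)), np.unravel_index(order[1], (gx, gy))


def feature_relevance(weights, x):
    """Local explanation heuristic (assumed for this sketch): how much each
    feature contributes to preferring the BMU over the runner-up unit."""
    bmu, runner_up = best_two_units(weights, x)
    return (x - weights[runner_up]) ** 2 - (x - weights[bmu]) ** 2


def perturbation_fidelity(weights, data, top_k=2, noise=2.0):
    """Fidelity check: perturbing the top-k most relevant features should flip
    the BMU more often than perturbing the k least relevant features."""
    flips = {"top": 0, "bottom": 0}
    for x in data:
        order = np.argsort(feature_relevance(weights, x))
        for name, idx in (("top", order[-top_k:]), ("bottom", order[:top_k])):
            x_pert = x.copy()
            x_pert[idx] += rng.normal(0.0, noise, size=top_k)
            flips[name] += best_two_units(weights, x_pert)[0] != best_two_units(weights, x)[0]
    return {k: v / len(data) for k, v in flips.items()}


# Toy stand-in for unlabeled CPS sensor data: two 4-dimensional Gaussian blobs.
data = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(5.0, 1.0, (100, 4))])
som = train_som(data)
print("BMU flip rates:", perturbation_fidelity(som, data))
```

With this toy setup, perturbing the features ranked most relevant should change the winning neuron noticeably more often than perturbing the least relevant ones; that contrast is the kind of fidelity evidence the abstract refers to, evaluated here only in a simplified, assumed form.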