Interpretable Mamdani neuro-fuzzy model through context awareness and linguistic adaptation

Bibliographic Details
Published in: Expert Systems with Applications, 2022-03, Vol. 189, p. 116098, Article 116098
Authors: Navarro-Almanza, Raul; Sanchez, Mauricio A.; Castro, Juan R.; Mendoza, Olivia; Licea, Guillermo
Format: Article
Language: English
Online access: Full text
Description
Abstract: Interpretable machine learning is trending, as it aims to build a human-understandable decision process. There are two main types of machine learning systems: white-box and black-box models. White-box models are inherently interpretable but commonly suffer from under-fitting; black-box models, on the other hand, perform well across a wide range of application domains, but the reasoning behind their decisions is hard or even impossible to understand. In the soft-computing area, fuzzy inference systems are rule-based systems that use fuzzy reasoning, bringing human perception modeling and computing-with-words capability. These rule-based systems are designed either manually or automatically, but are commonly optimized to better fit the data of some phenomenon (in a supervised learning task). After the optimization process, the initial semantic meaning of the fuzzy sets is modified (only slightly, in the best cases), creating a gray-box model. The principal objective of the methodology proposed in this paper is to extract high-quality rules in terms of comprehensibility, accuracy, and fidelity. This is accomplished by deriving a fuzzy linguistic interpretable model from an optimized neuro-fuzzy model, considering the initial knowledge context with which it was built. A grammar-guided genetic algorithm is used as the optimization process to find the interpretable description of the model. A collection of 16 classification datasets was used to evaluate the proposal, obtaining an f1-score of 0.814 with a standard deviation of 0.026 in the optimized model; the fidelity, measured as the similarity of the interpretable model to the optimized one, had a mean of 0.93 with a standard deviation of 0.018. The results show that neuro-fuzzy systems could play an important role in interpretable machine learning, providing natural-language explanations from previous knowledge.
•Interpretable neuro-fuzzy Mamdani-type model.
•Methodology for turning neuro-fuzzy models into white-box models.
•Binary hedge relationships of fuzzy sets.
•GGGP for the construction of semantic labels for optimized fuzzy sets.
•Methodology for the automatic construction of interpretable fuzzy inference systems.
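To make the ideas in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a one-input Mamdani-type fuzzy inference step with a "very" linguistic hedge. The membership shapes, set names, and the two-rule base are illustrative assumptions; the hedge follows the classic concentration-by-squaring convention.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b (shoulders at a and c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def very(mu):
    """Classic linguistic hedge: concentration by squaring the membership."""
    return mu ** 2

def mamdani(x, resolution=200):
    """Fire two illustrative rules, aggregate by max, defuzzify by centroid.

    Rule 1: IF x is LOW        THEN y is SMALL
    Rule 2: IF x is VERY HIGH  THEN y is LARGE
    """
    w1 = tri(x, 0.0, 0.0, 0.5)           # firing strength of Rule 1 (LOW)
    w2 = very(tri(x, 0.5, 1.0, 1.0))     # hedged firing strength of Rule 2
    num = den = 0.0
    for i in range(resolution + 1):
        y = i / resolution
        # min implication per rule, max aggregation across consequents
        mu = max(min(w1, tri(y, 0.0, 0.0, 0.5)),
                 min(w2, tri(y, 0.5, 1.0, 1.0)))
        num += y * mu
        den += mu
    return num / den if den else 0.5

print(round(mamdani(0.9), 3))  # a high input yields a large output
```

Replacing the plain HIGH antecedent with `very(HIGH)` narrows the region where Rule 2 fires strongly; this is the kind of hedge relationship the highlights refer to when describing semantic labels for optimized fuzzy sets.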
ISSN:0957-4174
1873-6793
DOI:10.1016/j.eswa.2021.116098