The performance-interpretability trade-off: a comparative study of machine learning models
Published in: | Journal of Reliable Intelligent Environments, 2025, Vol. 11 (1) |
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Machine learning models are increasingly being integrated into various aspects of society, impacting decision-making processes across domains such as healthcare, finance, and autonomous systems. However, as these models become more complex, concerns about their transparency and interpretability have emerged. Transparent models, which provide detailed and understandable explanations, stand in contrast to opaque models, which often achieve higher accuracy but lack interpretability. This study presents a comparative analysis, examining the performance and explainability of transparent models (K-Nearest Neighbors (KNN), Decision Trees, and Logistic Regression) and opaque models (Convolutional Neural Networks (CNN), Random Forests, and Support Vector Machines (SVM)) in an intelligent environment. Our experimental evaluation explores the balance between performance (accuracy and response time) and explainability, a crucial aspect for the effective deployment of Artificial Intelligence (AI) in smart systems. Our results indicate that opaque models such as CNN, SVM, and Random Forest achieved higher accuracy (up to 98% on MNIST and 95% on Fake and Real News) compared to transparent models (up to 94% on MNIST and 92% on Fake and Real News). However, transparent models exhibited faster response times and greater interpretability, especially under high workload conditions, highlighting the trade-off between performance and interpretability. |
ISSN: | 2199-4668; 2199-4676 |
DOI: | 10.1007/s40860-024-00240-0 |
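
No code accompanies this record. The block below is a minimal illustrative sketch, assuming scikit-learn, of the kind of accuracy-versus-response-time comparison the abstract describes between a transparent model (logistic regression) and an opaque model (random forest). The digits dataset stands in for MNIST, and the model settings and timing method are assumptions for illustration, not the paper's actual experimental setup.

```python
# Illustrative sketch (not the paper's code): compare a transparent model
# (logistic regression) and an opaque model (random forest) on accuracy
# and prediction response time, roughly mirroring the trade-off the
# abstract describes. Dataset and hyperparameters are assumptions.
import time

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# load_digits is a small stand-in for MNIST, used here only for illustration.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

models = {
    "transparent: logistic regression": LogisticRegression(max_iter=2000),
    "opaque: random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    start = time.perf_counter()
    predictions = model.predict(X_test)  # response time measured on prediction only
    elapsed = time.perf_counter() - start
    acc = accuracy_score(y_test, predictions)
    print(f"{name}: accuracy={acc:.3f}, response time={elapsed * 1000:.1f} ms")
```

In this sketch, response time is measured on prediction only, since that is the cost incurred once a model is deployed in a smart environment; a fuller comparison along the abstract's lines would also vary the workload and add an explainability measure.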