Regularization and False Alarms Quantification: Two Sides of the Explainability Coin
Format: Article
Language: English
Online access: Order full text
Abstract: Regularization is a well-established technique in machine learning (ML) to achieve an optimal bias-variance trade-off, which in turn reduces model complexity and enhances explainability. To this end, some hyper-parameters must be tuned, enabling the ML model to fit unseen data as accurately as the seen data. In this article, the authors argue that the regularization of hyper-parameters and the quantification of the costs and risks of false alarms are in reality two sides of the same coin: explainability. Incorrect or non-existent estimation of either quantity undermines the measurability of the economic value of using ML, to the extent that it might become practically useless.
DOI: 10.48550/arxiv.2012.01273
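The abstract pairs two quantities: a regularization hyper-parameter tuned so the model fits unseen data as well as seen data, and a cost attached to false alarms. The sketch below illustrates that pairing only; it is not the paper's method. scikit-learn, the synthetic data, and the cost figures (COST_FALSE_ALARM, COST_MISS) are all assumptions chosen for illustration.

```python
# Illustrative sketch (assumptions, not the paper's method): tune an L2
# regularization hyper-parameter by cross-validation, then quantify the
# economic cost of false alarms on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary classification data (stand-in for a real task).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Side 1: tune the inverse regularization strength C by cross-validation,
# so the model generalizes to unseen data rather than overfitting seen data.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="accuracy",
    cv=5,
)
search.fit(X_train, y_train)

# Side 2: price the errors. These per-error costs are assumed,
# business-specific values, not figures from the article.
COST_FALSE_ALARM = 5.0   # e.g. analyst time spent reviewing a benign case
COST_MISS = 50.0         # e.g. damage from an undetected true positive

tn, fp, fn, tp = confusion_matrix(y_test, search.predict(X_test)).ravel()
total_cost = fp * COST_FALSE_ALARM + fn * COST_MISS
print(f"best C = {search.best_params_['C']}, "
      f"false alarms = {fp}, misses = {fn}, estimated cost = {total_cost:.2f}")
```

If either side is estimated badly, the other loses meaning: a poorly tuned C inflates the error counts that the cost model prices, and arbitrary cost figures make the tuned model's economic value unmeasurable, which is the article's central claim.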