MDL-motivated compression of GLM ensembles increases interpretability and retains predictive power
Format: Article
Language: English
Abstract: Over the years, ensemble methods have become a staple of machine learning. Similarly, generalized linear models (GLMs) have become very popular for a wide variety of statistical inference tasks. The former have been shown to enhance out-of-sample predictive power and the latter possess easy interpretability. Recently, ensembles of GLMs have been proposed as a way to combine these strengths. On the downside, this approach loses the interpretability that GLMs possess. We show that minimum description length (MDL)-motivated compression of the inferred ensembles can be used to recover interpretability without much, if any, downside to performance, and we illustrate this on a number of standard classification data sets.
DOI: 10.48550/arxiv.1611.06800
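
To make the idea concrete, below is a minimal sketch of compressing an ensemble of GLMs back into a single interpretable model. It does not reproduce the paper's MDL criterion: plain coefficient averaging stands in for compression, and the data set and the bagged logistic-regression ensemble are illustrative assumptions using scikit-learn (version 1.2 or later).

    # Sketch only, not the paper's algorithm: fit a bagged ensemble of
    # logistic-regression GLMs, then "compress" it by averaging the members'
    # coefficients into one GLM and compare held-out accuracy.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Ensemble of GLMs: 50 logistic regressions fit on bootstrap resamples.
    ensemble = BaggingClassifier(
        estimator=LogisticRegression(max_iter=5000),
        n_estimators=50,
        random_state=0,
    ).fit(X_tr, y_tr)

    # Crude compression (coefficient averaging stands in for the MDL step):
    # collapse the ensemble into a single coefficient vector and intercept.
    coef = np.mean([m.coef_ for m in ensemble.estimators_], axis=0)
    intercept = np.mean([m.intercept_ for m in ensemble.estimators_], axis=0)

    # Install the averaged parameters in a single logistic-regression GLM.
    compressed = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    compressed.coef_, compressed.intercept_ = coef, intercept

    print("ensemble accuracy:  ", ensemble.score(X_te, y_te))
    print("compressed accuracy:", compressed.score(X_te, y_te))

The compressed model exposes one weight per feature, readable as a log-odds effect, which is the kind of interpretability a lone GLM offers and that the abstract says is recovered.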