Data Selection: A General Principle for Building Small Interpretable Models
Format: Article
Language: English
Abstract: We present convincing empirical evidence for an effective and general
strategy for building accurate small models. Such models are attractive for
interpretability and also find use in resource-constrained environments. The
strategy is to learn the training distribution and sample accordingly from the
provided training data. The distribution-learning algorithm is not a
contribution of this work; our contribution is a rigorous demonstration of the
broad utility of this strategy in various practical settings. We apply it to
the tasks of (1) building cluster explanation trees, (2) prototype-based
classification, and (3) classification using Random Forests, and show that it
improves the accuracy of decades-old weak traditional baselines to be
competitive with specialized modern techniques.
The strategy is also versatile with respect to the notion of model size. In
the first two tasks, model size is the number of leaves in the tree and the
number of prototypes, respectively. In the final task, involving Random
Forests, the strategy is shown to be effective even when model size comprises
more than one factor: the number of trees and their maximum depth.
Positive results on multiple datasets are presented and shown to be
statistically significant.
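To make the strategy concrete, the following is a minimal sketch in Python,
assuming scikit-learn is available. The abstract does not prescribe a specific
distribution-learning algorithm, so a Gaussian mixture stands in here as one
possible density model, and density-proportional sampling is an illustrative
assumption; the component count, subset size, and leaf budget are likewise
placeholder values, not the paper's settings.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: learn a distribution over the training inputs
# (a Gaussian mixture is an assumed stand-in, not the paper's choice).
gmm = GaussianMixture(n_components=5, random_state=0).fit(X_tr)

# Step 2: sample a training subset guided by the learned distribution,
# here in proportion to the estimated density.
log_density = gmm.score_samples(X_tr)
weights = np.exp(log_density - log_density.max())
probs = weights / weights.sum()
rng = np.random.default_rng(0)
idx = rng.choice(len(X_tr), size=200, replace=False, p=probs)

# Step 3: fit a small, interpretable model on the selected subset;
# model size is the leaf budget, mirroring task (1) in the abstract.
small_tree = DecisionTreeClassifier(max_leaf_nodes=8, random_state=0)
small_tree.fit(X_tr[idx], y_tr[idx])
print("test accuracy:", small_tree.score(X_te, y_te))

The same three-step pattern applies to the other tasks by swapping the final
model, e.g. a prototype-based classifier or a Random Forest whose size is
controlled jointly by the number of trees and their maximum depth.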
DOI: 10.48550/arxiv.2210.03921