MABSplit: Faster Forest Training Using Multi-Armed Bandits
Main authors: , , , , , ,
Format: Article
Language: English
Abstract: Random forests are some of the most widely used machine learning models today, especially in domains that necessitate interpretability. We present an algorithm that accelerates the training of random forests and other popular tree-based learning methods. At the core of our algorithm is a novel node-splitting subroutine, dubbed MABSplit, used to efficiently find split points when constructing decision trees. Our algorithm borrows techniques from the multi-armed bandit literature to judiciously determine how to allocate samples and computational power across candidate split points. We provide theoretical guarantees that MABSplit improves the sample complexity of each node split from linear to logarithmic in the number of data points. In some settings, MABSplit leads to 100x faster training (a 99% reduction in training time) without any decrease in generalization performance. We demonstrate similar speedups when MABSplit is used across a variety of forest-based variants, such as Extremely Random Forests and Random Patches. We also show that our algorithm can be used in both classification and regression tasks. Finally, we show that MABSplit outperforms existing methods in generalization performance and feature importance calculations under a fixed computational budget. All of our experimental results are reproducible via a one-line script at https://github.com/ThrunGroup/FastForest.
DOI: 10.48550/arxiv.2212.07473
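The abstract above describes allocating samples across candidate split points using multi-armed bandit techniques. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation (see the linked FastForest repository for the real code): it treats each (feature, threshold) pair as a bandit arm, estimates split impurity on growing random subsamples, and eliminates arms via Hoeffding-style confidence bounds. The function names, batch size, and confidence parameter are all illustrative assumptions.

```python
import numpy as np

def gini(y):
    """Gini impurity of a 1-D label array (0.0 for an empty array)."""
    if y.size == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / y.size
    return 1.0 - np.dot(p, p)

def split_impurity(x, y, threshold):
    """Weighted child impurity when splitting feature values x at threshold."""
    mask = x <= threshold
    n = y.size
    return (mask.sum() / n) * gini(y[mask]) + ((~mask).sum() / n) * gini(y[~mask])

def mab_split(X, y, candidates, batch_size=100, delta=1e-3, seed=0):
    """Choose a (feature, threshold) split by successive elimination.

    `candidates` maps a feature index to a list of thresholds; each
    (feature, threshold) pair is one bandit arm. Every round draws a
    fresh random batch, refines each surviving arm's impurity estimate,
    and eliminates arms whose lower confidence bound lies above the
    current best arm's upper bound.
    """
    rng = np.random.default_rng(seed)
    arms = [(f, t) for f, ts in candidates.items() for t in ts]
    n = y.size
    means = np.zeros(len(arms))
    alive = np.ones(len(arms), dtype=bool)
    drawn = 0  # samples drawn so far for each surviving arm

    while alive.sum() > 1 and drawn < n:
        idx = rng.choice(n, size=min(batch_size, n), replace=False)
        drawn += idx.size
        for a, (f, t) in enumerate(arms):
            if alive[a]:
                est = split_impurity(X[idx, f], y[idx], t)
                means[a] += (est - means[a]) * idx.size / drawn  # running mean
        # Hoeffding-style radius: impurity estimates lie in [0, 1].
        radius = np.sqrt(np.log(2.0 * len(arms) / delta) / (2.0 * drawn))
        best = means[alive].min()
        alive &= (means - radius) <= (best + radius)

    if drawn >= n:
        # Sampling budget reached the exact cost: evaluate survivors exactly.
        for a, (f, t) in enumerate(arms):
            if alive[a]:
                means[a] = split_impurity(X[:, f], y, t)
    return arms[int(np.argmin(np.where(alive, means, np.inf)))]

if __name__ == "__main__":
    # Toy usage on synthetic data (shapes and thresholds purely illustrative).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(10_000, 5))
    y = (X[:, 0] + 0.1 * rng.normal(size=10_000) > 0).astype(int)
    candidates = {f: list(np.quantile(X[:, f], [0.25, 0.5, 0.75])) for f in range(5)}
    print(mab_split(X, y, candidates))
```

The intuition this sketch tries to capture is the abstract's sample-complexity claim: clearly bad split points can be discarded after only a subsample of the data, so the per-node cost can drop from linear to logarithmic in the number of data points.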