Attention-based random forest and contamination model
Published in: Neural Networks, 2022-10, Vol. 154, pp. 346-359
Format: Article
Language: English
Abstract: A new approach called ABRF (attention-based random forest) and its modifications for applying the attention mechanism to the random forest (RF) for regression and classification are proposed. The main idea behind the proposed ABRF models is to assign attention weights with trainable parameters to decision trees in a specific way. The attention weights depend on the distance between an instance, which falls into a corresponding leaf of a tree, and the training instances that fall into the same leaf. This idea stems from the representation of the Nadaraya–Watson kernel regression in the form of an RF. Three modifications of the general approach are proposed. The first is based on applying Huber's contamination model and on computing the attention weights by solving quadratic or linear optimization problems. The second and third modifications use gradient-based algorithms to compute an extended set of trainable attention parameters. Numerical experiments with various regression and classification datasets illustrate the proposed method. The code implementing the approach is publicly available.
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2022.07.029
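The abstract describes the attention weighting only at a high level. The sketch below illustrates one plausible reading of it in the Nadaraya-Watson spirit, y(x) = sum_t w_t(x) * y_t(x): each tree contributes the mean target of the leaf an instance falls into, weighted by a softmax over negative squared distances between the instance and that leaf's mean training instance. The helper names, the fixed temperature parameter, and the use of scikit-learn are illustrative assumptions made here; this is not the authors' released implementation, which additionally fits the trainable attention parameters via Huber's contamination model or gradient-based optimization.

# Illustrative sketch only; not the reference ABRF implementation from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_abrf_sketch(X_train, y_train, n_trees=100, random_state=0):
    """Train a plain RF and cache, for every tree and leaf, the mean feature
    vector and mean target of the training instances falling into that leaf."""
    rf = RandomForestRegressor(n_estimators=n_trees, random_state=random_state)
    rf.fit(X_train, y_train)
    leaf_ids = rf.apply(X_train)          # shape (n_samples, n_trees)
    leaf_stats = []                       # one dict per tree: leaf -> (mean_x, mean_y)
    for t in range(n_trees):
        stats = {}
        for leaf in np.unique(leaf_ids[:, t]):
            mask = leaf_ids[:, t] == leaf
            stats[leaf] = (X_train[mask].mean(axis=0), y_train[mask].mean())
        leaf_stats.append(stats)
    return rf, leaf_stats

def predict_abrf_sketch(rf, leaf_stats, X, temperature=1.0):
    """Weight each tree's leaf-mean prediction by a softmax over the negative
    squared distance between the instance and the leaf's mean training instance."""
    leaf_ids = rf.apply(X)
    preds = np.empty(X.shape[0])
    for i, x in enumerate(X):
        dists, leaf_preds = [], []
        for t, stats in enumerate(leaf_stats):
            mean_x, mean_y = stats[leaf_ids[i, t]]
            dists.append(np.sum((x - mean_x) ** 2))
            leaf_preds.append(mean_y)
        # Nadaraya-Watson style attention: closer leaves get larger weights.
        w = np.exp(-np.array(dists) / temperature)
        w /= w.sum()
        preds[i] = np.dot(w, leaf_preds)
    return preds

Usage is simply rf, stats = fit_abrf_sketch(X_train, y_train) followed by predict_abrf_sketch(rf, stats, X_test). The fixed temperature here is a stand-in for the trainable attention parameters that the paper optimizes via quadratic/linear programming or gradient descent.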