Two-level optimization by differential evolution in decision tree learning algorithm

Bibliographic details
Published in: ITM Web of Conferences, 2024, Vol. 59, p. 1018
Main authors: Mitrofanov, Sergei; Semenkin, Eugene
Format: Article
Language: English
Online access: Full text
Description
Abstract: Decision tree learning algorithms have a long history, but they remain popular due to their efficiency. Tree construction starts at the root and proceeds down to each leaf node, producing a "near-optimal" tree. One of the key steps in building a decision tree is the selection of the feature to split on at each node, which affects classification accuracy. This process can be quite labour-intensive. The article proposes a new approach to constructing decision trees based on differential evolution for two-level tree optimization. Differential evolution works in two stages: at the first level, a feature is selected for the split, and at the second level, the threshold value for this feature is optimized. The proposed approach was tested on several example classification problems.
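The abstract outlines a two-stage search: an outer differential-evolution run chooses the splitting feature and an inner run tunes the threshold for that feature. The paper's own encoding, objective, and parameters are not reproduced here, so the following is only a minimal illustrative sketch: it assumes SciPy's differential_evolution as the optimizer, weighted Gini impurity as the split criterion, and hypothetical names (two_level_split, best_threshold_for_feature) that are not taken from the article.

# Illustrative sketch (not the authors' implementation) of a two-level DE split search:
# an outer DE run proposes a feature index, and for each proposal an inner DE run
# tunes the split threshold on that feature.
import numpy as np
from scipy.optimize import differential_evolution


def gini_impurity(y):
    """Gini impurity of a label vector."""
    if len(y) == 0:
        return 0.0
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)


def split_quality(threshold, feature_values, y):
    """Weighted Gini impurity of splitting at `threshold` (lower is better)."""
    left = y[feature_values <= threshold]
    right = y[feature_values > threshold]
    n = len(y)
    return (len(left) / n) * gini_impurity(left) + (len(right) / n) * gini_impurity(right)


def best_threshold_for_feature(feature_values, y, seed=0):
    """Second level: DE over the threshold for one fixed feature."""
    lo, hi = feature_values.min(), feature_values.max()
    result = differential_evolution(
        split_quality,
        bounds=[(lo, hi)],
        args=(feature_values, y),
        seed=seed,
        maxiter=30,
        tol=1e-6,
    )
    return result.x[0], result.fun


def two_level_split(X, y, seed=0):
    """First level: DE over a continuous code that is rounded to a feature index."""
    n_features = X.shape[1]

    def first_level_objective(code):
        j = int(np.clip(round(code[0]), 0, n_features - 1))
        _, impurity = best_threshold_for_feature(X[:, j], y, seed=seed)
        return impurity

    outer = differential_evolution(
        first_level_objective,
        bounds=[(0, n_features - 1)],
        seed=seed,
        maxiter=10,
        popsize=10,
    )
    j = int(np.clip(round(outer.x[0]), 0, n_features - 1))
    threshold, impurity = best_threshold_for_feature(X[:, j], y, seed=seed)
    return j, threshold, impurity


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 2] > 0.3).astype(int)  # class depends on feature 2
    feature, threshold, impurity = two_level_split(X, y)
    print(f"feature={feature}, threshold={threshold:.3f}, weighted Gini={impurity:.3f}")

Rounding a continuous outer variable to a feature index is only one way to let DE handle the discrete first-level choice; the authors' actual encoding and objective may differ.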
ISSN: 2271-2097
DOI: 10.1051/itmconf/20245901018