Analysis and benchmarking of feature reduction for classification under computational constraints

Bibliographic Details
Published in: Machine Learning: Science and Technology, 2024-06, Vol. 5 (2), p. 20501
Main Authors: Subasi, Omer; Ghosh, Sayan; Manzano, Joseph; Palmer, Bruce; Marquez, Andrés
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Machine learning is often expensive in computational and memory cost because models are trained on large volumes of data. The computational limits of many computing systems motivate us to investigate practical approaches, such as feature selection and reduction, that lower time and memory costs without sacrificing the accuracy of classification algorithms. In this work, we carefully review, analyze, and identify feature reduction methods with low time and memory costs/overheads. We then evaluate the identified reduction methods in terms of their impact on the accuracy, precision, time, and memory costs of traditional classification algorithms. Specifically, we focus on the least resource-intensive feature reduction methods available in the Scikit-Learn library. Since our goal is to identify the best-performing low-cost reduction methods, we do not consider complex, expensive reduction algorithms in this study. In our evaluation, we find that at quadratic-scale feature reduction, the classification algorithms achieve the best trade-off among competing performance metrics. Results show that, with quadratic-scale reduction, overall training times are reduced by 61%, model sizes are reduced 6×, and accuracy scores increase by 25% compared to the baselines on average.
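The low-cost, Scikit-Learn-based setup the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' exact pipeline: it assumes "quadratic-scale reduction" means keeping roughly the square root of the original feature count, and it picks SelectKBest with an ANOVA F-score as a representative inexpensive reducer and a random forest as a representative traditional classifier; the paper's actual choice of reducers, datasets, and classifiers may differ.

```python
# Sketch of quadratic-scale feature reduction with scikit-learn.
# Assumptions (not from the paper): SelectKBest/f_classif as the
# low-cost reducer, digits as the dataset, k ~= sqrt(d) features kept.
import math

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Quadratic-scale reduction: keep about sqrt(d) of the d input features.
k = max(1, math.isqrt(X.shape[1]))

clf = make_pipeline(
    SelectKBest(f_classif, k=k),            # cheap univariate feature scoring
    RandomForestClassifier(random_state=0),  # traditional classifier
)
clf.fit(X_train, y_train)
print(f"{k}/{X.shape[1]} features kept, "
      f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

Because the reducer and classifier sit in one pipeline, the feature scores are fit only on the training split, which keeps the accuracy comparison against an unreduced baseline fair.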
ISSN: 2632-2153
DOI: 10.1088/2632-2153/ad3726