Model averaging for support vector classifier by cross-validation
Saved in:
Published in: Statistics and Computing, 2023-10, Vol. 33 (5), Article 117
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Support vector classification (SVC) is a well-known statistical technique for classification problems in machine learning and other fields. An important question for SVC is the selection of covariates (or features) for the model. Many studies have considered model selection methods. As is well known, selecting one winning model over others can entail considerable instability in predictive performance due to model selection uncertainty. This paper advocates model averaging as an alternative approach, in which estimates obtained from different models are combined in a weighted average. We propose a model weighting scheme and provide the theoretical underpinning for the proposed method. In particular, we prove that the proposed method yields a model average estimator that asymptotically achieves the smallest hinge risk among all feasible combinations. To remedy the computational burden caused by the large number of feasible models, we propose a screening step that eliminates uninformative features before the models are combined. Results from real data applications and a simulation study show that the proposed method generally yields more accurate estimates than existing methods.
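As a rough illustration of the idea described in the abstract, the sketch below fits linear SVCs on candidate feature subsets, scores each candidate by its cross-validated hinge risk, and combines the candidates' decision values in a weighted average. This is only a minimal stand-in: the softmax-of-negative-risk weighting rule, the subgradient solver, and all function names are illustrative assumptions, not the paper's actual hinge-risk-minimizing weight scheme or screening step.

```python
import math
import random

def train_linear_svc(X, y, feats, lam=0.01, epochs=200, lr=0.1):
    """Fit (w, b) on a feature subset by subgradient descent on the
    L2-regularized hinge loss (a simple stand-in for an SVC solver)."""
    w = [0.0] * len(feats)
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [lam * wj for wj in w]  # gradient of the regularizer
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(w[j] * xi[f] for j, f in enumerate(feats)) + b
            if yi * z < 1:  # margin violated -> hinge subgradient term
                for j, f in enumerate(feats):
                    gw[j] -= yi * xi[f] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def cv_hinge_risk(X, y, feats, k=5):
    """Average hinge loss of one candidate model over k CV folds."""
    n = len(X)
    total = 0.0
    for start in range(k):
        fold = set(range(start, n, k))
        tr = [i for i in range(n) if i not in fold]
        w, b = train_linear_svc([X[i] for i in tr], [y[i] for i in tr], feats)
        for i in fold:
            z = sum(w[j] * X[i][f] for j, f in enumerate(feats)) + b
            total += max(0.0, 1.0 - y[i] * z)
    return total / n

def model_average(X, y, candidate_feats):
    """Weight candidates by exp(-CV hinge risk), normalized to a simplex
    (an assumed, simplified weighting rule), then average decision values."""
    risks = [cv_hinge_risk(X, y, f) for f in candidate_feats]
    raw = [math.exp(-r) for r in risks]
    s = sum(raw)
    weights = [r / s for r in raw]
    fitted = [train_linear_svc(X, y, f) for f in candidate_feats]
    def predict(x):
        z = sum(wt * (sum(w[j] * x[f] for j, f in enumerate(feats)) + b)
                for wt, (w, b), feats in zip(weights, fitted, candidate_feats))
        return 1 if z >= 0 else -1
    return predict, weights

# Toy data: the label depends only on feature 0; feature 1 is pure noise,
# so the noise-only candidate model should receive a lower weight.
random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(60)]
y = [1 if x[0] > 0 else -1 for x in X]
predict, weights = model_average(X, y, [[0], [1], [0, 1]])
acc = sum(predict(x) == yi for x, yi in zip(X, y)) / len(X)
```

In this toy run, the informative candidates (`[0]` and `[0, 1]`) incur a small cross-validated hinge risk and dominate the average, while the noise-only candidate `[1]` is down-weighted rather than discarded outright — the qualitative behavior the abstract attributes to model averaging over model selection.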
ISSN: 0960-3174; 1573-1375
DOI: 10.1007/s11222-023-10284-6