Fast SVM classifier for large-scale classification problems
Published in: Information Sciences, 2023-09, Vol. 642, p. 119136, Article 119136
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: Support vector machines (SVMs), as one of the most effective and popular classification tools, have been widely applied in various fields. However, they may incur prohibitive computational costs when solving large-scale classification problems. To address this problem, we construct a new fast SVM with a truncated squared hinge loss (dubbed Lts-SVM). We begin by developing an optimality theory for the nonconvex and nonsmooth Lts-SVM, which makes it convenient to investigate its support vectors and working set. Based on this theory, we propose a new and effective globally convergent algorithm for solving the Lts-SVM. The method enjoys very low computational complexity, which makes it possible to substantially reduce the computational burden of extremely large-scale problems. Numerical comparisons with eight other solvers show that our algorithm achieves excellent performance on large-scale classification problems, with shorter computational times, better accuracy, fewer support vectors and greater robustness to outliers.
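The abstract does not write out the loss itself; for reference, a truncated squared hinge loss is typically defined by capping the squared hinge at a fixed level. The form below is one common way to write it, not necessarily the exact definition used in the paper, and the truncation level τ is an assumed parameter name:

```latex
% Assumed form: the squared hinge loss capped at a truncation level \tau > 0.
\ell_{\mathrm{ts}}(t_i) = \min\!\left\{ \bigl(\max\{0,\, 1 - t_i\}\bigr)^{2},\; \tau \right\},
\qquad t_i = y_i\,(w^{\top} x_i + b).
```

Because the loss is capped at τ, samples that violate the margin badly contribute at most a constant, which is the usual mechanism behind the reported robustness to outliers and the smaller support-vector sets.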
Highlights:
• A new SVM model. We construct a new SVM model, named the truncated squared hinge loss SVM model (Lts-SVM).
• A new and efficient ADMM algorithm. We propose a new and effective alternating direction method of multipliers (ADMM) with a working set for solving the truncated squared hinge loss SVM model.
• High numerical performance. We compare our algorithm with eight other efficient solvers; numerical experiments show that it achieves excellent performance.
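The paper's solver is an ADMM with a working set, whose details are not reproduced in this record. As a purely illustrative sketch, the NumPy snippet below trains a linear classifier with a truncated squared hinge loss of the assumed form above using plain (sub)gradient steps on synthetic data; `fit_linear_svm`, `tau`, `lam` and the toy data are all invented for illustration and are not the paper's algorithm or settings.

```python
import numpy as np

def fit_linear_svm(X, y, tau=1.0, lam=1e-2, lr=0.1, n_iter=500):
    """Minimize lam/2 * ||w||^2 + mean_i min((max(0, 1 - y_i x_i^T w))^2, tau)
    by plain gradient steps. This is NOT the paper's ADMM working-set
    algorithm -- only a generic illustration of the truncated loss.
    The bias term is omitted for brevity."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        margins = y * (X @ w)
        hinge_sq = np.maximum(0.0, 1.0 - margins) ** 2
        # Quadratic region of the truncated loss: these points act like
        # squared-hinge support vectors. Points with hinge_sq > tau sit on
        # the flat (truncated) part and contribute zero gradient, so heavy
        # outliers stop pulling on the decision boundary.
        active = (margins < 1.0) & (hinge_sq <= tau)
        dloss_dmargin = np.where(active, -2.0 * (1.0 - margins), 0.0)
        grad_w = lam * w + (X * (dloss_dmargin * y)[:, None]).mean(axis=0)
        w -= lr * grad_w
    return w

# Toy usage: two Gaussian blobs plus a few mislabeled outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(100, 2)),
               rng.normal(+2.0, 1.0, size=(100, 2))])
y = np.concatenate([-np.ones(100), np.ones(100)])
y[:5] = 1.0  # flip a few labels to act as outliers
w = fit_linear_svm(X, y, tau=1.0)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

The `active` mask loosely mirrors the working-set idea: only points in the quadratic region of the loss drive the update, while heavily misclassified outliers land on the flat, truncated part and contribute no gradient.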
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2023.119136