Momentum Sequential Minimal Optimization: An accelerated method for Support Vector Machine training

Bibliographic Details
Main Authors: Barbero, Alvaro; Dorronsoro, Jose R.
Format: Conference Proceedings
Language: English; Japanese
Online Access: Order full text
Description
Summary: Sequential Minimal Optimization (SMO) can be regarded as the state-of-the-art approach to non-linear Support Vector Machine training, being the method of choice in the successful LIBSVM software. Its optimization procedure is based on updating only two of the problem's coefficients per iteration, until convergence. In this paper we observe that this strategy can be interpreted as finding the sparsest yet most useful updating direction per iteration. We present a modification of SMO that adds a new approximate momentum term to the updating direction, capturing information from previous updates, and show that this term embodies a trade-off between the sparsity and the suitability of the chosen direction. We show that in practice this modification yields substantial savings in SMO's number of iterations to convergence, without noticeably increasing its cost per iteration. We study when this saving in iterations can translate into reduced SVM training times, and how the new technique behaves when combined with caching and shrinking strategies.
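The abstract describes SMO's two-coefficient update as choosing the sparsest yet most useful direction per iteration, with the momentum term trading sparsity against direction quality. The Python sketch below illustrates that idea under simplifying assumptions: a classic most-violating-pair SMO step on the SVM dual, blended with the previous applied update through a hypothetical momentum weight beta. The pair selection and analytic step are standard SMO; the momentum blending and the projection rule are simplified stand-ins, not the authors' exact approximate momentum term.

```python
import numpy as np

def smo_momentum_step(alpha, y, K, C, prev_dir, beta=0.5):
    """One illustrative step: a classic SMO pair update blended with a
    crude momentum term (a sketch, not the paper's exact formulation).

    alpha    : (n,) dual coefficients, 0 <= alpha_k <= C
    y        : (n,) labels in {-1, +1}
    K        : (n, n) kernel matrix
    prev_dir : (n,) previously applied update (the momentum "memory")
    beta     : assumed momentum weight; beta = 0 recovers plain SMO
    """
    n = alpha.size
    # Gradient of the dual objective f(a) = 0.5 a^T Q a - 1^T a, Q = (y y^T) * K
    grad = (K * np.outer(y, y)) @ alpha - 1.0
    scores = -y * grad

    # Most-violating-pair selection: i among coefficients free to move "up",
    # j among those free to move "down" along the equality constraint.
    up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
    down = ((y == 1) & (alpha > 0)) | ((y == -1) & (alpha < C))
    i = int(np.argmax(np.where(up, scores, -np.inf)))
    j = int(np.argmin(np.where(down, scores, np.inf)))

    # Sparsest feasible direction: only alpha_i and alpha_j change,
    # and y^T alpha stays constant.
    d = np.zeros(n)
    d[i], d[j] = y[i], -y[j]

    # Analytic Newton step length along the plain pair direction.
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
    step = (scores[i] - scores[j]) / max(eta, 1e-12)

    # Momentum: blend in the previous direction, then project back onto
    # the hyperplane y^T d = 0. This is a simplified, assumed rule; the
    # paper keeps its momentum term approximate to stay cheap per iteration.
    d = d + beta * prev_dir
    d -= y * (y @ d) / n

    # Clip the step so every coefficient stays inside the box [0, C].
    pos, neg = d > 1e-12, d < -1e-12
    step = min(step,
               np.min((C - alpha)[pos] / d[pos], initial=np.inf),
               np.min(-alpha[neg] / d[neg], initial=np.inf))

    return alpha + step * d, step * d
```

Note that the plain pair direction touches only two coefficients, while the momentum-blended direction is generally dense: this is the sparsity-versus-suitability trade-off the abstract mentions, and the reason an approximate momentum term is needed to keep the cost per iteration from growing noticeably.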
ISSN: 2161-4393; 2161-4407
DOI: 10.1109/IJCNN.2011.6033245