The Use of Stability Principle for Kernel Determination in Relevance Vector Machines
Saved in:
Main Authors:
Format: Book Chapter
Language: English
Subjects:
Online Access: Full text
Summary: The task of RBF kernel selection in Relevance Vector Machines (RVM) is considered. RVM exploits a probabilistic Bayesian learning framework that offers a number of advantages over state-of-the-art Support Vector Machines. In particular, RVM effectively avoids the determination of the regularization coefficient C by means of evidence maximization. In the paper we show that RBF kernel selection within the Bayesian framework requires an extension of the algorithmic model. In the new model, integration over the posterior probability becomes intractable, so a point estimate of the posterior probability is used instead. In RVM the evidence value is calculated via the Laplace approximation; however, the extended model does not allow maximization of the posterior probability, since the dimension of the optimization parameter space becomes too high. Hence the Laplace approximation can no longer be used in the new model. We propose a local evidence estimation method that establishes a compromise between the accuracy and the stability of the algorithm. In the paper we first briefly describe the maximal evidence principle, present a model of kernel algorithms together with our approximations for evidence estimation, and then give the results of an experimental evaluation. Both the classification and the regression cases are considered. (A sketch of the evidence criterion follows the record below.)
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/11893028_81
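
For context, the "maximal evidence principle" referenced in the summary corresponds, in Tipping's standard RVM regression formulation, to choosing the per-weight precision hyperparameters α (and the noise variance σ²) by maximizing the log marginal likelihood (evidence) of the targets t. A minimal sketch in that standard notation; here Φ denotes the N×M kernel design matrix, and the symbols follow Tipping's original model rather than the extended model of this paper, which the record does not reproduce:

\[
\log p(\mathbf{t} \mid \boldsymbol{\alpha}, \sigma^2)
= \log \int p(\mathbf{t} \mid \mathbf{w}, \sigma^2)\, p(\mathbf{w} \mid \boldsymbol{\alpha})\, d\mathbf{w}
= -\tfrac{1}{2}\left( N \log 2\pi + \log\lvert\mathbf{C}\rvert + \mathbf{t}^{\top}\mathbf{C}^{-1}\mathbf{t} \right),
\]
\[
\mathbf{C} = \sigma^{2}\mathbf{I} + \boldsymbol{\Phi}\,\mathbf{A}^{-1}\boldsymbol{\Phi}^{\top},
\qquad
\mathbf{A} = \operatorname{diag}(\alpha_{1}, \dots, \alpha_{M}).
\]

In the classification case the integral above has no closed form, and standard RVM falls back on a Laplace approximation around the posterior mode; per the summary, this step breaks down in the extended kernel-selection model because the optimization parameter space becomes too high-dimensional, which is what motivates the authors' local evidence estimation method.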