Geometric Learning with Positively Decomposable Kernels
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Kernel methods are powerful tools in machine learning. Classical kernel methods are based on positive-definite kernels, which map data spaces into reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces, positive-definite kernels are difficult to come by. In this case, we propose the use of reproducing kernel Krein space (RKKS) based methods, which require only kernels that admit a positive decomposition. We show that one does not need to access this decomposition in order to learn in RKKS. We then investigate the conditions under which a kernel is positively decomposable. We show that invariant kernels admit a positive decomposition on homogeneous spaces under tractable regularity assumptions. This makes them much easier to construct than positive-definite kernels, providing a route for learning with kernels for non-Euclidean data. By the same token, this provides theoretical foundations for RKKS-based methods in general.
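The following is a minimal illustrative sketch, not the paper's algorithm. It uses a hand-picked invariant kernel on the circle S^1 (a homogeneous space) whose Gram matrix is indefinite yet positively decomposable, and fits a Krein-space regularized least-squares model by solving a linear system in the Gram matrix directly, never forming the decomposition. The kernel choice, the (K + lambda*I) stabilization, and all names are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative invariant kernel on the circle S^1 (a homogeneous space):
#   k(x, y) = cos(x - y) - 0.5 * cos(2 * (x - y)).
# By the Herglotz/Bochner criterion, positive definiteness on S^1 requires
# nonnegative Fourier coefficients; the -0.5 term makes this kernel
# indefinite, but it is positively decomposable: k = k_plus - k_minus with
# k_plus = cos(x - y) and k_minus = 0.5 * cos(2 * (x - y)), both positive
# definite.
def kernel(x, y):
    d = x[:, None] - y[None, :]
    return np.cos(d) - 0.5 * np.cos(2 * d)

# Toy regression data on the circle.
x_train = rng.uniform(0.0, 2 * np.pi, size=40)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(40)

K = kernel(x_train, x_train)

# The Gram matrix has negative eigenvalues, so the kernel is indefinite.
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # negative

# Krein-space regularized least squares (an assumed stabilization, in the
# spirit of RKKS learning): the stationarity condition is a plain linear
# system in K itself -- the positive decomposition is never accessed.
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

x_test = np.linspace(0.0, 2 * np.pi, 5)
y_pred = kernel(x_test, x_train) @ alpha
print("predictions: ", np.round(y_pred, 3))
print("ground truth:", np.round(np.sin(x_test), 3))

# For verification only: an explicit positive decomposition
# K = K_plus - K_minus obtained by splitting the spectrum. The fit above
# never needed it, matching the claim in the abstract.
w, V = np.linalg.eigh(K)
K_plus = (V * np.clip(w, 0.0, None)) @ V.T
K_minus = (V * np.clip(-w, 0.0, None)) @ V.T
print("decomposition error:", np.abs(K - (K_plus - K_minus)).max())
```

The spectral split is only one possible positive decomposition and is shown purely as a check; the point of the sketch is that the regression step touches only the indefinite Gram matrix.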
DOI: 10.48550/arxiv.2310.13821