An Online Multiple Kernel Parallelizable Learning Scheme
Saved in:

Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The performance of reproducing kernel Hilbert space-based methods is known to be sensitive to the choice of the reproducing kernel. Choosing an adequate reproducing kernel can be challenging and computationally demanding, especially in data-rich tasks without prior information about the solution domain. In this paper, we propose a learning scheme that scalably combines several single kernel-based online methods to reduce the kernel-selection bias. The proposed learning scheme applies to any task formulated as a regularized empirical risk minimization convex problem. More specifically, our learning scheme is based on a multi-kernel learning formulation that can be applied to widen any single-kernel solution space, thus increasing the possibility of finding higher-performance solutions. In addition, it is parallelizable, allowing for the distribution of the computational load across different computing units. We show experimentally that the proposed learning scheme outperforms each of the combined single-kernel online methods in terms of the cumulative regularized least squares cost metric.
DOI: 10.48550/arxiv.2308.10101
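
The abstract's central idea, running several single-kernel online learners over the same data stream and combining them to hedge against a poor kernel choice, can be illustrated with a minimal sketch. The snippet below is not the paper's algorithm: it assumes Gaussian kernels with a few illustrative bandwidths, NORMA-style functional gradient updates for each single-kernel regularized least-squares learner, and an exponentially weighted combination of their predictions. The names, step sizes, and constants (`OnlineKernelLS`, `run_multi_kernel`, `eta`, `lam`, the kernel widths) are hypothetical choices made only for this illustration.

```python
# Minimal sketch: combine several single-kernel online least-squares learners.
# Each learner does functional SGD on (y - f(x))^2 + lam * ||f||^2 with its own
# Gaussian kernel; their predictions are mixed with exponential weights.
import numpy as np


class OnlineKernelLS:
    """Single-kernel online learner: f(x) = sum_i alpha_i * k(x_i, x)."""

    def __init__(self, gamma, step=0.1, lam=0.01):
        self.gamma, self.step, self.lam = gamma, step, lam
        self.centers, self.alphas = [], []

    def _k(self, x, z):
        # Gaussian kernel with width parameter gamma.
        return np.exp(-self.gamma * np.sum((x - z) ** 2))

    def predict(self, x):
        return sum(a * self._k(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        err = y - self.predict(x)
        # Shrink old coefficients (regularization) and add a new expansion term.
        self.alphas = [(1.0 - self.step * self.lam) * a for a in self.alphas]
        self.centers.append(x)
        self.alphas.append(self.step * err)


def run_multi_kernel(stream, gammas=(0.1, 1.0, 10.0), eta=0.5):
    """Run all single-kernel learners online and combine them with exponential weights."""
    learners = [OnlineKernelLS(g) for g in gammas]
    w = np.ones(len(learners)) / len(learners)
    cum_cost = 0.0
    for x, y in stream:
        preds = np.array([l.predict(x) for l in learners])
        y_hat = float(w @ preds)                 # combined prediction
        cum_cost += (y - y_hat) ** 2             # data-fit part of the cumulative LS cost
        w *= np.exp(-eta * (y - preds) ** 2)     # reweight by instantaneous losses
        w /= w.sum()
        for l in learners:                       # each learner updates independently
            l.update(x, y)
    return cum_cost


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(300, 1))
    Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
    print("cumulative LS cost:", run_multi_kernel(zip(X, Y)))
```

Because each learner's update depends only on its own state, the per-kernel updates can in principle run on separate computing units, which is the sense in which such a combination is parallelizable.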