Learning via Surrogate PAC-Bayes
NeurIPS 2024, Dec 2024, Vancouver, Canada
| Field | Value |
|---|---|
| Main authors | , , |
| Format | Article |
| Language | English (eng) |
| Online access | Order full text |
Abstract: PAC-Bayes learning is a comprehensive setting for (i) studying the generalisation ability of learning algorithms and (ii) deriving new learning algorithms by optimising a generalisation bound. However, optimising generalisation bounds might not always be viable for tractability or computational reasons, or both. For example, iteratively querying the empirical risk might prove computationally expensive. In response, we introduce a novel, principled strategy for building an iterative learning algorithm via the optimisation of a sequence of surrogate training objectives, inherited from PAC-Bayes generalisation bounds. The key argument is to replace the empirical risk (seen as a function of hypotheses) in the generalisation bound by its projection onto a constructible low-dimensional functional space: these projections can be queried much more efficiently than the initial risk. On top of providing that generic recipe for learning via surrogate PAC-Bayes bounds, we (i) contribute theoretical results establishing that iteratively optimising our surrogates implies the optimisation of the original generalisation bounds, (ii) instantiate this strategy in the framework of meta-learning, introducing a meta-objective offering a closed-form expression for the meta-gradient, (iii) illustrate our approach with numerical experiments inspired by an industrial biochemical problem.
DOI: 10.48550/arXiv.2410.10230
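To make the projection idea from the abstract concrete, here is a minimal, hypothetical numpy sketch: the expensive empirical risk is queried at a handful of anchor hypotheses only, projected by least squares onto a small polynomial feature basis, and the resulting cheap surrogate is plugged into a McAllester-style objective optimised over the mean of a fixed-variance Gaussian posterior. Every name (`empirical_risk`, `features`, the anchors, all constants) is an illustrative assumption, not the paper's construction.

```python
# Hypothetical sketch of learning via a surrogate PAC-Bayes objective.
# All modelling choices are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def empirical_risk(w):
    """Stand-in for an expensive-to-query empirical risk (assumption)."""
    return float(np.mean((w - 0.7) ** 2) + 0.1 * np.sin(5 * w).mean())

def features(w):
    """Low-dimensional functional basis: monomials up to degree 3 (assumption)."""
    x = float(np.mean(w))
    return np.array([1.0, x, x ** 2, x ** 3])

# Step 1: query the true risk at a few anchor hypotheses only.
anchors = [rng.normal(size=2) for _ in range(12)]
Phi = np.stack([features(w) for w in anchors])       # design matrix
y = np.array([empirical_risk(w) for w in anchors])   # the only expensive queries
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # least-squares projection

def surrogate_risk(w):
    """Cheap projection of the empirical risk onto span(features)."""
    return float(features(w) @ coef)

# Step 2: optimise a McAllester-style surrogate objective over the mean of a
# Gaussian posterior N(mu, sigma2 * I), with a N(0, sigma2 * I) prior.
n, delta, sigma2 = 500, 0.05, 0.25
prior_mean = np.zeros(2)

def surrogate_bound(mu):
    kl = np.sum((mu - prior_mean) ** 2) / (2 * sigma2)  # KL between equal-variance Gaussians
    return surrogate_risk(mu) + np.sqrt((kl + np.log(2 * np.sqrt(n) / delta)) / (2 * n))

mu, lr, eps = np.zeros(2), 0.1, 1e-4
for _ in range(200):
    # Finite-difference gradient of the surrogate bound (cheap: no risk queries).
    grad = np.array([
        (surrogate_bound(mu + eps * e) - surrogate_bound(mu - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])
    mu -= lr * grad

print("optimised posterior mean:", mu, "surrogate bound:", surrogate_bound(mu))
```

The point mirrored from the abstract is the cost profile: only the twelve anchor queries touch the expensive empirical risk, while every iteration of the bound optimisation touches only the projected surrogate.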