Sample Compression Hypernetworks: From Generalization Bounds to Meta-Learning
Format: Article
Language: English
Abstract: Reconstruction functions are pivotal in sample compression theory, a framework for deriving tight generalization bounds. From a small sample of the training set (the compression set) and an optional stream of information (the message), they recover a predictor previously learned from the whole training set. While such functions are usually fixed, we propose to learn reconstruction functions. To facilitate the optimization and increase the expressiveness of the message, we derive a new sample compression generalization bound for real-valued messages. From this theoretical analysis, we then present a new hypernetwork architecture that outputs predictors with tight generalization guarantees when trained using an original meta-learning framework. The results of promising preliminary experiments are then reported.
DOI: 10.48550/arxiv.2410.13577
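To make the abstract's central idea concrete, the following is a minimal toy sketch of a learned reconstruction function realized as a hypernetwork: it maps a small compression set together with a real-valued message to the parameters of a linear predictor. All names, dimensions, and architectural choices here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypernetwork(compression_set, message, hidden=16):
    """Toy hypernetwork: maps a compression set and a real-valued
    message to the weights of a linear predictor. The architecture
    (mean pooling + one tanh layer) is an illustrative assumption."""
    d = compression_set.shape[1]
    d_out = d + 1  # weight vector plus bias of a linear predictor
    # Pool the compression set into a fixed-size embedding, then
    # append the real-valued message.
    z = np.concatenate([compression_set.mean(axis=0), message])
    # Fixed random weights stand in for the learned hypernetwork weights.
    W1 = rng.standard_normal((hidden, z.size)) / np.sqrt(z.size)
    W2 = rng.standard_normal((d_out, hidden)) / np.sqrt(hidden)
    h = np.tanh(W1 @ z)
    return W2 @ h  # parameters (w, b) of the reconstructed predictor

def predict(params, X):
    """Apply the reconstructed linear predictor to inputs X."""
    w, b = params[:-1], params[-1]
    return X @ w + b

# Usage: a 3-point compression set in R^4 and a 2-dimensional message.
S = rng.standard_normal((3, 4))
mu = np.array([0.5, -1.0])
params = hypernetwork(S, mu)
y = predict(params, rng.standard_normal((5, 4)))
```

In the meta-learning setting described by the abstract, the hypernetwork's own weights (the stand-ins `W1` and `W2` above) would be optimized across tasks, while the generalization bound applies to the predictor reconstructed from the compression set and message.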