Supervised Learning with General Risk Functionals
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Standard uniform convergence results bound the generalization gap of the expected loss over a hypothesis class. The emergence of risk-sensitive learning requires generalization guarantees for functionals of the loss distribution beyond the expectation. While prior works specialize in uniform convergence of particular functionals, our work provides uniform convergence for a general class of Hölder risk functionals for which closeness in the Cumulative Distribution Function (CDF) entails closeness in risk. We establish the first uniform convergence results for estimating the CDF of the loss distribution, yielding guarantees that hold simultaneously both over all Hölder risk functionals and over all hypotheses. Thus licensed to perform empirical risk minimization, we develop practical gradient-based methods for minimizing distortion risks (a widely studied subset of Hölder risks that subsumes spectral risks, including the mean, conditional value at risk, cumulative prospect theory risks, and others) and provide convergence guarantees. In experiments, we demonstrate the efficacy of our learning procedure, both in settings where uniform convergence results hold and in high-dimensional settings with deep networks.
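
For concreteness, the following is a minimal sketch of the quantity the abstract refers to, using the standard quantile representation of distortion risks; the notation here is assumed, not quoted from the paper. A distortion function g maps [0,1] to [0,1], is nondecreasing, and satisfies g(0) = 0 and g(1) = 1; choosing g(u) = u recovers the mean.

```latex
% Distortion risk of a loss variable Z with CDF F_Z, under distortion g
% (quantile representation; g nondecreasing, g(0) = 0, g(1) = 1):
\rho_g(Z) \;=\; \int_0^1 F_Z^{-1}(u)\, \mathrm{d}g(u)

% Plug-in estimate from n sorted loss samples z_{(1)} \le \dots \le z_{(n)}:
\hat{\rho}_g \;=\; \sum_{i=1}^{n}
  \Bigl( g\!\bigl(\tfrac{i}{n}\bigr) - g\!\bigl(\tfrac{i-1}{n}\bigr) \Bigr)\, z_{(i)}
```

A gradient-based procedure of the kind the abstract describes can be sketched as below; this is an illustration assuming PyTorch, and the helper names (`distortion_weights`, `empirical_distortion_risk`) are chosen for this example rather than taken from the paper. The sort permutation is locally constant in the losses, so autograd passes gradients through the order statistics.

```python
import torch

def distortion_weights(g, n):
    """Per-order-statistic weights w_i = g(i/n) - g((i-1)/n) for a distortion g."""
    grid = torch.arange(n + 1, dtype=torch.float32) / n
    gv = g(grid)
    return gv[1:] - gv[:-1]

def empirical_distortion_risk(losses, g):
    """Plug-in estimate sum_i w_i * z_(i) over the sorted per-sample losses.
    Gradients flow through the sorted values, not the (piecewise-constant) sort."""
    z_sorted, _ = torch.sort(losses)
    w = distortion_weights(g, losses.numel()).to(losses)
    return (w * z_sorted).sum()

# Example distortions (illustrative):
mean_g = lambda u: u                                                # empirical mean
cvar_g = lambda u, a=0.9: torch.clamp((u - a) / (1 - a), min=0.0)   # CVaR at level 0.9

# One gradient step on a model's per-sample losses (sketch):
# losses = per_sample_loss(model(x), y)   # shape (n,), requires grad
# risk = empirical_distortion_risk(losses, cvar_g)
# risk.backward(); optimizer.step()
```

With g(u) = u every sample receives weight 1/n and the estimate reduces to the empirical mean; the clamped g above puts all weight on the worst 10% of losses, i.e. the conditional value at risk at level 0.9.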
DOI: 10.48550/arxiv.2206.13648