Gradient Methods for Stochastic Optimization in Relative Scale
Format: Article
Language: English
Online access: Order full text
Summary: We propose a new concept of a relatively inexact stochastic subgradient and present novel first-order methods that can use such objects to approximately solve convex optimization problems in relative scale. An important example where relatively inexact subgradients naturally arise is given by the Power or Lanczos algorithms for computing an approximate leading eigenvector of a symmetric positive semidefinite matrix. Using these algorithms as subroutines in our methods, we obtain new optimization schemes that can provably solve certain large-scale Semidefinite Programming problems with relative accuracy guarantees using only matrix-vector products.
DOI: 10.48550/arxiv.2301.08352
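For context on the kind of subroutine the summary refers to, here is a minimal Python sketch of the classical Power method for approximating the leading eigenvector of a symmetric positive semidefinite matrix using only matrix-vector products. This illustrates only the textbook iteration, not the paper's relatively inexact subgradient construction; the function name and default parameters are illustrative assumptions.

```python
import numpy as np

def power_method(A, num_iters=200, seed=0):
    """Approximate the leading eigenpair of a symmetric PSD matrix A via
    Power iteration. Illustrative sketch; names and defaults are not
    taken from the paper."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v                      # the only access to A: one matrix-vector product
        v = w / np.linalg.norm(w)      # renormalize to keep the iterate on the unit sphere
    lam = v @ (A @ v)                  # Rayleigh quotient: estimate of the top eigenvalue
    return lam, v

# Usage on a random symmetric PSD test matrix:
B = np.random.default_rng(1).standard_normal((100, 100))
A = B @ B.T                            # A = B B^T is symmetric positive semidefinite
lam, v = power_method(A)
print(lam, np.linalg.norm(A @ v - lam * v))  # small residual indicates a good eigenpair
```

Because each step touches A only through a product A @ v, the same sketch applies when A is available solely as a black-box linear operator, which is the setting the abstract's matrix-vector-product guarantee concerns.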