Adaptive Backtracking For Faster Optimization
Saved in:
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Backtracking line search is foundational in numerical optimization. The basic idea is to adjust the step size of an algorithm by a constant factor until some chosen criterion (e.g. Armijo, Goldstein, Descent Lemma) is satisfied. We propose a new way of adjusting step sizes, replacing the constant factor used in regular backtracking with one that takes into account the degree to which the chosen criterion is violated, without additional computational burden. For convex problems, we prove adaptive backtracking requires fewer adjustments to produce a feasible step size than regular backtracking does for two popular line search criteria: the Armijo condition and the descent lemma. For nonconvex smooth problems, we additionally prove that adaptive backtracking enjoys the same guarantees as regular backtracking. Finally, we perform a variety of experiments on over fifteen real-world datasets, all of which confirm that adaptive backtracking often leads to significantly faster optimization.
DOI: 10.48550/arxiv.2408.13150
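
To make the idea in the abstract concrete, below is a minimal sketch of a violation-aware backtracking step for the Armijo condition. The shrink factor comes from a classical quadratic-interpolation rule, chosen here only to illustrate how the degree of Armijo violation can drive the step-size update; it is not necessarily the exact factor proposed in the paper, and the function names are hypothetical.

```python
import numpy as np

def adaptive_backtracking(f, grad_f, x, t0=1.0, c=1e-4):
    """Illustrative violation-aware backtracking along d = -grad_f(x).

    Instead of shrinking t by a fixed factor beta, each rejected step
    refits a quadratic model of phi(t) = f(x - t*g) and jumps to its
    minimizer (safeguarded), so larger Armijo violations produce
    larger shrink factors.
    """
    g = grad_f(x)
    fx = f(x)
    gg = g @ g            # -gg is the directional derivative phi'(0)
    t = t0
    ft = f(x - t * g)
    # Armijo condition: f(x - t*g) <= f(x) - c * t * ||g||^2.
    while ft > fx - c * t * gg:
        # Minimizer of the quadratic through phi(0)=fx, phi'(0)=-gg,
        # phi(t)=ft; it shrinks t more the larger the violation is.
        t_quad = gg * t**2 / (2.0 * (ft - fx + t * gg))
        # Safeguard: keep the shrink factor in [0.1, 0.9] so the loop
        # terminates, as with a constant beta in regular backtracking.
        t = min(max(t_quad, 0.1 * t), 0.9 * t)
        ft = f(x - t * g)
    return t

# Example: one adaptive line-search step on an ill-conditioned quadratic,
# where the accepted step ends up near 1/L = 0.01.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
print(adaptive_backtracking(f, grad_f, np.array([1.0, 1.0])))
```

Note the design trade-off the safeguard encodes: each rejection still shrinks the step by at least ten percent, which preserves the usual termination argument for backtracking, while a large violation can cut the step much more aggressively in a single adjustment.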