Lazy Lagrangians for Optimistic Learning With Budget Constraints


Detailed Description

Bibliographic Details
Published in: IEEE/ACM Transactions on Networking, 2023-10, Vol. 31 (5), p. 1-15
Main Authors: Anderson, Daron; Iosifidis, George; Leith, Douglas J.
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: We consider the general problem of online convex optimization with time-varying budget constraints in the presence of predictions for the next cost and constraint functions, which arises in a plethora of network resource management problems. A novel saddle-point algorithm is designed by combining a Follow-The-Regularized-Leader (FTRL) iteration with prediction-adaptive dynamic steps. The algorithm achieves \mathcal{O}(T^{(3-\beta)/4}) regret and \mathcal{O}(T^{(1+\beta)/2}) constraint-violation bounds that are tunable via the parameter \beta \in [1/2, 1) and have constant factors that shrink with the quality of the predictions, eventually achieving \mathcal{O}(1) regret for perfect predictions. Our work extends the seminal FTRL framework to this new OCO setting and outperforms the respective state-of-the-art greedy-based solutions, which naturally cannot benefit from predictions, without imposing conditions on the (unknown) quality of predictions, the cost functions, or the geometry of constraints beyond convexity.
ISSN: 1063-6692, 1558-2566
DOI:10.1109/TNET.2022.3222404