Distributed Optimization with Global Constraints Using Noisy Measurements
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | We propose a new distributed optimization algorithm for solving a class of
constrained optimization problems in which (a) the objective function is
separable (i.e., the sum of local objective functions of agents), (b) the
optimization variables of distributed agents, which are subject to nontrivial
local constraints, are coupled by global constraints, and (c) only noisy
observations are available to estimate (the gradients of) local objective
functions. In many practical scenarios, agents may not be willing to share
their optimization variables with others. For this reason, we propose a
distributed algorithm that does not require the agents to share their
optimization variables with each other; instead, each agent maintains a local
estimate of the global constraint functions and shares the estimate only with
its neighbors. These local estimates of constraint functions are updated using
a consensus-type algorithm, while the local optimization variables of each
agent are updated using a first-order method based on noisy estimates of the
gradient. We prove that, when the agents adopt the proposed algorithm, their
optimization variables converge with probability 1 to an optimal point of an
approximated problem based on the penalty method. |
---|---|
DOI: | 10.48550/arxiv.2106.07703 |
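The update scheme described in the abstract, in which each agent tracks the global constraint value via a consensus step and takes penalized noisy gradient steps on its private variable, can be sketched as follows. Every concrete choice here (quadratic local objectives, a single linear coupling constraint `sum_i x_i <= b`, the ring mixing matrix `W`, the penalty weight `RHO`, the step-size schedule) is an illustrative assumption, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4          # number of agents
RHO = 10.0     # penalty weight for the global constraint (assumed)
STEPS = 3000

# Assumed local objectives f_i(x_i) = (x_i - a_i)^2,
# coupled by the global constraint sum_i x_i <= b.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = 6.0

# Doubly stochastic mixing matrix for a 4-agent ring (consensus step).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros(N)     # local optimization variables (never shared)
s = x - b / N       # local estimates of the average constraint value

for t in range(1, STEPS + 1):
    step = 1.0 / (t + 100)                 # diminishing step size
    # Consensus on constraint estimates: agents exchange only s,
    # never their optimization variables x.
    s_new = W @ s
    # Noisy gradient of the local objective (noise models the
    # noisy measurements in the abstract).
    grad_f = 2.0 * (x - a) + rng.normal(0.0, 0.1, size=N)
    # Gradient of the quadratic penalty RHO * max(sum(x) - b, 0)^2,
    # using N * s_new as each agent's estimate of sum(x) - b.
    penalty_grad = 2.0 * RHO * np.maximum(N * s_new, 0.0)
    x_new = x - step * (grad_f + penalty_grad)
    # Dynamic-average tracking: s keeps tracking mean(x) - b/N.
    s = s_new + (x_new - x)
    x = x_new

# The unconstrained optimum has sum(x) = 10 > b = 6, so the penalty
# should pull sum(x) back toward b.
print(round(x.sum(), 2))
```

With this penalty formulation the iterates settle near the minimizer of the penalized problem, whose coupled sum lies slightly above `b` (the penalty is soft, not exact), which matches the abstract's claim of convergence to an optimum of the penalty-based approximation rather than of the original constrained problem.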