Network Learning in Quadratic Games From Best-Response Dynamics
Published in: IEEE/ACM Transactions on Networking, Oct. 2024, Vol. 32, No. 5, pp. 3669-3684
Authors:
Format: Article
Language: English
Keywords:
Abstract: We investigate the capacity of an adversary to learn the underlying interaction network through repeated best-response actions in linear-quadratic games. The adversary strategically perturbs the decisions of a set of action-compromised players and observes the sequential decisions of a set of action-leaked players. The central question is whether such an adversary can fully reconstruct or effectively estimate the underlying interaction structure among the players. First, we establish a series of results that characterize the learnability of the interaction graph from the adversary's perspective by drawing connections between this network learning problem in games and classical system identification theory. Next, accounting for the stability and sparsity constraints inherent in the network interaction structure, we propose a stable and sparse system identification framework for learning the interaction graph from complete observations of player actions. We then present a stable and sparse subspace identification framework for learning the interaction graph when only partially observed player actions are available. Finally, we demonstrate the efficacy of the proposed learning frameworks through numerical examples.
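To make the abstract's pipeline concrete, the sketch below simulates the complete-observation setting under an assumed model: best-response dynamics of a linear-quadratic game take the linear form x_{t+1} = A x_t + b, the adversary injects random perturbations at a few compromised players, and the interaction matrix A is then estimated row by row with an off-the-shelf lasso. This is a minimal illustration, not the paper's stable-and-sparse or subspace formulations; the graph, noise levels, regularization weight, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Assumed model (not taken from the paper): n-player linear-quadratic game whose
# best-response dynamics are x_{t+1} = A x_t + b, with adversarial perturbations
# added to the actions of a small set of compromised players.
rng = np.random.default_rng(0)
n, T = 20, 400
A = (rng.random((n, n)) < 0.15) * rng.normal(0.0, 1.0, (n, n))   # sparse interaction graph
np.fill_diagonal(A, 0.0)
A *= 0.4 / max(1e-9, np.max(np.abs(np.linalg.eigvals(A))))        # keep the dynamics stable
b = rng.normal(0.0, 1.0, n)                                       # standalone marginal payoffs
compromised = [0, 1, 2]                                           # players the adversary can perturb

# Roll out best-response dynamics; the perturbations provide persistent excitation.
X = np.zeros((T + 1, n))
U = rng.normal(0.0, 1.0, (T, len(compromised)))
for t in range(T):
    X[t + 1] = A @ X[t] + b
    X[t + 1, compromised] += U[t]

# Sparse identification from complete action observations: regress each player's
# next action on all current actions; the l1 penalty promotes graph sparsity.
Phi = np.hstack([X[:T], np.ones((T, 1))])     # regressors: current actions plus an intercept for b
A_hat = np.zeros((n, n))
for i in range(n):
    model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10_000)
    model.fit(Phi, X[1:T + 1, i])
    A_hat[i] = model.coef_[:n]

print("relative estimation error:", np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```

The partial-observation case described in the abstract would instead work only with the action-leaked players' trajectories, which is why the paper turns to a subspace identification framework there; that variant is not sketched here.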
ISSN: 1063-6692, 1558-2566
DOI: 10.1109/TNET.2024.3404509