When Optimization Meets Machine Learning: The Case of IRS-Assisted Wireless Networks

Bibliographic Details
Published in: IEEE Network, 2022-03, Vol. 36 (2), p. 190-198
Main Authors: Gong, Shimin; Lin, Jiaye; Ding, Beichen; Niyato, Dusit; Kim, Dong In; Guizani, Mohsen
Format: Article
Language: English
Description
Abstract: Performance optimization of wireless networks is typically complicated by high computational complexity and dynamic channel conditions. As a specific case, the recently introduced intelligent reflecting surface (IRS) can reshape wireless channels by controlling the phase shifts of its scattering elements, namely, passive beamforming. However, due to the large number of scattering elements, the IRS's beamforming optimization becomes intractable. In this article, we focus on machine learning (ML) approaches to complex optimization problems in wireless networks. ML approaches can provide flexibility and robustness against uncertain and dynamic systems. However, practical challenges remain due to slow convergence in offline training or online learning. This motivates us to design a novel optimization-driven ML framework that exploits both the efficiency of model-based optimization and the robustness of model-free ML approaches. The control variables are split into two parts: one part is updated by the outer-loop ML approach, while the other part is solved by the inner-loop optimization. A case study in IRS-assisted wireless networks confirms that the optimization-driven ML framework significantly improves learning efficiency and reward performance compared to conventional model-free ML approaches.
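As a rough illustration of the two-loop split described in the abstract, the following Python sketch pairs a closed-form inner optimizer with a simple model-free outer learner on a toy single-antenna, single-user IRS link with synthetic channels. The channel model, the power-penalized reward, and the names inner_phase_opt and outer_learner are assumptions made here for illustration; the perturbation-based hill climber merely stands in for the RL agent of the actual framework, whose details the abstract does not give.

    # Minimal sketch of the two-loop "optimization-driven ML" idea on a toy
    # single-antenna, single-user IRS link. All names and the reward model
    # are illustrative assumptions, not the authors' algorithm.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 32                                   # number of IRS scattering elements
    h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)    # direct BS-user channel
    h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # IRS-user channel
    g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)    # BS-IRS channel

    def inner_phase_opt(p, lam=0.5):
        # Inner loop (model-based): for this scalar link, the optimal passive
        # beamforming has a closed form -- each element's phase shift aligns
        # its reflected path with the direct path.
        theta = np.angle(h_d) - np.angle(h_r * g)
        gain = np.abs(h_d + np.sum(h_r * g * np.exp(1j * theta)))
        rate = np.log2(1.0 + p * gain ** 2)  # achievable rate, unit noise power
        return rate - lam * p                # reward trades rate against power cost

    def outer_learner(steps=200, p=1.0, sigma=0.2, p_max=10.0):
        # Outer loop (model-free): a perturbation-based hill climber, standing
        # in for an RL agent, updates the remaining control variable (transmit
        # power) and queries the inner optimizer for its reward.
        best = inner_phase_opt(p)
        for _ in range(steps):
            cand = float(np.clip(p + sigma * rng.standard_normal(), 0.0, p_max))
            reward = inner_phase_opt(cand)
            if reward > best:
                p, best = cand, reward
        return p, best

    p_star, r_star = outer_learner()
    print(f"learned transmit power {p_star:.3f}, reward {r_star:.3f}")

The split mirrors the framework's rationale: the part of the problem that admits a tractable model-based solution (here, phase alignment) is solved exactly in the inner loop, so the outer learner only has to explore the low-dimensional remainder, which is what improves learning efficiency.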
ISSN: 0890-8044, 1558-156X
DOI: 10.1109/MNET.211.2100386