Multi-Objective Optimization for UAV-Assisted Wireless Powered IoT Networks Based on Extended DDPG Algorithm

Full Description

Bibliographic Details
Published in: IEEE Transactions on Communications, Sept. 2021, Vol. 69, No. 9, pp. 6361-6374
Main Authors: Yu, Yu; Tang, Jie; Huang, Jiayi; Zhang, Xiuyin; So, Daniel Ka Chun; Wong, Kai-Kit
Format: Article
Language: English
Abstract
This paper studies an unmanned aerial vehicle (UAV)-assisted wireless powered IoT network, in which a rotary-wing UAV adopts a fly-hover-communicate protocol to successively visit the IoT devices with data-upload demands. During the hovering periods, the UAV operates in full-duplex mode to simultaneously collect data from the target device and charge the other devices within its coverage. A practical propulsion power consumption model and a non-linear energy harvesting model are taken into account. We formulate a multi-objective optimization problem that jointly optimizes three objectives over a given mission period: maximizing the sum data rate, maximizing the total harvested energy, and minimizing the UAV's energy consumption. These three objectives partly conflict with one another, and weight parameters are introduced to describe their relative importance. Since the IoT devices continuously gather information from the surrounding physical environment and their data-upload requirements change dynamically, online path planning for the UAV is required. In this paper, we apply a deep reinforcement learning algorithm to achieve online decision-making. An extended deep deterministic policy gradient (DDPG) algorithm is proposed to learn UAV control policies over the multiple objectives. During training, the agent learns to produce optimal policies under given weight settings while collecting data in a timely manner according to requirement priority and avoiding data overflow at the devices. Verification results show that the proposed MODDPG (multi-objective DDPG) algorithm achieves joint optimization of the three objectives and that the resulting policies can be adjusted through the weight parameters assigned to the optimization objectives.
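To make the weighted multi-objective formulation concrete, here is a minimal sketch of how the three objectives might be scalarized; the symbols R (sum data rate), E_h (total harvested energy), E_u (UAV energy consumption) and the weights w_1, w_2, w_3 are illustrative assumptions and need not match the paper's exact notation:

```latex
% Illustrative weighted scalarization of the three objectives
% (symbols are assumptions, not the paper's exact notation):
%   R    : sum data rate over the mission period
%   E_h  : total energy harvested by the IoT devices
%   E_u  : energy consumed by the UAV
\[
  \max \; w_1 R + w_2 E_h - w_3 E_u ,
  \qquad w_1 + w_2 + w_3 = 1 , \quad w_i \ge 0 .
\]
```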
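For readers curious how such weight parameters could enter the learning loop, the following is a minimal, hypothetical Python sketch of a scalarized per-step reward that a DDPG-style agent might optimize; the function name, weight values, and units are assumptions for illustration, not the paper's reward design:

```python
# A minimal sketch of the weighted-reward idea behind a multi-objective
# DDPG agent. All names, weights, and numbers are illustrative
# assumptions, not the paper's exact reward design.

def weighted_reward(rate, harvested, consumed, weights=(0.4, 0.4, 0.2)):
    """Scalarize the three objectives: reward the sum data rate and the
    harvested energy, penalize the UAV's energy consumption."""
    w1, w2, w3 = weights
    return w1 * rate + w2 * harvested - w3 * consumed

# Example: one hovering step in which the UAV collects 5 units of data,
# transfers 2 units of energy, and spends 8 units of propulsion energy.
print(weighted_reward(rate=5.0, harvested=2.0, consumed=8.0))  # ~1.2
```

Retraining with a different weight tuple would steer the learned policy toward whichever objective the operator prioritizes, which is the adjustability the abstract describes.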
ISSN: 0090-6778, 1558-0857
DOI: 10.1109/TCOMM.2021.3089476