Lightweight Automatic Modulation Classification Based on Decentralized Learning


Full description

Bibliographic details
Published in: IEEE Transactions on Cognitive Communications and Networking, 2022-03, Vol. 8 (1), pp. 57-70
Authors: Fu, Xue; Gui, Guan; Wang, Yu; Ohtsuki, Tomoaki; Adebisi, Bamidele; Gacanin, Haris; Adachi, Fumiyuki
Format: Article
Language: English
Online access: Order full text
Abstract: Due to the implementation and performance limitations of the centralized learning automatic modulation classification (CentAMC) method, this paper proposes a decentralized learning AMC (DecentAMC) method using model consolidation and a lightweight design. Specifically, model consolidation is realized by a central device (CD) that averages the models of the edge devices (EDs), while the EDs perform local model training. The lightweight design is realized by a separable convolutional neural network (S-CNN), in which separable convolutional layers replace the standard convolutional layers and most of the fully connected layers are removed. Simulation results show that the proposed method substantially reduces the storage and computational requirements of the EDs as well as the communication overhead, and markedly improves training efficiency. Compared with a standard convolutional neural network (CNN), the space complexity (i.e., model parameters and output feature maps) of the S-CNN is decreased by about 94% and its time complexity (i.e., floating-point operations) by about 96%, while the average correct classification probability degrades by less than 1%. Compared with S-CNN-based CentAMC, and ignoring model weight uploading and downloading, the training efficiency of the proposed method is about N times higher, where N is the number of EDs. When weight uploading and downloading are taken into account, the training efficiency remains high: for example, with 12 EDs the proposed AMC method trains about 4 times faster than S-CNN-based CentAMC on dataset D_1 = {2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, 16QAM} and about 5 times faster on dataset D_2 = {2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, PAM2, PAM4, PAM8, 16QAM}, while the communication overhead is reduced by more than 35%.
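The two mechanisms the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `model_average` shows FedAvg-style averaging of ED weights at the CD, and the two parameter-count helpers show why a depthwise-separable convolution is so much smaller than a standard one (the paper's ~94% overall space reduction additionally comes from removing most fully connected layers).

```python
import numpy as np

def model_average(edge_weights):
    """Model consolidation sketch: element-wise mean of each layer's
    weights across all edge devices (EDs), computed at the central device."""
    n_layers = len(edge_weights[0])
    return [np.mean([w[i] for w in edge_weights], axis=0)
            for i in range(n_layers)]

def conv_params(c_in, c_out, k):
    """Parameter count of a standard 2-D convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise (c_in * k * k) plus pointwise (c_in * c_out) parameters."""
    return c_in * k * k + c_in * c_out

# Toy example: 3 EDs, each holding a 2-layer model with identical shapes.
rng = np.random.default_rng(0)
eds = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
avg = model_average(eds)

# Separable convolution shrinks the layer by roughly 1/c_out + 1/k^2.
std = conv_params(64, 64, 3)       # 36864 parameters
sep = separable_conv_params(64, 64, 3)  # 4672 parameters
print(f"per-layer reduction: {1 - sep / std:.1%}")
```

Replacing every standard convolution this way, and pruning the dense head, is what lets the S-CNN fit on resource-constrained EDs while the averaging step keeps the per-ED models consistent.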
ISSN: 2332-7731
DOI: 10.1109/TCCN.2021.3089178