New classes inference, few‐shot learning and continual learning for radar signal recognition


Bibliographic details
Published in: IET radar, sonar & navigation, 2022-10, Vol.16 (10), p.1641-1655
Authors: Luo, Jiaji, Si, Weijian, Deng, Zhian
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Automatic radar modulation recognition plays a significant role in both civilian and military applications. With the rapid development of deep learning, convolutional neural networks have achieved notable success in radar signal recognition. However, convolutional neural networks usually recognise only the classes they were trained on, and must be retrained whenever the dataset changes. In practical radar signal recognition applications, the model frequently has to predict new radar signals, and the training set keeps growing over time. Few-shot learning and rapid training on dynamic datasets therefore become crucial. In this study, a lifelong learning system based on imprint few-shot learning and Net2Net knowledge transfer is proposed for radar signal recognition. The proposed algorithm adapts to continual changes of the dataset and can achieve new-class inference, few-shot learning, and knowledge transfer. The model is trained on a dataset containing 8 types of radar signals and achieves high recognition accuracy on a test dataset containing 12 types, reaching 91.8% at −2 dB. In addition, Net2Net knowledge transfer improves training efficiency on new datasets by avoiding training from scratch.
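To make the weight-imprinting idea mentioned in the abstract concrete, the sketch below shows the generic technique (in the style of Qi et al.'s weight imprinting), not the authors' exact implementation: the classifier weight for a previously unseen class is set to the L2-normalized mean embedding of a few labelled examples, so the new class can be scored by cosine similarity without retraining. All function names, shapes, and data here are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def imprint_new_class(weights, support_embeddings):
    """Append one imprinted class template to a cosine classifier.

    weights: (num_classes, dim) array of L2-normalized class templates.
    support_embeddings: (k, dim) embeddings of k few-shot examples.
    Returns a (num_classes + 1, dim) weight matrix.
    """
    prototype = l2_normalize(support_embeddings.mean(axis=0))
    return np.vstack([weights, prototype])

def classify(weights, embedding):
    """Cosine-similarity classification: dot product of unit vectors."""
    scores = weights @ l2_normalize(embedding)
    return int(np.argmax(scores))

# Toy usage: 2 existing classes in a 4-D embedding space.
rng = np.random.default_rng(0)
W = l2_normalize(rng.normal(size=(2, 4)))
# Five few-shot embeddings of a new class, clustered near [3, 0, 0, 0].
support = rng.normal(size=(5, 4)) * 0.1 + np.array([3.0, 0.0, 0.0, 0.0])
W = imprint_new_class(W, support)
query = np.array([2.5, 0.1, -0.1, 0.0])
print(classify(W, query))  # the imprinted class has index 2
```

The key design point is that the classifier compares unit-length vectors, so an imprinted prototype is on the same scale as the trained weight rows and can be mixed with them immediately.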
ISSN:1751-8784
1751-8792
DOI:10.1049/rsn2.12286