LPI radar waveform recognition based on the semi-supervised model All Mean Teacher
Published in: Digital Signal Processing, 2024-08, Vol. 151, p. 104568, Article 104568
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: Low probability of intercept (LPI) radar signal identification plays an important role in electronic warfare, but most existing algorithms assume sufficient training samples, ignoring the scarcity of labeled data in the actual electromagnetic environment. To address this problem, this paper proposes All Mean Teacher (AMT), a semi-supervised learning model based on Mean Teacher (MT). First, the LPI radar signal is transformed into time-frequency images (TFIs) using the Choi-Williams distribution, and Random Erasing is applied to the TFIs, which improves the generalization ability of the model. Then a Multi-headed Self-Attention Network (MSA-Net) is designed to extract features and is combined with AMT to realize automatic waveform recognition of radar signals. MSA-Net facilitates feature-information propagation by computing a contrast cost on TFIs between the student and teacher networks. This addresses the difficulty of training on TFIs with small amounts of labeled data, improving recognition accuracy in semi-supervised learning scenarios. Experimental results show that the average recognition accuracy of the proposed method reaches 85.7% at a signal-to-noise ratio of -8 dB.
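The Random Erasing augmentation mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name `random_erase`, the patch-size parameter `erase_frac`, and erasing with zeros are illustrative assumptions.

```python
import numpy as np

def random_erase(tfi, erase_frac=0.2, rng=None):
    """Zero out a random rectangular patch of a time-frequency image (TFI).

    `erase_frac` (fraction of each side covered by the patch) is an
    illustrative choice, not a setting taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = tfi.copy()
    h, w = out.shape
    eh = max(1, int(h * erase_frac))  # patch height
    ew = max(1, int(w * erase_frac))  # patch width
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out[top:top + eh, left:left + ew] = 0.0  # erase the patch
    return out

# Example: erase a quarter-side patch from a 32x32 all-ones "TFI".
tfi = np.ones((32, 32))
augmented = random_erase(tfi, erase_frac=0.25, rng=np.random.default_rng(0))
```

The erased region acts as input noise for the student network, which is what makes the consistency training described in the abstract meaningful.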
Highlights:
• Designing a novel SSL model for the automatic recognition of LPI radar signals.
• Introducing an improved multi-head attention mechanism that enables the model to capture global information more effectively.
• Incorporating an extra contrast cost into the overall loss function.
• Introducing noise to the samples through the data-augmentation technique of random erasing for better consistency training.
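The Mean Teacher mechanism underlying AMT can be sketched in two lines of NumPy: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency cost penalizes disagreement between their predictions on perturbed views of the same input. The smoothing value `alpha` and the MSE form of the cost are standard MT choices assumed here, not values from the paper.

```python
import numpy as np

def ema_update(teacher_weights, student_weights, alpha=0.99):
    """One Mean Teacher step: teacher becomes an EMA of the student.

    `alpha=0.99` is a typical smoothing coefficient, assumed for
    illustration rather than taken from the paper.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

def consistency_loss(p_student, p_teacher):
    """Mean squared error between student and teacher predictions on
    differently perturbed copies of the same input -- the standard
    MT consistency cost (the paper adds a contrast cost on top)."""
    diff = np.asarray(p_student) - np.asarray(p_teacher)
    return float(np.mean(diff ** 2))
```

Because the teacher is never updated by gradients, it provides a smoothed target; the consistency (and, in AMT, contrast) costs let unlabeled TFIs contribute to training.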
ISSN: 1051-2004, 1095-4333
DOI: 10.1016/j.dsp.2024.104568