CTRNet: An Automatic Modulation Recognition Based on Transformer-CNN Neural Network


Detailed Description

Bibliographic Details
Published in: Electronics (Basel), 2024-09, Vol. 13 (17), p. 3408
Authors: Zhang, Wenna; Xue, Kailiang; Yao, Aiqin; Sun, Yunqiang
Format: Article
Language: English
Online Access: Full text
description Deep learning (DL) has brought new perspectives and methods to automatic modulation recognition (AMR), enabling AMR systems to operate more efficiently and reliably in modern wireless communication environments through its powerful feature-learning and complex pattern-recognition capabilities. However, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the architectures commonly used for sequence recognition tasks, each face a key challenge: CNNs make ineffective use of global information, while RNNs process slowly because of their sequential operations. To address these issues, this paper introduces CTRNet, a novel automatic modulation recognition network that combines a CNN with a Transformer. The combination leverages the Transformer's ability to capture long-distance dependencies across the whole sequence and its strengths in sequence modeling, together with the CNN's ability to extract features from local regions of the signal. During data preprocessing, the original IQ-modulated signals undergo sliding-window processing; by choosing appropriate window sizes and strides, multiple subsequences are formed, enabling the network to handle complex modulation patterns effectively. In the embedding module, token vectors are designed to integrate information from the multiple samples within each window, strengthening the model's understanding and modeling of global information. In the feedforward network, a Bilinear layer captures higher-order relationships between input features, further improving the model's ability to represent complex patterns. Experiments on the public RML2016.10A dataset show that, compared with existing algorithms, the proposed algorithm not only offers significant advantages in parameter efficiency but also achieves higher recognition accuracy under various signal-to-noise ratio (SNR) conditions.
In particular, it performs well on accuracy, precision, recall, and F1-score, with clearer classification of higher-order modulations and a notable overall accuracy improvement.
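The sliding-window preprocessing and the Bilinear feedforward layer that the abstract describes can be sketched as follows. This is a minimal NumPy illustration with assumed hyperparameters (window size 32, stride 16, 16-dimensional bilinear inputs, 8 outputs), not the configuration reported in the paper:

```python
import numpy as np

# Illustrative reconstruction of two components from the abstract;
# window size, stride, and feature dimensions are assumed values.

def sliding_window(iq, window=32, stride=16):
    """Cut a (2, L) IQ array into overlapping (2, window) subsequences."""
    _, length = iq.shape
    starts = range(0, length - window + 1, stride)
    return np.stack([iq[:, s:s + window] for s in starts])

def bilinear(x, y, W, b):
    """Bilinear map out[o] = x @ W[o] @ y + b[o], which models
    second-order (pairwise) interactions between input features."""
    return np.einsum('i,oij,j->o', x, W, y) + b

# One RML2016.10A-style sample: 2 channels (I and Q) x 128 time steps.
signal = np.random.randn(2, 128)
windows = sliding_window(signal)
print(windows.shape)  # (7, 2, 32): (128 - 32) // 16 + 1 = 7 windows

# Bilinear layer with assumed sizes: 16-dim inputs, 8 output features.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
W = rng.standard_normal((8, 16, 16))
b = rng.standard_normal(8)
print(bilinear(x, x, W, b).shape)  # (8,)
```

For batched tensors inside a trainable network, `torch.nn.Bilinear(in1_features, in2_features, out_features)` computes the same map.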
DOI: 10.3390/electronics13173408
Publisher: MDPI AG, Basel
Rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
EISSN: 2079-9292
Source: Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; MDPI - Multidisciplinary Digital Publishing Institute
Subjects:
Accuracy
Algorithms
Artificial neural networks
Automatic modulation recognition
Classification
Deep learning
Design
Efficiency
Feature extraction
Feature recognition
Machine learning
Modelling
Natural language processing
Neural networks
Parameter estimation
Pattern recognition
Recurrent neural networks
Signal to noise ratio
Task complexity
Wireless communications
Wireless networks