An Efficient and Lightweight Model for Automatic Modulation Classification: A Hybrid Feature Extraction Network Combined with Attention Mechanism


Bibliographic Details
Published in: Electronics (Basel), 2023-09, Vol. 12 (17), p. 3661
Main authors: Ma, Zhao; Fang, Shengliang; Fan, Youchen; Li, Gaoxing; Hu, Haojie
Format: Article
Language: English
Online access: Full text
Abstract: This paper proposes a hybrid feature extraction convolutional neural network combined with a channel attention mechanism (HFECNET-CA) for automatic modulation recognition (AMR). First, a hybrid feature extraction backbone network is designed: three branches apply differently shaped convolution kernels to the raw I/Q sequence, learning the spatiotemporal features of the signal from different "perspectives", and the output feature maps of the three branches are fused along the channel dimension to form a multi-domain mixed feature map; deeper features are then extracted by stacking further convolution layers along the time domain. Second, a plug-and-play channel attention module is constructed that can be embedded into any feature extraction layer; it assigns higher weights to the more informative channels of the output feature map, recalibrating the features. Experiments on the RadioML2016.10A dataset show that HFECNET-CA achieves higher recognition accuracy with fewer trainable parameters than the compared networks: averaged over the 20 SNR levels, recognition accuracy reaches 63.92%, and the highest recognition accuracy reaches 93.64%.
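The record does not include the authors' layer configuration, but the architecture outlined in the abstract (three differently shaped convolution branches over the I/Q frame, channel fusion, time-domain convolutions, and an embeddable channel attention block) can be illustrated roughly as follows. This is a minimal sketch, not the published HFECNET-CA: the kernel shapes, channel widths, reduction ratio, and the squeeze-and-excitation style of the attention module are assumptions; the (1, 2, 128) input and 11 output classes follow the usual RadioML2016.10A setup.

```python
# Minimal, illustrative PyTorch sketch of the ideas in the abstract -- NOT the
# authors' implementation. Kernel shapes, channel widths, the reduction ratio,
# and the SE-style form of the attention block are assumptions; input shape
# (1, 2, 128) and 11 classes follow the RadioML2016.10A convention.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Plug-and-play channel attention: squeeze each channel to one scalar,
    learn per-channel weights, and rescale the incoming feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # global average pool per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                        # channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # feature recalibration


class HybridFeatureExtractor(nn.Module):
    """Three parallel branches with differently shaped kernels over the raw
    I/Q frame; their outputs are fused along the channel dimension."""

    def __init__(self, out_per_branch: int = 16):
        super().__init__()

        def branch(kernel, pad):
            return nn.Sequential(
                nn.Conv2d(1, out_per_branch, kernel_size=kernel, padding=pad),
                nn.BatchNorm2d(out_per_branch),
                nn.ReLU(inplace=True),
            )

        # Hypothetical kernel shapes: each spans both I and Q rows but covers a
        # different temporal scale (the paper's exact shapes are not given here).
        self.b1 = branch((2, 3), (0, 1))
        self.b2 = branch((2, 5), (0, 2))
        self.b3 = branch((2, 7), (0, 3))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel fusion of the three "perspectives" -> multi-domain mixed feature map
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)


class HFECNetCASketch(nn.Module):
    """Hybrid backbone + time-domain convolutions with embedded channel attention."""

    def __init__(self, num_classes: int = 11, width: int = 16):
        super().__init__()
        fused = 3 * width
        self.hybrid = HybridFeatureExtractor(out_per_branch=width)
        self.temporal = nn.Sequential(
            nn.Conv2d(fused, fused, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(fused),
            nn.ReLU(inplace=True),
            ChannelAttention(fused),             # attention embedded after a conv layer
            nn.Conv2d(fused, fused, kernel_size=(1, 3), padding=(0, 1)),
            nn.BatchNorm2d(fused),
            nn.ReLU(inplace=True),
            ChannelAttention(fused),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(fused, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 2, 128) raw I/Q frames
        return self.head(self.temporal(self.hybrid(x)))


if __name__ == "__main__":
    logits = HFECNetCASketch()(torch.randn(4, 1, 2, 128))
    print(logits.shape)  # torch.Size([4, 11])
```

Because the attention block only needs to know the channel count of the feature map it receives, it can be dropped in after any convolution layer, which is what makes it "plug-and-play" in the sense described by the abstract.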
DOI: 10.3390/electronics12173661
Publisher: MDPI AG, Basel
Rights: 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access article distributed under the terms of the Creative Commons Attribution (CC BY) license.
ISSN / EISSN: 2079-9292
Source: MDPI - Multidisciplinary Digital Publishing Institute; EZB Electronic Journals Library
Subjects:
Accuracy
Artificial neural networks
Automatic classification
Automatic modulation recognition
Classification
Communication
Computer networks
Decision making
Deep learning
Design
Feature extraction
Feature maps
Kernels
Machine learning
Modulation (Electronics)
Neural networks
Telecommunication systems
Wavelet transforms