Lightweight Dual-Stream SAR-ATR Framework Based on an Attention Mechanism-Guided Heterogeneous Graph Network
Current synthetic aperture radar automatic target recognition (SAR-ATR) methods still struggle with overfitting due to small amounts of training data, as well as black-box opacity and high computational requirements. Unmanned aerial vehicles, as the mainstream means of acquiring SAR...
Saved in:
Published in: | IEEE journal of selected topics in applied earth observations and remote sensing 2025, Vol.18, p.537-556 |
---|---|
Main authors: | Xiong, Xuying, Zhang, Xinyu, Jiang, Weidong, Liu, Tianpeng, Liu, Yongxiang, Liu, Li |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 556 |
---|---|
container_issue | |
container_start_page | 537 |
container_title | IEEE journal of selected topics in applied earth observations and remote sensing |
container_volume | 18 |
creator | Xiong, Xuying Zhang, Xinyu Jiang, Weidong Liu, Tianpeng Liu, Yongxiang Liu, Li |
description | Current synthetic aperture radar automatic target recognition (SAR-ATR) methods still struggle with overfitting due to small amounts of training data, as well as black-box opacity and high computational requirements. Unmanned aerial vehicles, as the mainstream means of acquiring SAR data, place higher requirements on ATR algorithms due to their flexible maneuvering characteristics. This article starts by studying the electromagnetic (EM) backscattering mechanism and the physical properties of SAR. We construct a heterogeneous graph for the first time to fully exploit both the EM scattering information of the target components and their interactions. Moreover, a multilevel multihead attention mechanism is introduced into the graph net to learn features at various levels of the topological structure. Additionally, we include a convolutional neural network-based feature extraction net to replenish intuitive visual features. These two nets form the lightweight dual-stream framework (LDSF). LDSF uses a feature fusion subnetwork to adaptively fuse the dual-stream features to maximize the final classification performance. The experiments use two more rigorous evaluation protocols on MSTAR and OpenSARShip, namely, once-for-all and less-for-more, which can rigorously assess the efficacy and generalization capability of the algorithms. The superiority of LDSF is verified. |
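The adaptive fusion step described in the abstract (a subnetwork that weighs the graph-stream features against the CNN-stream features before classification) can be sketched as a learned softmax gate over the two streams. This is an illustrative reconstruction, not the authors' published code: the function `fuse_streams`, the gate weight vector `w`, and the feature dimension are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_streams(f_graph, f_cnn, w):
    """Adaptively fuse a graph-stream and a CNN-stream feature vector.

    Each stream is scored against a learned weight vector `w`; the scores
    are normalized with softmax into a gate that sums to 1, and the fused
    feature is the gate-weighted sum of the two streams (a convex
    combination, so fused values stay between the two inputs).
    """
    stacked = np.stack([f_graph, f_cnn])   # shape (2, d)
    scores = stacked @ w                   # one scalar score per stream
    gate = softmax(scores)                 # (2,) gate, sums to 1
    return gate[0] * f_graph + gate[1] * f_cnn
```

With zero gate weights both streams score equally and the fusion reduces to a plain average; training would instead learn `w` (and in the paper, a full subnetwork) so the gate shifts toward whichever stream is more discriminative for the input.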
doi_str_mv | 10.1109/JSTARS.2024.3498327 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1939-1404 |
ispartof | IEEE journal of selected topics in applied earth observations and remote sensing, 2025, Vol.18, p.537-556 |
issn | 1939-1404 2151-1535 |
language | eng |
recordid | cdi_crossref_primary_10_1109_JSTARS_2024_3498327 |
source | DOAJ Directory of Open Access Journals; EZB Electronic Journals Library |
subjects | Algorithms Artificial neural networks Attention Attention mechanism Automatic target recognition Automatic vehicle identification systems Autonomous aerial vehicles Computational modeling Convolutional neural networks Data acquisition electromagnetic scattering mechanism Feature extraction feature fusion heterogeneous graph Lightweight Neural networks Opacity Physical properties Radar imaging Radar remote sensing Research methods Rivers SAR (radar) Scattering Semantics Synthetic aperture radar synthetic aperture radar–automatic target recognition (SAR–ATR) Unmanned aerial vehicles Visual perception Visualization Weight reduction |
title | Lightweight Dual-Stream SAR-ATR Framework Based on an Attention Mechanism-Guided Heterogeneous Graph Network |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T21%3A58%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Lightweight%20Dual-Stream%20SAR-ATR%20Framework%20Based%20on%20an%20Attention%20Mechanism-Guided%20Heterogeneous%20Graph%20Network&rft.jtitle=IEEE%20journal%20of%20selected%20topics%20in%20applied%20earth%20observations%20and%20remote%20sensing&rft.au=Xiong,%20Xuying&rft.date=2025&rft.volume=18&rft.spage=537&rft.epage=556&rft.pages=537-556&rft.issn=1939-1404&rft.eissn=2151-1535&rft.coden=IJSTHZ&rft_id=info:doi/10.1109/JSTARS.2024.3498327&rft_dat=%3Cproquest_cross%3E3140636827%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3140636827&rft_id=info:pmid/&rft_ieee_id=10753051&rft_doaj_id=oai_doaj_org_article_9266c4a667734553933e0b8f7756f65b&rfr_iscdi=true |