Robust ISAR Target Recognition Based on ADRISAR-Net

Due to inherent, unknown image deformation between training and test samples, the performance of deep convolutional neural networks (CNNs) degrades in Inverse Synthetic Aperture Radar (ISAR) automatic target recognition. Moreover, a traditional CNN captures only local spatial information because of its small receptive fields and thus neglects the global information that is useful for recognition. To tackle these issues, this article proposes the attention-augmented deformation-robust ISAR image recognition network, dubbed ADRISAR-Net. The model adopts an inverse compositional spatial transformer for automatic image deformation adjustment and performs joint local and global feature extraction with an attention-augmented CNN. Finally, a softmax classifier outputs the recognition result. The proposed ADRISAR-Net is end-to-end trainable and achieves higher recognition accuracy on the four-satellite and three-airplane ISAR image data sets generated by electromagnetic computing.
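The abstract describes a three-stage pipeline: a spatial-transformer module that undoes unknown image deformation, an attention-augmented CNN that fuses local convolutional features with global self-attention features, and a softmax classifier. The sketch below illustrates that pipeline in PyTorch; it is not the authors' implementation. The module names (AffineWarp, AttnAugConv, ADRISARNetSketch), the plain affine warp standing in for the paper's inverse compositional spatial transformer, the layer widths, the 32x32 input size, and the four-class head are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AffineWarp(nn.Module):
    # Predicts an affine correction and resamples the image; a simplified
    # stand-in for the paper's inverse compositional spatial transformer.
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6),
        )
        # Start from the identity transform so training begins with "no warp".
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class AttnAugConv(nn.Module):
    # Concatenates local convolutional features with global self-attention features.
    def __init__(self, in_ch, conv_ch, attn_ch, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.proj = nn.Conv2d(in_ch, attn_ch, 1)
        self.attn = nn.MultiheadAttention(attn_ch, heads, batch_first=True)

    def forward(self, x):
        b, _, h, w = x.shape
        local = self.conv(x)                               # small receptive field
        tokens = self.proj(x).flatten(2).transpose(1, 2)   # (B, H*W, attn_ch)
        global_, _ = self.attn(tokens, tokens, tokens)     # global context over all positions
        global_ = global_.transpose(1, 2).reshape(b, -1, h, w)
        return torch.cat([local, global_], dim=1)


class ADRISARNetSketch(nn.Module):
    def __init__(self, num_classes=4):                     # e.g., four satellite classes
        super().__init__()
        self.warp = AffineWarp()
        self.features = nn.Sequential(
            AttnAugConv(1, 24, 8), nn.ReLU(), nn.MaxPool2d(2),
            AttnAugConv(32, 48, 16), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.warp(x)                                    # deformation adjustment
        x = self.features(x).flatten(1)                     # joint local/global features
        return F.softmax(self.head(x), dim=-1)              # class probabilities


# Example: a batch of two 32x32 single-channel ISAR images (sizes are illustrative).
probs = ADRISARNetSketch()(torch.randn(2, 1, 32, 32))
print(probs.shape)  # torch.Size([2, 4])

Initializing the localization layer to the identity transform is the usual spatial-transformer convention: training starts from an unwarped image and learns the deformation correction jointly with the classifier, consistent with the end-to-end trainability claimed in the abstract.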

Bibliographic Details
Published in: IEEE Transactions on Aerospace and Electronic Systems, 2022-12, Vol. 58 (6), pp. 5494-5505
Main Authors: Zhou, Xuening; Bai, Xueru; Wang, Li; Zhou, Feng
Format: Article
Language: English
Subjects: Artificial neural networks; Attention; Automatic target recognition (ATR); Convolution; Convolutional neural network (CNN); Deformable models; Deformation; Feature extraction; Generators; Image deformation; Inverse synthetic aperture radar (ISAR); Object recognition; Robustness; Satellite imagery; Spatial data; Strain; Target recognition
Online Access: Order full text
DOI: 10.1109/TAES.2022.3174826
ISSN: 0018-9251
EISSN: 1557-9603
Source: IEEE Electronic Library (IEL)
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-15T06%3A38%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robust%20ISAR%20Target%20Recognition%20Based%20on%20ADRISAR-Net&rft.jtitle=IEEE%20transactions%20on%20aerospace%20and%20electronic%20systems&rft.au=Zhou,%20Xuening&rft.date=2022-12-01&rft.volume=58&rft.issue=6&rft.spage=5494&rft.epage=5505&rft.pages=5494-5505&rft.issn=0018-9251&rft.eissn=1557-9603&rft.coden=IEARAX&rft_id=info:doi/10.1109/TAES.2022.3174826&rft_dat=%3Cproquest_RIE%3E2747611322%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2747611322&rft_id=info:pmid/&rft_ieee_id=9774276&rfr_iscdi=true