Convolutional Feature Aggregation Network With Self-Supervised Learning and Decision Fusion for SAR Target Recognition

Convolutional neural networks (CNNs) have been successfully employed for synthetic aperture radar automatic target recognition (SAR-ATR). However, a small number of labeled synthetic aperture radar (SAR) images cannot train a CNN model with strong generalization. In practice, the annotation of SAR images is often difficult and time-consuming, so we can usually collect only a few labeled and massive unlabeled SAR images.

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2024, Vol. 73, p. 1-14
Main authors: Huang, Linqing; Liu, Gongshen
Format: Article
Language: eng
Subjects:
Online access: Order full text
container_end_page 14
container_issue
container_start_page 1
container_title IEEE transactions on instrumentation and measurement
container_volume 73
creator Huang, Linqing
Liu, Gongshen
description Convolutional neural networks (CNNs) have been successfully employed for synthetic aperture radar automatic target recognition (SAR-ATR). However, a small number of labeled synthetic aperture radar (SAR) images cannot train a CNN model with strong generalization. In practice, the annotation of SAR images is often difficult and time-consuming, so we can usually collect only a few labeled and massive unlabeled SAR images. Here, we propose a convolutional feature aggregation network (CFANet) with self-supervised learning and decision fusion for SAR-ATR with few labeled and massive unlabeled data. The major contributions of CFANet are threefold. First, we propose to concatenate feature maps (FMs) of different convolutional layers to extract more discriminative features. Second, the massive unlabeled SAR images with self-supervised pseudolabels are employed to pretrain CFANet, and then the few labeled SAR images are used to fine-tune the model. In this way, the information in both labeled and unlabeled SAR images is exploited for the downstream ATR task. Third, to effectively extract the information in different layers, the first-order and second-order statistical features of different layers are also used to train two extra classifiers. Then, for a query SAR target image, we obtain three soft classification results, yielded by the softmax layer of CFANet and the two extra classifiers. These soft classification results are combined by a weighted arithmetic average (WAA) rule whose weights are learned by minimizing the mean squared error (MSE) between the fusion results and the ground truth on labeled SAR target images. The developed CFANet model was tested on the MSTAR and FuSARship datasets comprising about 5000 images. The experimental results demonstrate that CFANet usually achieves the highest classification accuracy compared with a variety of related SAR-ATR methods.
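The description above names two concrete mechanisms: concatenating feature maps from several convolutional layers before classification, and fusing three soft classification outputs with a weighted arithmetic average (WAA) whose weights are learned by minimizing the MSE against the ground truth. The following PyTorch sketch illustrates both ideas under stated assumptions; the layer sizes, class names, the softmax parameterization of the fusion weights, and the optimizer settings are illustrative choices of this sketch, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): multi-layer feature aggregation and
# WAA decision fusion with MSE-learned weights, as outlined in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CFANetSketch(nn.Module):
    """Toy CNN that concatenates globally pooled feature maps from three conv blocks."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Classifier acts on the concatenation of pooled features from all
        # three blocks (16 + 32 + 64 channels), i.e., the feature aggregation step.
        self.fc = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in (f1, f2, f3)]
        return self.fc(torch.cat(pooled, dim=1))  # class logits


def learn_waa_weights(soft_outputs: list[torch.Tensor], labels: torch.Tensor,
                      steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Learn fusion weights for K classifiers by minimizing the MSE between the
    weighted average of their soft outputs (each of shape (N, C)) and one-hot labels.
    The softmax reparameterization keeps the weights positive and summing to 1;
    this is an assumption of the sketch, not necessarily the paper's scheme."""
    num_classes = soft_outputs[0].shape[1]
    one_hot = F.one_hot(labels, num_classes).float()
    logits_w = torch.zeros(len(soft_outputs), requires_grad=True)
    opt = torch.optim.Adam([logits_w], lr=lr)
    stacked = torch.stack(soft_outputs).detach()     # (K, N, C), treated as fixed inputs
    for _ in range(steps):
        w = torch.softmax(logits_w, dim=0)           # fusion weights
        fused = (w[:, None, None] * stacked).sum(dim=0)
        loss = F.mse_loss(fused, one_hot)            # MSE against ground truth
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits_w, dim=0).detach()
```

In use, one would collect the soft outputs of CFANet and the two statistical-feature classifiers on the labeled training images, pass them to `learn_waa_weights`, and reuse the returned weights to fuse the three outputs for each query image at test time.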
doi_str_mv 10.1109/TIM.2024.3443349
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 0018-9456
ispartof IEEE transactions on instrumentation and measurement, 2024, Vol.73, p.1-14
issn 0018-9456
1557-9662
language eng
recordid cdi_ieee_primary_10636296
source IEEE Electronic Library Online
subjects Annotations
Artificial neural networks
Automatic target recognition
Classification
Convolutional neural networks
convolutional neural networks (CNNs)
Data models
feature aggregation
Feature extraction
Feature maps
Feature recognition
few labeled SAR images
first- and second-order statistical features
Ground truth
Image recognition
Machine learning
Radar imaging
Radar polarimetry
Self-supervised learning
Synthetic aperture radar
synthetic aperture radar (SAR)
Target recognition
title Convolutional Feature Aggregation Network With Self-Supervised Learning and Decision Fusion for SAR Target Recognition
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T05%3A34%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Convolutional%20Feature%20Aggregation%20Network%20With%20Self-Supervised%20Learning%20and%20Decision%20Fusion%20for%20SAR%20Target%20Recognition&rft.jtitle=IEEE%20transactions%20on%20instrumentation%20and%20measurement&rft.au=Huang,%20Linqing&rft.date=2024&rft.volume=73&rft.spage=1&rft.epage=14&rft.pages=1-14&rft.issn=0018-9456&rft.eissn=1557-9662&rft.coden=IEIMAO&rft_id=info:doi/10.1109/TIM.2024.3443349&rft_dat=%3Cproquest_RIE%3E3098882228%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3098882228&rft_id=info:pmid/&rft_ieee_id=10636296&rfr_iscdi=true