SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification
Saved in:
Published in: | arXiv.org 2024-02 |
---|---|
Main authors: | Alkhatib, Mohammed Q; Zitouni, M Sami; Al-Saad, Mina; Nour Aburaed; Al-Ahmad, Hussain |
Format: | Article |
Language: | eng |
Subjects: | Airborne radar; Artificial neural networks; Datasets; Feature extraction; Image classification; Land cover; Machine learning; Radar imaging; Synthetic aperture radar |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Alkhatib, Mohammed Q; Zitouni, M Sami; Al-Saad, Mina; Nour Aburaed; Al-Ahmad, Hussain |
description | Polarimetric synthetic aperture radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to consider local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the airborne synthetic aperture radar (AIRSAR) datasets of Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset. Analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, revealing a promising overall accuracy of 96.01% even with only a 1% sampling ratio. |
format | Article |
fullrecord | <record><control><sourceid>proquest</sourceid><recordid>TN_cdi_proquest_journals_2932594069</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2932594069</sourcerecordid><originalsourceid>FETCH-proquest_journals_29325940693</originalsourceid><addsrcrecordid>eNqNys0KgkAUQOEhCJLyHS60FqY7atkutKE2FdlehhhLmxybH3z9XPQArc7ifBMSIGOraBMjzkhobUspxXSNScICci4LjifptlA-hVJ6AKehkLIHLoXzRgL3ttEdjGbQ5gW1NnDRqtxd4fgWDwm5EtY2dXMXbnQLMq2FsjL8dU6WfH_LD1Fv9MdL66pWe9ONq8KMYZLFNM3Yf-oLPio9Mg</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2932594069</pqid></control><display><type>article</type><title>SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification</title><source>Free E-Journals</source><creator>Alkhatib, Mohammed Q ; Zitouni, M Sami ; Al-Saad, Mina ; Nour Aburaed ; Al-Ahmad, Hussain</creator><creatorcontrib>Alkhatib, Mohammed Q ; Zitouni, M Sami ; Al-Saad, Mina ; Nour Aburaed ; Al-Ahmad, Hussain</creatorcontrib><description>Polarimetric synthetic aperture radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to consider local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the airborne synthetic aperture radar (AIRSAR) datasets of Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset. Analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, revealing a promising overall accuracy of 96.01% even with only a 1% sampling ratio.</description><identifier>EISSN: 2331-8422</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Airborne radar ; Artificial neural networks ; Datasets ; Feature extraction ; Image classification ; Land cover ; Machine learning ; Radar imaging ; Synthetic aperture radar</subject><ispartof>arXiv.org, 2024-02</ispartof><rights>2024. This work is published under http://creativecommons.org/licenses/by-nc-sa/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>776,780</link.rule.ids></links><search><creatorcontrib>Alkhatib, Mohammed Q</creatorcontrib><creatorcontrib>Zitouni, M Sami</creatorcontrib><creatorcontrib>Al-Saad, Mina</creatorcontrib><creatorcontrib>Nour Aburaed</creatorcontrib><creatorcontrib>Al-Ahmad, Hussain</creatorcontrib><title>SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification</title><title>arXiv.org</title><description>Polarimetric synthetic aperture radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to consider local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the airborne synthetic aperture radar (AIRSAR) datasets of Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset. Analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, revealing a promising overall accuracy of 96.01% even with only a 1% sampling ratio.</description><subject>Airborne radar</subject><subject>Artificial neural networks</subject><subject>Datasets</subject><subject>Feature extraction</subject><subject>Image classification</subject><subject>Land cover</subject><subject>Machine learning</subject><subject>Radar imaging</subject><subject>Synthetic aperture radar</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>ABUWG</sourceid><sourceid>AFKRA</sourceid><sourceid>AZQEC</sourceid><sourceid>BENPR</sourceid><sourceid>CCPQU</sourceid><sourceid>DWQXO</sourceid><recordid>eNqNys0KgkAUQOEhCJLyHS60FqY7atkutKE2FdlehhhLmxybH3z9XPQArc7ifBMSIGOraBMjzkhobUspxXSNScICci4LjifptlA-hVJ6AKehkLIHLoXzRgL3ttEdjGbQ5gW1NnDRqtxd4fgWDwm5EtY2dXMXbnQLMq2FsjL8dU6WfH_LD1Fv9MdL66pWe9ONq8KMYZLFNM3Yf-oLPio9Mg</recordid><startdate>20240227</startdate><enddate>20240227</enddate><creator>Alkhatib, Mohammed Q</creator><creator>Zitouni, M Sami</creator><creator>Al-Saad, Mina</creator><creator>Nour Aburaed</creator><creator>Al-Ahmad, Hussain</creator><general>Cornell University Library, arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope></search><sort><creationdate>20240227</creationdate><title>SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification</title><author>Alkhatib, Mohammed Q ; Zitouni, M Sami ; Al-Saad, Mina ; Nour Aburaed ; Al-Ahmad, Hussain</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-proquest_journals_29325940693</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Airborne radar</topic><topic>Artificial neural networks</topic><topic>Datasets</topic><topic>Feature extraction</topic><topic>Image classification</topic><topic>Land cover</topic><topic>Machine learning</topic><topic>Radar imaging</topic><topic>Synthetic aperture radar</topic><toplevel>online_resources</toplevel><creatorcontrib>Alkhatib, Mohammed Q</creatorcontrib><creatorcontrib>Zitouni, M Sami</creatorcontrib><creatorcontrib>Al-Saad, Mina</creatorcontrib><creatorcontrib>Nour Aburaed</creatorcontrib><creatorcontrib>Al-Ahmad, Hussain</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science &amp; Engineering Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection (ProQuest)</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Engineering Collection</collection><collection>Engineering Database</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Alkhatib, Mohammed Q</au><au>Zitouni, M Sami</au><au>Al-Saad, Mina</au><au>Nour Aburaed</au><au>Al-Ahmad, Hussain</au><format>book</format><genre>document</genre><ristype>GEN</ristype><atitle>SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification</atitle><jtitle>arXiv.org</jtitle><date>2024-02-27</date><risdate>2024</risdate><eissn>2331-8422</eissn><abstract>Polarimetric synthetic aperture radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to consider local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNN, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the airborne synthetic aperture radar (AIRSAR) datasets of Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach demonstrates improvements in overall accuracy, with a 1.3% and 0.8% enhancement for the AIRSAR datasets and a 0.5% improvement for the ESAR dataset. Analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, revealing a promising overall accuracy of 96.01% even with only a 1% sampling ratio.</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-02 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2932594069 |
source | Free E-Journals |
subjects | Airborne radar; Artificial neural networks; Datasets; Feature extraction; Image classification; Land cover; Machine learning; Radar imaging; Synthetic aperture radar |
title | SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T08%3A18%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=SDF2Net:%20Shallow%20to%20Deep%20Feature%20Fusion%20Network%20for%20PolSAR%20Image%20Classification&rft.jtitle=arXiv.org&rft.au=Alkhatib,%20Mohammed%20Q&rft.date=2024-02-27&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2932594069%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2932594069&rft_id=info:pmid/&rfr_iscdi=true |
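The abstract above highlights that complex-valued CNNs exploit the complex nature of PolSAR data through their convolution kernels. As a minimal illustrative sketch of that idea (not the authors' SDF2Net implementation, and using NumPy rather than a deep learning framework), a complex-valued 2-D convolution expands a complex product into its real and imaginary parts, `(a + ib)(c + id) = (ac - bd) + i(ad + bc)`:

```python
import numpy as np

def complex_conv2d(x, w):
    """Naive 'valid' 2-D convolution (cross-correlation form) for
    complex-valued input x of shape (H, W) and kernel w of shape (kH, kW).

    Because convolution is linear, the complex result decomposes as
      real part: conv(Re x, Re w) - conv(Im x, Im w)
      imag part: conv(Re x, Im w) + conv(Im x, Re w)
    which is how complex-valued CNN layers combine real and imaginary
    feature maps.
    """
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=complex)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# Toy complex-valued patch and kernel (hypothetical values, not real PolSAR data)
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
w = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
y = complex_conv2d(x, w)
print(y.shape)  # (3, 3)
```

A real SDF2Net-style network would stack such complex-valued layers (with complex activations and pooling) in three branches of increasing depth before fusing their features; this sketch only shows the elementary operation the abstract refers to.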