Adjacent-Level Feature Cross-Fusion With 3-D CNN for Remote Sensing Image Change Detection

Deep learning-based (DL-based) change detection (CD) using remote sensing (RS) images has received increasing attention in recent years. However, how to effectively extract and fuse the deep features of bi-temporal images for improving the accuracy of CD is still a challenge. To address that, a novel adjacent-level feature fusion network with 3-D convolution (named AFCF3D-Net) is proposed in this article. First, through the inner fusion property of 3-D convolution, we design a new feature fusion way that can simultaneously extract and fuse the feature information from bi-temporal images. Then, to alleviate the semantic gap between low-level features and high-level features, we propose an adjacent-level feature cross-fusion (AFCF) module to aggregate complementary feature information between the adjacent levels. Furthermore, the full-scale skip connection strategy is introduced to improve the capability of pixel-wise prediction and the compactness of changed objects in the results. Finally, the proposed AFCF3D-Net has been validated on three challenging RS CD datasets: the Wuhan building dataset (WHU-CD), the LEVIR building dataset (LEVIR-CD), and the Sun Yat-Sen University dataset (SYSU-CD). The results of quantitative analysis and qualitative comparison demonstrate that the proposed AFCF3D-Net achieves better performance compared to other state-of-the-art (SOTA) methods. The code for this work is available at https://github.com/wm-Githuber/AFCF3D-Net .
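The fusion idea named in the title, extracting and fusing bi-temporal features with a single 3-D convolution, can be sketched as a toy NumPy example. This is a generic illustration of 3-D-convolution fusion under assumed shapes, not the authors' AFCF3D-Net code; the function name and parameters are hypothetical:

```python
import numpy as np

def fuse_bitemporal_3d(feat_t1, feat_t2, kernel):
    """Fuse two (C, H, W) feature maps from times t1 and t2 by
    stacking them along a new temporal axis and sliding one 3-D
    kernel over the stack. Because the kernel spans the full
    temporal (T=2) and channel (C) depth, feature extraction and
    bi-temporal fusion happen in a single convolution."""
    vol = np.stack([feat_t1, feat_t2], axis=0)      # (T=2, C, H, W)
    T, C, H, W = vol.shape
    kt, kc, kh, kw = kernel.shape                   # e.g. (2, C, 3, 3)
    assert (kt, kc) == (T, C), "kernel must span temporal/channel depth"
    out = np.zeros((H - kh + 1, W - kw + 1))        # valid spatial padding
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # one fused response per spatial location
            out[i, j] = np.sum(vol[:, :, i:i + kh, j:j + kw] * kernel)
    return out
```

A real network would apply many such kernels (one per output channel) with learned weights and spatial padding; deep learning libraries expose this operation directly, e.g. `torch.nn.Conv3d` in PyTorch.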


Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023, Vol. 61, pp. 1-14
Main authors: Ye, Yuanxin; Wang, Mengmeng; Zhou, Liang; Lei, Guangyang; Fan, Jianwei; Qin, Yao
Format: Article
Language: English
Online access: Full text
DOI: 10.1109/TGRS.2023.3305499
ISSN: 0196-2892
EISSN: 1558-0644
Source: IEEE Electronic Library (IEL)
Subjects: Change detection; Convolution; Datasets; Deep learning; Detection; Qualitative analysis; Remote sensing