Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN

In this letter, we propose a pseudo-siamese convolutional neural network architecture for identifying corresponding patches in very high-resolution optical and synthetic aperture radar (SAR) remote sensing imagery. Using eight convolutional layers in each of two parallel network streams, a fully connected layer for the fusion of the features learned in each stream, and a loss function based on binary cross entropy, the network outputs a one-hot indication of whether two patches correspond. The network is trained and tested on an automatically generated data set based on a deterministic alignment of SAR and optical imagery via previously reconstructed and subsequently coregistered 3-D point clouds. The satellite images from which the patches of our data set are extracted show a complex urban scene containing many elevated objects (i.e., buildings), thus providing one of the most difficult experimental environments. The results show that the network predicts corresponding patches with high accuracy, indicating great potential for further development toward a generalized multisensor key-point matching procedure.
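The abstract fixes only the coarse design: two parallel eight-layer convolutional streams without shared weights, a fully connected fusion stage, and a binary cross-entropy loss. The PyTorch sketch below illustrates that layout; the channel widths, kernel sizes, pooling schedule, 112 x 112 patch size, and the single-logit output head (equivalent to the two-class one-hot indication) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a pseudo-siamese patch-matching CNN.
# Two-stream layout (eight conv layers per stream, no weight sharing),
# fully connected fusion, and binary cross entropy follow the abstract;
# all layer sizes and the patch size are assumptions.
import torch
import torch.nn as nn

def conv_stream(in_channels: int) -> nn.Sequential:
    """Eight convolutional layers with a max-pool after every second one."""
    channels = [in_channels, 32, 32, 64, 64, 128, 128, 256, 256]
    layers = []
    for i in range(8):
        layers += [nn.Conv2d(channels[i], channels[i + 1], 3, padding=1),
                   nn.ReLU(inplace=True)]
        if i % 2 == 1:
            layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class PseudoSiameseNet(nn.Module):
    def __init__(self, patch_size: int = 112):
        super().__init__()
        # Separate (non-weight-sharing) streams: this is what makes the
        # network pseudo-siamese rather than siamese.
        self.sar_stream = conv_stream(1)      # single-channel SAR patch
        self.opt_stream = conv_stream(1)      # single-channel optical patch
        feat = 256 * (patch_size // 16) ** 2  # four 2x2 poolings -> /16
        self.fusion = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * feat, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1),                # logit: corresponding or not
        )

    def forward(self, sar, opt):
        fused = torch.cat([self.sar_stream(sar), self.opt_stream(opt)], dim=1)
        return self.fusion(fused)

# One training step with binary cross entropy, as named in the abstract.
model = PseudoSiameseNet()
loss_fn = nn.BCEWithLogitsLoss()
sar = torch.randn(4, 1, 112, 112)    # dummy batch of SAR patches
opt = torch.randn(4, 1, 112, 112)    # dummy batch of optical patches
labels = torch.tensor([[1.], [0.], [1.], [0.]])
loss = loss_fn(model(sar, opt), labels)
loss.backward()
```

Keeping the two streams separate, rather than sharing weights as in a true siamese network, allows each branch to learn filters suited to the very different image statistics of SAR and optical data.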

Bibliographic details
Published in: IEEE Geoscience and Remote Sensing Letters, 2018-05, Vol. 15 (5), pp. 784-788
Authors: Hughes, Lloyd H.; Schmitt, Michael; Mou, Lichao; Wang, Yuanyuan; Zhu, Xiao Xiang
Format: Article
Language: English
Online access: Full text
DOI: 10.1109/LGRS.2018.2799232
ISSN: 1545-598X; EISSN: 1558-0571
Source: IEEE Electronic Library (IEL)
Subjects:
Adaptive optics
Artificial neural networks
Convolutional neural networks (CNNs)
data fusion
deep learning
deep matching
Entropy
image matching
Image reconstruction
Imagery
Neural networks
Object recognition
Optical distortion
Optical fiber networks
optical imagery
Optical imaging
Optical interferometry
Optical sensors
Patches (structures)
Radar
Radar imaging
Remote sensing
SAR (radar)
Satellite imagery
Satellites
Synthetic aperture radar
synthetic aperture radar (SAR)
Three dimensional models