GeoDTR+: Toward Generic Cross-View Geolocalization via Geometric Disentanglement
Cross-View Geo-Localization (CVGL) estimates the location of a ground image by matching it to a geo-tagged aerial image in a database. Recent works achieve outstanding progress on CVGL benchmarks. However, existing methods still suffer from poor performance in cross-area evaluation, in which the tra...
Saved in:
Published in: | IEEE transactions on pattern analysis and machine intelligence 2024-12, Vol.46 (12), p.10419-10433 |
---|---|
Main authors: | Zhang, Xiaohan; Li, Xingyu; Sultani, Waqas; Chen, Chen; Wshah, Safwan |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 10433 |
---|---|
container_issue | 12 |
container_start_page | 10419 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 46 |
creator | Zhang, Xiaohan; Li, Xingyu; Sultani, Waqas; Chen, Chen; Wshah, Safwan |
description | Cross-View Geo-Localization (CVGL) estimates the location of a ground image by matching it to a geo-tagged aerial image in a database. Recent works achieve outstanding progress on CVGL benchmarks. However, existing methods still suffer from poor performance in cross-area evaluation, in which the training and testing data are captured from completely distinct areas. We attribute this deficiency to the lack of ability to extract the geometric layout of visual features and models' overfitting to low-level details. Our preliminary work (Zhang et al. 2022) introduced a Geometric Layout Extractor (GLE) to capture the geometric layout from input features. However, the previous GLE does not fully exploit information in the input feature. In this work, we propose GeoDTR+ with an enhanced GLE module that better models the correlations among visual features. To fully explore the LS techniques from our preliminary work, we further propose Contrastive Hard Samples Generation (CHSG) to facilitate model training. Extensive experiments show that GeoDTR+ achieves state-of-the-art (SOTA) results in cross-area evaluation on CVUSA (Workman et al. 2015), CVACT (Liu and Li, 2019), and VIGOR (Zhu et al. 2021) by a large margin (16.44%, 22.71%, and 13.66% without polar transformation) while keeping the same-area performance comparable to existing SOTA. Moreover, we provide detailed analyses of GeoDTR+. |
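The abstract describes CVGL as retrieving, for a ground-level query image, the geo-tagged aerial image whose learned embedding matches it best. A minimal sketch of that retrieval-and-evaluation step (all function names are illustrative, not from the paper; NumPy arrays stand in for embeddings produced by a trained model such as GeoDTR+):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each embedding vector to unit length."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(ground_emb, aerial_db, k=1):
    """Return indices of the top-k aerial matches per ground query,
    ranked by cosine similarity of L2-normalized embeddings."""
    sims = l2_normalize(ground_emb) @ l2_normalize(aerial_db).T
    return np.argsort(-sims, axis=1)[:, :k]

def recall_at_k(ranks, k):
    """Fraction of queries whose true match (assumed to be the DB entry
    with the same index as the query) appears in the top-k results."""
    hits = [i in ranks[i][:k] for i in range(len(ranks))]
    return sum(hits) / len(hits)
```

Recall@K computed this way is the standard metric behind the cross-area gains the abstract reports; in a cross-area evaluation the aerial database simply comes from a region disjoint from the training set.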
doi_str_mv | 10.1109/TPAMI.2024.3443652 |
format | Article |
fullrecord | (raw export record; key fields extracted below) |
publisher | United States: IEEE |
identifiers | ISSN: 0162-8828; EISSN: 1939-3539; EISSN: 2160-9292; DOI: 10.1109/TPAMI.2024.3443652; PMID: 39141472; CODEN: ITPIDJ |
links | https://ieeexplore.ieee.org/document/10636837 ; https://www.ncbi.nlm.nih.gov/pubmed/39141472 |
orcid | 0000-0002-9322-0728; 0000-0001-5051-7719; 0000-0001-6344-9604; 0000-0002-0043-316X; 0000-0003-3957-7061 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2024-12, Vol.46 (12), p.10419-10433 |
issn | 0162-8828; 1939-3539; 2160-9292 |
language | eng |
recordid | cdi_ieee_primary_10636837 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy; Correlation; cross-view geolocalization; Data mining; Feature extraction; image retrieval; Layout; metric learning; Training; Transformers; Visual geolocalization |
title | GeoDTR+: Toward Generic Cross-View Geolocalization via Geometric Disentanglement |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T18%3A31%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=GeoDTR+:%20Toward%20Generic%20Cross-View%20Geolocalization%20via%20Geometric%20Disentanglement&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Zhang,%20Xiaohan&rft.date=2024-12&rft.volume=46&rft.issue=12&rft.spage=10419&rft.epage=10433&rft.pages=10419-10433&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2024.3443652&rft_dat=%3Cproquest_RIE%3E3093170869%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3093170869&rft_id=info:pmid/39141472&rft_ieee_id=10636837&rfr_iscdi=true |