Delta Descriptors: Change-Based Place Representation for Robust Visual Localization

Bibliographic Details
Published in: arXiv.org, 2020-07
Main Authors: Garg, Sourav; Harwood, Ben; Gaurangi Anand; Milford, Michael
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Retrieval; Computer Science - Learning; Computer Science - Robotics; Filtration; Matching; Offsets; Performance degradation; Source code; Volatility
Online Access: Full text
container_title arXiv.org
creator Garg, Sourav
Harwood, Ben
Gaurangi Anand
Milford, Michael
description Visual place recognition is challenging because there are so many factors that can cause the appearance of a place to change, from day-night cycles to seasonal change to atmospheric conditions. In recent years a large range of approaches have been developed to address this challenge including deep-learnt image descriptors, domain translation, and sequential filtering, all with shortcomings including generality and velocity-sensitivity. In this paper we propose a novel descriptor derived from tracking changes in any learned global descriptor over time, dubbed Delta Descriptors. Delta Descriptors mitigate the offsets induced in the original descriptor matching space in an unsupervised manner by considering temporal differences across places observed along a route. Like all other approaches, Delta Descriptors have a shortcoming - volatility on a frame to frame basis - which can be overcome by combining them with sequential filtering methods. Using two benchmark datasets, we first demonstrate the high performance of Delta Descriptors in isolation, before showing new state-of-the-art performance when combined with sequence-based matching. We also present results demonstrating the approach working with four different underlying descriptor types, and two other beneficial properties of Delta Descriptors in comparison to existing techniques: their increased inherent robustness to variations in camera motion and a reduced rate of performance degradation as dimensional reduction is applied. Source code is made available at https://github.com/oravus/DeltaDescriptors.
doi 10.48550/arxiv.2006.05700
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2020-07
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2006_05700
source arXiv.org; Free E-Journals
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Information Retrieval
Computer Science - Learning
Computer Science - Robotics
Filtration
Matching
Offsets
Performance degradation
Source code
Volatility
title Delta Descriptors: Change-Based Place Representation for Robust Visual Localization