Autonomous visual navigation

A method of visual navigation for a robot includes integrating a depth map with localization information to generate a three-dimensional (3D) map. The method also includes motion planning based on the 3D map, the localization information, and/or a user input. The motion planning overrides the user input when a trajectory and/or a velocity, received via the user input, is predicted to cause a collision.
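The abstract outlines two steps: fusing each depth map with the robot's localization estimate into a 3D map, and a motion planner that follows the operator's commanded velocity unless that command is predicted to cause a collision, in which case the planner overrides it. Below is a minimal Python sketch of that idea, not the patented implementation; the voxel size, camera intrinsics, prediction horizon, and the choice to override by stopping are all assumptions, and every function name is hypothetical.

# Minimal sketch of the idea in the abstract, not the patented method.
# Voxel size, intrinsics, horizon and the "stop" override are assumptions.
import numpy as np

VOXEL = 0.1  # assumed voxel edge length in metres

def integrate_depth(depth, pose, fx, fy, cx, cy, occupied):
    """Back-project a depth image using the localization estimate (camera pose)
    and insert the resulting points into a sparse 3D voxel set."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    pts_cam = np.stack([x[valid], y[valid], depth[valid],
                        np.ones(valid.sum())], axis=1)
    pts_world = (pose @ pts_cam.T).T[:, :3]  # camera frame -> world frame
    for p in pts_world:
        occupied.add(tuple(np.floor(p / VOXEL).astype(int)))
    return occupied

def predicts_collision(position, velocity, occupied, horizon=2.0, dt=0.1):
    """Roll the commanded velocity forward and test each step against the 3D map."""
    p = np.asarray(position, dtype=float)
    for _ in range(int(horizon / dt)):
        p = p + np.asarray(velocity, dtype=float) * dt
        if tuple(np.floor(p / VOXEL).astype(int)) in occupied:
            return True
    return False

def plan_motion(position, user_velocity, occupied):
    """Follow the user's commanded velocity unless it is predicted to collide;
    in that case override it (here simply by stopping)."""
    if predicts_collision(position, user_velocity, occupied):
        return np.zeros(3)  # override the user input
    return np.asarray(user_velocity, dtype=float)

Stopping is only the simplest possible override; the abstract leaves open how the planner replaces the unsafe command, for example by re-planning a collision-free trajectory over the 3D map.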

Bibliographic Details
Main Authors: Agarwal, Saurav, Wierzynski, Casimir Matthew, Aghamohammadi, Aliakbar, Behabadi, Bardia Fallah, Gibson, Sarah Paige
Format: Patent
Language: English
Subjects:
Online Access: Order full text
creator Agarwal, Saurav
Wierzynski, Casimir Matthew
Aghamohammadi, Aliakbar
Behabadi, Bardia Fallah
Gibson, Sarah Paige
description A method of visual navigation for a robot includes integrating a depth map with localization information to generate a three-dimensional (3D) map. The method also includes motion planning based on the 3D map, the localization information, and/or a user input. The motion planning overrides the user input when a trajectory and/or a velocity, received via the user input, is predicted to cause a collision.
format Patent
language eng
recordid cdi_epo_espacenet_US10705528B2
source esp@cenet
subjects CALCULATING
COMPUTING
CONTROLLING
COUNTING
GYROSCOPIC INSTRUMENTS
HANDLING RECORD CARRIERS
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
MEASURING
MEASURING DISTANCES, LEVELS OR BEARINGS
NAVIGATION
PHOTOGRAMMETRY OR VIDEOGRAMMETRY
PHYSICS
PRESENTATION OF DATA
RECOGNITION OF DATA
RECORD CARRIERS
REGULATING
SURVEYING
SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
TESTING
title Autonomous visual navigation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T12%3A46%3A21IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=Agarwal,%20Saurav&rft.date=2020-07-07&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3EUS10705528B2%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true