Generating and/or using training instances that include previously captured robot vision data and drivability labels
Implementations set forth herein relate to generating training data, such that each instance of training data includes a corresponding instance of vision data and drivability label(s) for the instance of vision data. A drivability label can be determined using first vision data from a first vision component that is connected to the robot. The drivability label(s) can be generated by processing the first vision data using geometric and/or heuristic methods. Second vision data can be generated using a second vision component of the robot, such as a camera that is connected to the robot. The drivability labels can be correlated to the second vision data and thereafter used to train one or more machine learning models. The trained models can be shared with a robot(s) in furtherance of enabling the robot(s) to determine drivability of areas captured in vision data, which is being collected in real-time using one or more vision components.
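The abstract describes a concrete pipeline: derive drivability labels from one sensor's data with geometric/heuristic processing, then transfer those labels onto a camera image so the image can serve as a supervised training example. The Python sketch below is a minimal illustration of how such an instance could be assembled, not the patented implementation: it assumes the first vision component is a depth sensor producing a point cloud, the second is an RGB camera with known intrinsics/extrinsics, and the "geometric and/or heuristic method" is a simple ground-plane height test. All names and thresholds here (`TrainingInstance`, `drivability_labels`, the 5 cm height cutoff) are hypothetical.

```python
"""Minimal sketch of the training-instance pipeline in the abstract.

Assumptions (not from the patent text): the first vision component is a
depth sensor producing an N x 3 point cloud in a world frame with z up;
the second is an RGB camera with known intrinsics/extrinsics; and the
"geometric and/or heuristic method" is a ground-plane height test.
"""
from dataclasses import dataclass

import numpy as np


@dataclass
class TrainingInstance:
    rgb_image: np.ndarray   # H x W x 3, from the second vision component
    label_mask: np.ndarray  # H x W, 1 = drivable, 0 = unknown/not drivable


def drivability_labels(points: np.ndarray, max_height_m: float = 0.05) -> np.ndarray:
    """Heuristic label per point: drivable if close to a fitted ground plane."""
    # Fit z = a*x + b*y + c to the lowest 20% of points as a crude ground estimate.
    ground = points[points[:, 2] <= np.quantile(points[:, 2], 0.2)]
    A = np.c_[ground[:, :2], np.ones(len(ground))]
    coeffs, *_ = np.linalg.lstsq(A, ground[:, 2], rcond=None)
    plane_z = points[:, :2] @ coeffs[:2] + coeffs[2]
    return (np.abs(points[:, 2] - plane_z) <= max_height_m).astype(np.uint8)


def project_labels(points: np.ndarray, labels: np.ndarray, extrinsics: np.ndarray,
                   intrinsics: np.ndarray, image_shape: tuple) -> np.ndarray:
    """Correlate per-point labels with the camera image by projection."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    # World -> camera frame (extrinsics is a 4x4 homogeneous transform).
    pts_cam = (extrinsics @ np.c_[points, np.ones(len(points))].T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    uvw = (intrinsics @ pts_cam[in_front].T).T          # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    ok = ((uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1]) &
          (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0]))
    mask[uv[ok, 1], uv[ok, 0]] = labels[in_front][ok]
    return mask


def make_training_instance(rgb_image, point_cloud, extrinsics, intrinsics):
    """One training instance: camera image plus drivability labels derived
    from the other sensor's data, as described in the abstract."""
    labels = drivability_labels(point_cloud)
    mask = project_labels(point_cloud, labels, extrinsics, intrinsics, rgb_image.shape)
    return TrainingInstance(rgb_image=rgb_image, label_mask=mask)
```

Instances built this way could then train an image-segmentation model, which a robot would run on live camera frames to classify drivable area without needing the first sensor at run time.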
Saved in:

| Main authors: | Mueller, Joerg; Husain, Ammar |
|---|---|
| Format: | Patent |
| Language: | eng |
| Subjects: | CALCULATING; CHAMBERS PROVIDED WITH MANIPULATION DEVICES; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; HAND TOOLS; MANIPULATORS; PERFORMING OPERATIONS; PHYSICS; PORTABLE POWER-DRIVEN TOOLS; TRANSPORTING |
| Online access: | Order full text |

| Field | Value |
|---|---|
| creator | Mueller, Joerg; Husain, Ammar |
| format | Patent |
| date | 2023-08-29 |
| link | https://worldwide.espacenet.com/publicationDetails/biblio?FT=D&date=20230829&DB=EPODOC&CC=US&NR=11741336B2 |
| fulltext | fulltext_linktorsrc |
| identifier | US11741336B2 |
| language | eng |
| recordid | cdi_epo_espacenet_US11741336B2 |
| source | esp@cenet |
| subjects | CALCULATING; CHAMBERS PROVIDED WITH MANIPULATION DEVICES; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; HAND TOOLS; MANIPULATORS; PERFORMING OPERATIONS; PHYSICS; PORTABLE POWER-DRIVEN TOOLS; TRANSPORTING |
| title | Generating and/or using training instances that include previously captured robot vision data and drivability labels |