Model training method and model-based scene flow estimation method
The invention provides a model training method and a model-based scene flow estimation method. The method comprises the following steps: acquiring the point cloud captured by a sensor at the N-th frame as a source point cloud, and the point cloud captured by the same sensor at the actual (N+1)-th frame as a target point cloud; generating a first anchor box for the source point cloud and a second anchor box for the target point cloud, and obtaining the motion parameters that transform the first anchor box to the position of the second anchor box; transforming the points in the source point cloud according to these motion parameters to obtain a simulated target point cloud; generating a motion vector for each point from its position in the simulated target point cloud and the corresponding position in the target point cloud, and taking these motion vectors as pseudo three-dimensional scene-flow labels; and training on the pseudo three-dimensional scene-flow labels to obtain a scene flow estimation model.
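To make the pseudo-labeling step concrete, the following is a minimal Python/NumPy sketch of it. It assumes the per-object rotation R and translation t have already been recovered from the transform that carries the first anchor box onto the second (the patent describes obtaining these motion parameters, but this sketch does not reproduce that step), and all function names are illustrative rather than taken from the patent.

```python
import numpy as np

def rigid_transform(points, R, t):
    """Apply a rigid motion (rotation R, translation t) to an (M, 3) point array."""
    return points @ R.T + t

def pseudo_scene_flow_labels(source_points, R, t):
    """Simulate where the source points land under the estimated anchor-box
    motion, and take the per-point displacement as the pseudo 3D scene-flow label."""
    simulated_target = rigid_transform(source_points, R, t)
    flow = simulated_target - source_points  # (M, 3) motion vectors
    return simulated_target, flow

# Toy example: an object that rotates 5 degrees about z and moves 0.5 m along x.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.0])

source = np.random.rand(100, 3)  # stand-in for the points inside the first anchor box
simulated, labels = pseudo_scene_flow_labels(source, R, t)
print(labels.shape)  # (100, 3): one pseudo scene-flow vector per source point
```

In the patent's pipeline, these flow vectors would then serve as the pseudo three-dimensional scene-flow labels supervising the scene flow estimation model.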
Saved in:
Main authors: PENG YUNHUI; ZHANG HAO; YU PENGFEI; CHU JIAXIN; ZHANG TIANLEI; FEI WENYUAN
Format: Patent
Language: Chinese; English
Subjects: CALCULATING; COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS; COMPUTING; COUNTING; IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; PHYSICS
Online access: Order full text
Record ID: cdi_epo_espacenet_CN118587368A
Publication date: 2024-09-03
Source: esp@cenet