Unmanned forklift application-oriented tray detection and positioning method and system

The invention discloses a tray detection and positioning method and system for unmanned forklift applications. The method comprises the following steps: obtaining a depth image and an RGB image collected by an RGB-D camera module; establishing a tray image data set to train a tray detector that predicts tray regions and support-column regions in the RGB image, and rejecting incomplete trays among the detected tray regions; taking a complete tray as the target tray and its support-column regions as regions of interest, aligning the RGB image with the depth image, extracting the depth information of the support-column regions, and converting the depth information of the regions of interest into three-dimensional point cloud data in the camera coordinate system based on pre-calibrated camera parameters; segmenting the support-column surfaces of the target tray and calculating the centroid coordinates of each support-column surface; and a support-column triple of the forking face of the target tray ...
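As an illustration of two of the steps described above, the following Python sketch (not taken from the patent itself) back-projects the depth pixels inside a detected support-column region of interest into a three-dimensional point cloud in the camera coordinate system using pre-calibrated pinhole intrinsics, and then computes the centroid of the resulting column-surface points. All names and parameters (roi_depth_to_pointcloud, fx, fy, cx, cy, depth_scale) are illustrative assumptions, not identifiers from the patent.

import numpy as np

def roi_depth_to_pointcloud(depth, roi, fx, fy, cx, cy, depth_scale=0.001):
    """Convert the depth pixels inside an ROI to 3D points in the camera frame.

    depth       : HxW array of raw depth values (e.g. millimetres).
    roi         : (u_min, v_min, u_max, v_max) bounding box of a support column.
    fx, fy      : focal lengths in pixels (pre-calibrated intrinsics).
    cx, cy      : principal point in pixels.
    depth_scale : factor converting raw depth units to metres.
    """
    u_min, v_min, u_max, v_max = roi
    patch = depth[v_min:v_max, u_min:u_max].astype(np.float64) * depth_scale

    # Pixel grid of the ROI in full-image coordinates.
    us, vs = np.meshgrid(np.arange(u_min, u_max), np.arange(v_min, v_max))

    valid = patch > 0                      # drop missing depth readings
    z = patch[valid]
    x = (us[valid] - cx) * z / fx          # pinhole back-projection
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)     # N x 3 points in the camera frame

def column_surface_centroid(points):
    """Centroid (mean) of the 3D points belonging to one support-column surface."""
    return points.mean(axis=0)

# Usage with synthetic data: a flat "column face" one metre from the camera.
depth = np.zeros((480, 640), dtype=np.uint16)
depth[200:280, 300:340] = 1000             # 1000 mm = 1 m
pts = roi_depth_to_pointcloud(depth, (300, 200, 340, 280),
                              fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(column_surface_centroid(pts))        # roughly [x, y, 1.0] in metres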

Bibliographic Details
Main authors: REN QINYUAN, PANG JIANGNAN, LIN CHU'ANG
Format: Patent (CN116309882A, published 2023-06-23)
Language: Chinese; English
Online access: Order full text
recordid cdi_epo_espacenet_CN116309882A
source esp@cenet
subjects CALCULATING
COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
COMPUTING
COUNTING
IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
PHYSICS
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T15%3A03%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-epo_EVB&rft_val_fmt=info:ofi/fmt:kev:mtx:patent&rft.genre=patent&rft.au=REN%20QINYUAN&rft.date=2023-06-23&rft_id=info:doi/&rft_dat=%3Cepo_EVB%3ECN116309882A%3C/epo_EVB%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true