Teaching Robots to Do Object Assembly using Multi-modal 3D Vision
The motivation of this paper is to develop a smart system using multi-modal vision for next-generation mechanical assembly. The system has two phases: in the first, a human teaches the assembly structure to a robot; in the second, the robot finds, grasps, and assembles the objects using AI planning.
Main authors: | Wan, Weiwei; Lu, Feng; Wu, Zepei; Harada, Kensuke |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Robotics |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Wan, Weiwei; Lu, Feng; Wu, Zepei; Harada, Kensuke
description | The motivation of this paper is to develop a smart system using multi-modal vision for next-generation mechanical assembly. The system has two phases: in the first, a human teaches the assembly structure to a robot; in the second, the robot finds, grasps, and assembles the objects using AI planning. The crucial part of the system is the precision of 3D visual detection, and the paper presents multi-modal approaches to meet the requirements: AR markers are used in the teaching phase, since a human can actively control the process, while point-cloud matching and geometric constraints are used in the robot execution phase to cope with unexpected noise. Experiments examine the precision and correctness of the approaches. The study is practical: the developed approaches are integrated with graph model-based motion planning, implemented on industrial robots, and applicable to real-world scenarios. |
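The abstract names two concrete detection techniques: AR-marker pose estimation for the human teaching phase and point-cloud matching for the robot execution phase. The paper's own code is not part of this record, so the sketch below only illustrates how such a two-phase detection pipeline is commonly wired up, using OpenCV's ArUco module and Open3D's ICP registration; the marker size, dictionary choice, and ICP distance threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation): marker-based pose
# estimation for the teaching phase and ICP point-cloud matching for the
# execution phase. Requires opencv-contrib-python (pre-4.7 aruco API) and open3d.
import cv2
import numpy as np
import open3d as o3d

MARKER_SIZE = 0.04  # assumed marker edge length in metres

def marker_pose(image, camera_matrix, dist_coeffs):
    """Detect one ArUco marker and recover its 6-DoF pose with PnP."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        return None  # no marker visible
    half = MARKER_SIZE / 2.0
    # Marker corner coordinates in the marker's own frame (z = 0 plane).
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]], np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None

def match_object(model_pcd, scene_pcd, init_T=np.eye(4)):
    """Refine an object pose by point-to-point ICP against the scene cloud."""
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scene_pcd, 0.01, init_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 model-to-scene transform
```

In the paper's setup, the marker pose would come from the human-guided teaching camera, while the ICP step would localize parts for grasping; the geometric constraints and graph model-based motion planning described in the abstract sit on top of such primitives.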
doi_str_mv | 10.48550/arxiv.1601.06473 |
format | Article |
creationdate | 2016-01-24
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0
link | https://arxiv.org/abs/1601.06473
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1601.06473 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_1601_06473 |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | Teaching Robots to Do Object Assembly using Multi-modal 3D Vision |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T13%3A40%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Teaching%20Robots%20to%20Do%20Object%20Assembly%20using%20Multi-modal%203D%20Vision&rft.au=Wan,%20Weiwei&rft.date=2016-01-24&rft_id=info:doi/10.48550/arxiv.1601.06473&rft_dat=%3Carxiv_GOX%3E1601_06473%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |