A New Approach Based on Two-stream CNNs for Novel Objects Grasping in Clutter

Recently, much research has focused on learning to grasp novel objects, an important but still unsolved problem, especially for service robots. While some approaches perform well in certain cases, they require human labeling and can hardly be used in clutter with high precision. In this paper, we apply a deep learning approach to the problem of grasping novel objects in clutter, focusing on two-fingered parallel-jaw grasping with an RGB-D camera. First, we propose a ‘grasp circle’ method, parameterized by the size of the gripper, that finds more potential grasps at each sampling point at lower cost. To address the challenge of collecting large amounts of training data, we collect training data directly from cluttered scenes with no manual labeling. We then extract effective features from the RGB and depth data, proposing a bimodal representation and using two-stream convolutional neural networks (CNNs) to process the inputs. Finally, experiments show that, compared with several existing popular methods, our method achieves a higher grasping success rate on the original cluttered RGB-D scenes.
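To make the bimodal two-stream idea concrete, below is a minimal sketch of a CNN that processes an RGB crop and a depth crop in separate convolutional streams and fuses their features to score a grasp candidate. This is an illustrative assumption only: the framework (PyTorch), the class and function names (TwoStreamGraspNet, make_stream), the layer sizes, the 64x64 crop resolution, and the late fusion by concatenation are not taken from the paper, and the authors' actual architecture may differ.

```python
# Minimal sketch of a two-stream CNN for bimodal RGB-D grasp scoring.
# Framework (PyTorch), layer sizes, crop size, and the late-fusion scheme
# are illustrative assumptions, not the authors' published architecture.
import torch
import torch.nn as nn

def make_stream(in_channels: int) -> nn.Sequential:
    """One convolutional stream (used once for RGB, once for depth)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d((4, 4)),
        nn.Flatten(),                      # -> 64 * 4 * 4 = 1024 features
    )

class TwoStreamGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = make_stream(in_channels=3)    # RGB crop
        self.depth_stream = make_stream(in_channels=1)  # depth crop
        self.classifier = nn.Sequential(
            nn.Linear(2 * 1024, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),             # grasp success score (logit)
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Each modality is processed by its own stream, then the feature
        # vectors are concatenated and scored by a shared classifier head.
        fused = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    net = TwoStreamGraspNet()
    rgb = torch.randn(2, 3, 64, 64)     # batch of cropped RGB grasp candidates
    depth = torch.randn(2, 1, 64, 64)   # corresponding depth crops
    print(net(rgb, depth).shape)        # torch.Size([2, 1])
```

Keeping separate weights per stream lets each modality learn its own filters before fusion; how and where the two streams are fused is a design choice on which the paper's experiments would be the authoritative reference.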

Bibliographic Details
Published in: Journal of intelligent & robotic systems, 2019-04, Vol.94 (1), p.161-177
Main authors: Ni, Peiyuan; Zhang, Wenguang; Bai, Weibang; Lin, Minjie; Cao, Qixin
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s10846-018-0788-6
ISSN: 0921-0296
EISSN: 1573-0409
Source: SpringerLink Journals
Subjects:
Artificial Intelligence
Clutter
Control
Convolution
Electrical Engineering
Engineering
Feature extraction
Grasping (robotics)
Labeling
Machine learning
Mechanical Engineering
Mechatronics
Methods
Neural networks
Robotics
Robotics industry
Robots
Service robots
Training