iCLAP: Shape Recognition by Combining Proprioception and Touch Sensing
Saved in:
Main authors: | Luo, Shan; Mou, Wenxuan; Althoefer, Kaspar; Liu, Hongbin |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Robotics |
Online access: | Order full text |
creator | Luo, Shan; Mou, Wenxuan; Althoefer, Kaspar; Liu, Hongbin |
description | For humans, both proprioception and touch sensing are highly utilized when performing haptic perception. However, most approaches in robotics use only either proprioceptive data or touch data in haptic object recognition. In this paper, we present a novel method named Iterative Closest Labeled Point (iCLAP) to link kinesthetic cues and tactile patterns fundamentally, and also introduce its extensions to recognize object shapes. In the training phase, iCLAP first clusters the features of tactile readings into a codebook and assigns distinct label numbers to these features. A 4D point cloud of the object is then formed by taking the label numbers of the tactile features as an additional dimension to the 3D sensor positions; hence, the two sensing modalities are merged to achieve a synthesized perception of the touched object. Furthermore, we developed and validated hybrid fusion strategies, product based and weighted sum based, to combine decisions obtained from iCLAP and single sensing modalities. Extensive experimentation demonstrates a dramatic improvement in object recognition using the proposed methods and shows great potential to enhance robot perception ability. |
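The abstract describes iCLAP's pipeline: tactile features are clustered into a codebook, each reading receives a label number, and that label becomes a fourth dimension appended to the 3D sensor position; class decisions from iCLAP and the single modalities are then fused by product or weighted sum. A minimal, hypothetical Python sketch of these steps is shown below; the codebook size, feature dimensionality, and label scaling are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(features, k=4, iters=20):
    """Simple k-means: cluster tactile feature vectors into k codewords."""
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest centroid.
        labels = np.argmin(((features[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def assign_labels(features, centroids):
    """Label each tactile reading with the index of its nearest codeword."""
    return np.argmin(((features[:, None] - centroids[None]) ** 2).sum(-1), axis=1)

def make_4d_cloud(positions, labels, label_scale=1.0):
    """Append the tactile label as a 4th dimension to the 3D sensor positions."""
    return np.hstack([positions, label_scale * labels[:, None].astype(float)])

def product_fusion(p_a, p_b):
    """Combine per-class scores from two classifiers by normalized product."""
    s = p_a * p_b
    return s / s.sum()

def weighted_sum_fusion(p_a, p_b, w=0.5):
    """Combine per-class scores by a convex weighted sum."""
    return w * p_a + (1.0 - w) * p_b

# Demo with synthetic data: 50 tactile readings with 8-D features
# and matching 3D contact positions.
feats = rng.normal(size=(50, 8))
poss = rng.normal(size=(50, 3))
codebook = build_codebook(feats, k=4)
cloud = make_4d_cloud(poss, assign_labels(feats, codebook))
```

The resulting 4D labeled cloud is what iCLAP matches against reference models with an ICP-style nearest-neighbor search; the fusion functions sketch the paper's product-based and weighted-sum decision combination.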
doi_str_mv | 10.48550/arxiv.1806.07507 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1806.07507 |
language | eng |
recordid | cdi_arxiv_primary_1806_07507 |
source | arXiv.org |
subjects | Computer Science - Robotics |
title | iCLAP: Shape Recognition by Combining Proprioception and Touch Sensing |
url | https://arxiv.org/abs/1806.07507 |