Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions

Appearance-based generic object recognition is a challenging problem because all possible appearances of objects cannot be registered, especially as new objects are produced every day. The functions of objects, however, have a comparatively small number of prototypes. Therefore, function-based classification of new objects could be a valuable tool for generic object recognition. Object functions are closely related to hand-object interactions during handling of a functional object, i.e., how the hand approaches the object, which parts of the object contact the hand, and the shape of the hand during the interaction. Hand-object interactions are therefore helpful for modeling object functions. However, it is difficult to assign discrete labels to interactions because object shapes and grasping hand postures intrinsically have continuous variations. To describe these interactions, we propose the interaction descriptor space, which is acquired from unlabeled appearances of human hand-object interactions. Using interaction descriptors, we can numerically describe the relation between an object's appearance and its possible interactions with the hand. The model infers the quantitative state of the interaction from the object image alone. It also identifies the parts of objects designed for hand interactions, such as grips and handles. We demonstrate that the proposed method can generate, without supervision, interaction descriptors that form clusters corresponding to interaction types, and that the model can infer possible hand-object interactions.
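The abstract describes a two-stage idea: a latent "interaction descriptor" space learned from unlabeled images of hand-object interactions, and an inference model that predicts a descriptor from an object-only image. As a rough, hypothetical illustration of that idea (not the authors' published implementation; the module names, image sizes, descriptor dimension, and losses below are assumptions), one could pair a convolutional autoencoder with a separate regression network:

```python
# Hypothetical sketch of the two-stage approach outlined in the abstract:
# (1) learn a latent "interaction descriptor" from unlabeled hand-object
#     interaction images with an autoencoder, and
# (2) train a regressor that infers that descriptor from an object-only image.
# All shapes, names, and losses are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed size of the interaction descriptor

class InteractionAutoencoder(nn.Module):
    """Maps a 64x64 grayscale interaction image to a descriptor and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class ObjectToDescriptor(nn.Module):
    """Infers a plausible interaction descriptor from an object-only image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

def train_step(ae, reg, interaction_img, object_img, opt_ae, opt_reg):
    """One joint step: reconstruct the interaction image, then regress
    its (detached) descriptor from the corresponding object-only image."""
    recon, z = ae(interaction_img)
    loss_ae = nn.functional.mse_loss(recon, interaction_img)
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()

    z_pred = reg(object_img)
    loss_reg = nn.functional.mse_loss(z_pred, z.detach())
    opt_reg.zero_grad(); loss_reg.backward(); opt_reg.step()
    return loss_ae.item(), loss_reg.item()
```

Interaction types could then be recovered by clustering the learned descriptors, for example with k-means over the encoded training set, which would mirror the descriptor clusters reported in the abstract.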

Detailed Description

Bibliographic Details
Published in: IEICE Transactions on Information and Systems, 2017/06/01, Vol.E100.D(6), pp.1350-1359
Main Authors: MATSUO, Tadashi; SHIMADA, Nobutaka
Format: Article
Language: English
Subjects: feature extraction; Mathematical analysis; Mathematical models; object classification; Object recognition; unsupervised machine learning
Online Access: Full text
container_end_page 1359
container_issue 6
container_start_page 1350
container_title IEICE Transactions on Information and Systems
container_volume E100.D
creator MATSUO, Tadashi
SHIMADA, Nobutaka
description Appearance-based generic object recognition is a challenging problem because all possible appearances of objects cannot be registered, especially as new objects are produced every day. The functions of objects, however, have a comparatively small number of prototypes. Therefore, function-based classification of new objects could be a valuable tool for generic object recognition. Object functions are closely related to hand-object interactions during handling of a functional object, i.e., how the hand approaches the object, which parts of the object contact the hand, and the shape of the hand during the interaction. Hand-object interactions are therefore helpful for modeling object functions. However, it is difficult to assign discrete labels to interactions because object shapes and grasping hand postures intrinsically have continuous variations. To describe these interactions, we propose the interaction descriptor space, which is acquired from unlabeled appearances of human hand-object interactions. Using interaction descriptors, we can numerically describe the relation between an object's appearance and its possible interactions with the hand. The model infers the quantitative state of the interaction from the object image alone. It also identifies the parts of objects designed for hand interactions, such as grips and handles. We demonstrate that the proposed method can generate, without supervision, interaction descriptors that form clusters corresponding to interaction types, and that the model can infer possible hand-object interactions.
doi_str_mv 10.1587/transinf.2016EDP7410
format Article
fulltext fulltext
identifier ISSN: 0916-8532
ispartof IEICE Transactions on Information and Systems, 2017/06/01, Vol.E100.D(6), pp.1350-1359
issn 0916-8532
1745-1361
language eng
recordid cdi_proquest_journals_2014553838
source J-STAGE Free; Elektronische Zeitschriftenbibliothek - Freely Accessible E-Journals
subjects feature extraction
Mathematical analysis
Mathematical models
object classification
Object recognition
unsupervised machine learning
title Construction of Latent Descriptor Space and Inference Model of Hand-Object Interactions
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T13%3A36%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Construction%20of%20Latent%20Descriptor%20Space%20and%20Inference%20Model%20of%20Hand-Object%20Interactions&rft.jtitle=IEICE%20Transactions%20on%20Information%20and%20Systems&rft.au=MATSUO,%20Tadashi&rft.date=2017-01-01&rft.volume=E100.D&rft.issue=6&rft.spage=1350&rft.epage=1359&rft.pages=1350-1359&rft.issn=0916-8532&rft.eissn=1745-1361&rft_id=info:doi/10.1587/transinf.2016EDP7410&rft_dat=%3Cproquest_cross%3E2014553838%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2014553838&rft_id=info:pmid/&rfr_iscdi=true