Toward Lifelong Affordance Learning Using a Distributed Markov Model
Robots are able to learn how to interact with objects by developing computational models of affordance. This paper presents an approach in which learning and operation occur concurrently, toward achieving lifelong affordance learning. In such a regime a robot must be able to learn about new objects, but without a general rule for what an "object" is, the robot must learn about everything in the environment to determine their affordances.
Saved in:
Published in: | IEEE transactions on cognitive and developmental systems, 2018-03, Vol. 10 (1), p. 44-55 |
---|---|
Main authors: | Glover, Arren J.; Wyeth, Gordon F. |
Format: | Article |
Language: | English |
Online access: | Order full text |
creator | Glover, Arren J.; Wyeth, Gordon F. |
description | Robots are able to learn how to interact with objects by developing computational models of affordance. This paper presents an approach in which learning and operation occur concurrently, toward achieving lifelong affordance learning. In such a regime a robot must be able to learn about new objects, but without a general rule for what an "object" is, the robot must learn about everything in the environment to determine their affordances. In this paper, sensorimotor coordination is modeled using a distributed semi-Markov decision process; it is created online during robot operation, and performs continual action selection to reach a goal state. In an initial experiment we show that this model captures an object's affordances, which are exploited to perform several different tasks using a mobile robot equipped with a gripper and infrared "tactile" sensor. In a secondary experiment, we show that the robot can learn that the marker is the only visual feature that can be gripped and that walls and floor do not have the affordance of being "grip-able." The distributed mechanism is necessary for the modeling of multiple sensory stimuli simultaneously, and the selection of the object with the necessary affordances for the task is emergent from the robot's actions, while other parts of the environment that are perceived, such as walls and floors, are ignored. |
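The abstract describes a model that is built online from experienced transitions and then queried for continual action selection toward a goal state. The sketch below is only an illustration of that general idea, not the paper's actual distributed semi-Markov formulation; the class and method names (`OnlineMarkovModel`, `observe`, `select_action`) and the toy states/actions are hypothetical, and the greedy breadth-first action selection stands in for the paper's richer decision process.

```python
from collections import defaultdict, deque

class OnlineMarkovModel:
    """Toy Markov model grown online from observed transitions.

    Illustrative only; the paper's approach is a *distributed
    semi-Markov* decision process, which this sketch does not model.
    """

    def __init__(self):
        # transitions[state][action][next_state] -> observation count
        self.transitions = defaultdict(
            lambda: defaultdict(lambda: defaultdict(int)))

    def observe(self, state, action, next_state):
        """Update the model from one experienced transition
        (learning while operating, rather than in a separate phase)."""
        self.transitions[state][action][next_state] += 1

    def select_action(self, state, goal):
        """Return the first action on a shortest *known* path to the
        goal, via breadth-first search over learned transitions, or
        None if the goal is unreachable in the current model."""
        frontier = deque([(state, None)])  # (current state, first action)
        visited = {state}
        while frontier:
            s, first_action = frontier.popleft()
            if s == goal:
                return first_action
            for action, outcomes in self.transitions[s].items():
                for next_state in outcomes:
                    if next_state not in visited:
                        visited.add(next_state)
                        frontier.append(
                            (next_state,
                             first_action if first_action is not None
                             else action))
        return None

# Hypothetical usage: two observed transitions, then goal-directed query.
model = OnlineMarkovModel()
model.observe("start", "drive_forward", "near_marker")
model.observe("near_marker", "close_gripper", "holding_marker")
print(model.select_action("start", "holding_marker"))  # -> drive_forward
```

As more transitions are observed during operation, the learned graph grows, so action selection naturally extends to newly encountered parts of the environment without a separate training phase.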
doi | 10.1109/TCDS.2016.2612721 |
identifier | ISSN: 2379-8920 |
issn | 2379-8920 (ISSN); 2379-8939 (eISSN) |
source | IEEE Electronic Library (IEL) |
subjects | Affordance; Computational modeling; Floors; Infrared detectors; Learning; Markov analysis; Markov chains; Markov model; Markov processes; mobile robot; Mobile robots; Robot kinematics; Robot sensing systems; Robots; Visualization |