Learning agent’s spatial configuration from sensorimotor invariants

Bibliographic Details
Published in: Robotics and autonomous systems, 2015-09, Vol.71, p.49-59
Main authors: Laflaquière, Alban; O’Regan, J. Kevin; Argentieri, Sylvain; Gas, Bruno; Terekhov, Alexander V.
Format: Article
Language: English
Subjects: Artificial Intelligence; Computer Science; Developmental robotics; Learning; Perception; Robotics; Sensorimotor theory; Signal and Image Processing; Space
Online access: Full text
Abstract: The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot’s sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent’s exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina.

Highlights:
• Autonomous robots should develop perceptual notions from raw sensorimotor data.
• Environment-dependency of visual inputs complicates acquisition of spatial notions.
• Agent can learn its spatial configuration through invariants in sensorimotor laws.
• Approach is illustrated on a simulated planar multijoint agent with a mobile retina.
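The configuration space described in the abstract (planar position plus orientation of the retina, i.e. a manifold with the topology of a plane times a circle) and the role of the kernels of the sensorimotor functions can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation; the arm geometry, light sources, and retina model are assumptions made purely for illustration. It builds a redundant 4-joint planar arm with a toy retina at its end-point and shows that motor changes lying in the kernel of the pose map change the joint configuration while leaving the retina pose, and hence the exteroceptive input, essentially unchanged.

# Minimal illustrative sketch (not the paper's code): a redundant planar arm
# with a toy "retina" at its end-point. Link lengths, light sources and the
# sensor model are assumptions chosen only to make the invariance visible.
import numpy as np

LINKS = np.array([1.0, 0.8, 0.6, 0.4])   # 4 revolute joints -> redundant w.r.t. a planar pose

def retina_pose(q):
    """Forward kinematics: joint angles q -> retina pose (x, y, theta) in the plane."""
    angles = np.cumsum(q)
    x = np.sum(LINKS * np.cos(angles))
    y = np.sum(LINKS * np.sin(angles))
    return np.array([x, y, angles[-1]])

# A few point light sources standing in for a "sufficiently rich" environment.
ENV = np.array([[2.5, 1.0], [-1.0, 2.0], [0.5, -2.0]])

def sensation(q, n_cells=8):
    """Toy exteroceptive input: each retina cell pools source intensities, with
    the sources expressed in the retina's own frame. By construction the output
    depends on q only through retina_pose(q)."""
    x, y, theta = retina_pose(q)
    c, s = np.cos(theta), np.sin(theta)
    world_to_retina = np.array([[c, s], [-s, c]])          # rotation by -theta
    rel = (ENV - np.array([x, y])) @ world_to_retina.T     # sources in the retina frame
    dists = np.linalg.norm(rel, axis=1)
    dirs = rel / dists[:, None]
    cell_angles = np.linspace(0.0, 2.0 * np.pi, n_cells, endpoint=False)
    cell_dirs = np.stack([np.cos(cell_angles), np.sin(cell_angles)], axis=1)
    # Gaussian directional tuning, intensity falling off with distance.
    tuning = np.exp(-np.linalg.norm(dirs[None, :, :] - cell_dirs[:, None, :], axis=2) ** 2)
    return tuning @ (1.0 / (1.0 + dists ** 2))

def pose_jacobian(q, eps=1e-6):
    """Numerical Jacobian of the pose map; its kernel contains the motor changes
    that leave the retina configuration (and hence the sensation) unchanged."""
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = eps
        J[:, i] = (retina_pose(q + dq) - retina_pose(q - dq)) / (2 * eps)
    return J

if __name__ == "__main__":
    q = np.array([0.3, 0.5, -0.4, 0.2])
    null_dir = np.linalg.svd(pose_jacobian(q))[2][-1]      # 1-D kernel: 4 joints, 3 pose dims
    q_alt = q + 1e-3 * null_dir                            # move only inside the kernel
    print("joint change:    ", np.linalg.norm(q_alt - q))
    print("pose change:     ", np.linalg.norm(retina_pose(q_alt) - retina_pose(q)))
    print("sensation change:", np.linalg.norm(sensation(q_alt) - sensation(q)))

To first order, the script should report a joint-space change of about 1e-3 while the pose and sensation changes are several orders of magnitude smaller; it is such invariants that let the agent separate its own spatial configuration from the environment-dependent content of its sensory flow.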
DOI: 10.1016/j.robot.2015.01.003
Publisher: Elsevier B.V.
Rights: 2015 Elsevier B.V.; Distributed under a Creative Commons Attribution 4.0 International License
ISSN: 0921-8890
EISSN: 1872-793X
Source: ScienceDirect Journals (5 years ago - present)