Grounding vision through experimental manipulation

Experimentation is crucial to human progress at all scales, from society as a whole to a young infant in its cradle. It allows us to elicit learning episodes suited to our own needs and limitations. This paper develops active strategies for a robot to acquire visual experience through simple experimental manipulation. The experiments are oriented towards determining what parts of the environment are physically coherent, that is, which parts will move together, and which are more or less independent. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.

Detailed Description

Saved in:
Bibliographic Details
Published in: Philosophical transactions of the Royal Society of London. Series A: Mathematical, physical, and engineering sciences, 2003-10, Vol. 361 (1811), p. 2165-2185
Main authors: Damper, R. I., Fitzpatrick, Paul, Metta, Giorgio
Format: Article
Language: English
Subjects:
Online access: Full text
Description: Experimentation is crucial to human progress at all scales, from society as a whole to a young infant in its cradle. It allows us to elicit learning episodes suited to our own needs and limitations. This paper develops active strategies for a robot to acquire visual experience through simple experimental manipulation. The experiments are oriented towards determining what parts of the environment are physically coherent, that is, which parts will move together, and which are more or less independent. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.
DOI: 10.1098/rsta.2003.1251
Publisher: The Royal Society (England)
Published: 2003-10-15
PMID: 14599314
ISSN: 1364-503X
EISSN: 1471-2962
Record ID: cdi_crossref_primary_10_1098_rsta_2003_1251
Source: MEDLINE; Free Full-Text Journals in Chemistry; JSTOR Mathematics & Statistics
Subjects: Active Vision
Adaptation, Physiological - physiology
Arm
Artificial Intelligence
Cogs
Cubes
Feedback
Humanoid Robot
Legal objections
Mental objects
Mirror Neuron
Mirror neurons
Motion
Movement - physiology
Neural Networks (Computer)
Neurons
Neurons - physiology
Optics
Orientation - physiology
Pattern Recognition, Automated
Robotics - methods
Robots
Segmentation
Vision, Ocular - physiology
Visual perception
Visual Perception - physiology
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-11T00%3A47%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-jstor_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Grounding%20vision%20through%20experimental%20manipulation&rft.jtitle=Philosophical%20transactions%20of%20the%20Royal%20Society%20of%20London.%20Series%20A:%20Mathematical,%20physical,%20and%20engineering%20sciences&rft.au=Damper,%20R.%20I.&rft.date=2003-10-15&rft.volume=361&rft.issue=1811&rft.spage=2165&rft.epage=2185&rft.pages=2165-2185&rft.issn=1364-503X&rft.eissn=1471-2962&rft_id=info:doi/10.1098/rsta.2003.1251&rft_dat=%3Cjstor_cross%3E3559118%3C/jstor_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=71344091&rft_id=info:pmid/14599314&rft_jstor_id=3559118&rfr_iscdi=true