Continual Robot Learning with Constructive Neural Networks

In this paper, we present an approach for combining reinforcement learning, learning by imitation, and incremental hierarchical development. We apply this approach to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The behaviours of the robot are represented as sensation-action rules in a constructive high-order neural network. Preliminary experiments are reported which show that incremental, hierarchical development, bootstrapped by imitative learning, allows the robot to adapt to changes in its environment during its entire lifetime very efficiently, even if only delayed reinforcements are given.
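
To make the approach above concrete, the following sketch illustrates the general control flow in Python: a policy stored as sensation-action rules is first grown from teacher demonstrations (learning by imitation) and then adjusted using only a delayed reward at the end of each episode. This is an illustrative, tabular stand-in under assumed names and parameters (RulePolicy, imitate, act, reinforce, epsilon, alpha, gamma); the chapter itself represents the rules in a constructive high-order neural network, which this sketch does not reproduce.

    import random

    class RulePolicy:
        """Sensation-action rules with utilities, grown incrementally."""

        def __init__(self, actions, epsilon=0.1, alpha=0.2, gamma=0.95):
            self.rules = {}          # sensation (hashable) -> {action: utility}
            self.actions = actions   # discrete action set
            self.epsilon = epsilon   # exploration rate
            self.alpha = alpha       # learning rate
            self.gamma = gamma       # discount applied to the delayed reward

        def imitate(self, teacher_trace):
            # Bootstrap: add a rule for every (sensation, action) pair shown by the teacher.
            for sensation, action in teacher_trace:
                self.rules.setdefault(sensation, {})[action] = 1.0

        def act(self, sensation):
            # Follow the best known rule for this sensation; otherwise explore.
            # Unseen sensations grow the rule set, giving incremental development.
            utilities = self.rules.setdefault(sensation, {a: 0.0 for a in self.actions})
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(utilities, key=utilities.get)

        def reinforce(self, episode, final_reward):
            # Delayed reinforcement: propagate the final reward back along the episode.
            ret = final_reward
            for sensation, action in reversed(episode):
                utilities = self.rules.setdefault(sensation, {})
                old = utilities.get(action, 0.0)
                utilities[action] = old + self.alpha * (ret - old)
                ret *= self.gamma

    # Toy usage: imitate a short teacher trace, then refine from a delayed reward.
    policy = RulePolicy(actions=["left", "right", "forward"])
    policy.imitate([((0, 1), "forward"), ((1, 1), "left")])
    episode = [((0, 1), policy.act((0, 1))), ((1, 1), policy.act((1, 1)))]
    policy.reinforce(episode, final_reward=1.0)

Growing a new rule whenever an unfamiliar sensation is encountered mirrors, in a much simplified form, the incremental development that the chapter bootstraps with imitative learning.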

Bibliographic Details
Main authors: Großmann, Axel; Poli, Riccardo
Format: Book chapter
Language: English
Online access: Full text
container_end_page 108
container_issue
container_start_page 95
container_title Learning Robots
container_volume 1545
creator Großmann, Axel
Poli, Riccardo
doi 10.1007/3-540-49240-2_7
format Book Chapter
contributor Demiris, John
van Leeuwen, J
Birk, Andreas
publisher Springer Berlin / Heidelberg
series Lecture Notes in Computer Science
isbn 9783540654803
3540654801
eisbn 3540492402
9783540492405
oclc 958521198
lccallnum TA1-2040
rights Springer-Verlag Berlin Heidelberg 1998
1999 INIST-CNRS
tpages 14
fulltext fulltext
identifier ISSN: 0302-9743
ispartof Learning Robots, 1998, Vol.1545, p.95-108
issn 0302-9743
1611-3349
language eng
recordid cdi_pascalfrancis_primary_1574080
source Springer Books
subjects Applied sciences
Artificial intelligence
Computer science; control theory; systems
Control theory. Systems
Exact sciences and technology
Goal Position
Learning and adaptive systems
Learning Task
Mobile Robot
Navigation Task
Robotics
Supervised Learning Algorithm
title Continual Robot Learning with Constructive Neural Networks