The training set and generalization in grammatical evolution for autonomous agent navigation
Over recent years, evolutionary computation research has begun to emphasize the issue of generalization. Instead of evolving solutions that are optimized for a particular problem instance, the goal is to evolve solutions that can generalize to various different scenarios. This paper compares objective-based search and novelty search on a set of generalization oriented experiments for a navigation task using grammatical evolution (GE). In particular, this paper studies the impact that the training set has on the generalization of evolved solutions, considering: (1) the training set size; (2) the manner in which the training set is chosen (random or manual); and (3) if the training set is fixed throughout the run or dynamically changed every generation. Experimental results suggest that novelty search outperforms objective-based search in terms of evolving navigation behaviors that are able to cope with different initial conditions. The traditional objective-based search requires larger training sets and its performance degrades when the training set is not fixed. On the other hand, novelty search seems to be robust to different training sets, finding general solutions in almost all of the studied conditions with almost perfect generalization in many scenarios.
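For readers unfamiliar with grammatical evolution, the following is a minimal sketch of the standard GE genotype-to-phenotype mapping the abstract refers to, written in Python. The toy navigation grammar, primitive names such as `move_forward()` and `if_wall_ahead()`, and the wrapping limit are illustrative assumptions, not details taken from the paper.

```python
# Minimal, illustrative genotype-to-phenotype mapping in the style of
# grammatical evolution (GE). The grammar, the primitive names and the
# wrapping limit are toy assumptions, NOT the setup used in the paper.

TOY_GRAMMAR = {
    "<code>": [["<op>"], ["<op>", " ", "<code>"]],
    "<op>":   [["move_forward()"], ["turn_left()"], ["turn_right()"],
               ["if_wall_ahead( ", "<code>", " )"]],
}

def ge_map(genome, grammar, start="<code>", max_wraps=2):
    """Expand the leftmost non-terminal, picking rule = codon % (rule count)."""
    derivation = [start]
    codons = list(genome) * (max_wraps + 1)     # simple genome wrapping
    for codon in codons:
        # index of the leftmost non-terminal still present, if any
        nt = next((i for i, sym in enumerate(derivation) if sym in grammar), None)
        if nt is None:                          # fully expanded: valid phenotype
            break
        rules = grammar[derivation[nt]]
        derivation[nt:nt + 1] = rules[codon % len(rules)]
    if any(sym in grammar for sym in derivation):
        return None                             # codons exhausted: invalid individual
    return "".join(derivation)

# Example: a short integer genome decoded into a toy navigation routine.
print(ge_map([7, 3, 0, 2, 5, 1, 4, 9], TOY_GRAMMAR))
```

In GE, variation operators act on the integer genome while selection evaluates the behavior of the decoded phenotype, which is why the choice and stability of the training mazes can change which phenotypes appear fit.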
Saved in:
Published in: | Soft computing (Berlin, Germany), 2017-08, Vol.21 (15), p.4399-4416 |
---|---|
Main authors: | Naredo, Enrique; Urbano, Paulo; Trujillo, Leonardo |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Artificial Intelligence; Autonomous navigation; Computational Intelligence; Control; Engineering; Evolutionary computation; Initial conditions; Learning; Mathematical Logic and Foundations; Mechatronics; Methodologies and Application; Performance degradation; Robotics; Searching; Training |
Online access: | Full text |
container_end_page | 4416 |
---|---|
container_issue | 15 |
container_start_page | 4399 |
container_title | Soft computing (Berlin, Germany) |
container_volume | 21 |
creator | Naredo, Enrique; Urbano, Paulo; Trujillo, Leonardo |
description | Over recent years, evolutionary computation research has begun to emphasize the issue of generalization. Instead of evolving solutions that are optimized for a particular problem instance, the goal is to evolve solutions that can generalize to various different scenarios. This paper compares objective-based search and novelty search on a set of generalization oriented experiments for a navigation task using grammatical evolution (GE). In particular, this paper studies the impact that the training set has on the generalization of evolved solutions, considering: (1) the training set size; (2) the manner in which the training set is chosen (random or manual); and (3) if the training set is fixed throughout the run or dynamically changed every generation. Experimental results suggest that novelty search outperforms objective-based search in terms of evolving navigation behaviors that are able to cope with different initial conditions. The traditional objective-based search requires larger training sets and its performance degrades when the training set is not fixed. On the other hand, novelty search seems to be robust to different training sets, finding general solutions in almost all of the studied conditions with almost perfect generalization in many scenarios. |
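As a complement to the description above, here is a minimal sketch contrasting the two selection signals the paper compares: an objective-based fitness and a novelty score. The behavior descriptor (the agent's final position on a training instance), the distance-to-goal objective, and the parameter k are assumptions made for this sketch; the paper's actual maze setup and descriptors are not reproduced here.

```python
# Illustrative contrast between objective-based search and novelty search.
# Behaviour descriptors, the distance-to-goal objective and k are assumptions.
import numpy as np

def objective_fitness(final_pos, goal):
    """Objective-based search: higher is better when the agent ends near the goal."""
    return -float(np.linalg.norm(np.asarray(final_pos) - np.asarray(goal)))

def novelty_score(behavior, others, k=15):
    """Novelty search: sparseness = mean distance to the k nearest behaviours
    among the current population and the novelty archive."""
    dists = sorted(float(np.linalg.norm(np.asarray(behavior) - np.asarray(b)))
                   for b in others)
    return float(np.mean(dists[:k]))

# Toy usage: 30 individuals whose behaviour is a final (x, y) position.
rng = np.random.default_rng(0)
behaviors = [rng.uniform(0.0, 10.0, size=2) for _ in range(30)]
print(objective_fitness(behaviors[0], goal=(9.0, 9.0)))
print(novelty_score(behaviors[0], behaviors[1:], k=5))
```

The objective signal rewards proximity to the goal and can converge on deceptive behaviors under a fixed training set, whereas the novelty signal rewards behaviors unlike those seen before; the paper's experiments examine how each fares as the training set's size, selection method, and stability change.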
doi_str_mv | 10.1007/s00500-016-2072-7 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1432-7643 |
ispartof | Soft computing (Berlin, Germany), 2017-08, Vol.21 (15), p.4399-4416 |
issn | 1432-7643 1433-7479 |
language | eng |
recordid | cdi_proquest_journals_2917939017 |
source | SpringerLink Journals; ProQuest Central |
subjects | Algorithms; Artificial Intelligence; Autonomous navigation; Computational Intelligence; Control; Engineering; Evolutionary computation; Initial conditions; Learning; Mathematical Logic and Foundations; Mechatronics; Methodologies and Application; Performance degradation; Robotics; Searching; Training |
title | The training set and generalization in grammatical evolution for autonomous agent navigation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T15%3A10%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The%20training%20set%20and%20generalization%20in%20grammatical%20evolution%20for%20autonomous%20agent%20navigation&rft.jtitle=Soft%20computing%20(Berlin,%20Germany)&rft.au=Naredo,%20Enrique&rft.date=2017-08-01&rft.volume=21&rft.issue=15&rft.spage=4399&rft.epage=4416&rft.pages=4399-4416&rft.issn=1432-7643&rft.eissn=1433-7479&rft_id=info:doi/10.1007/s00500-016-2072-7&rft_dat=%3Cproquest_cross%3E2917939017%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2917939017&rft_id=info:pmid/&rfr_iscdi=true |