Retaining the lessons from past for better performance in a dynamic multiple task environment
Human beings learn to do a task and then go on to learn other tasks. However, they do not forget their previous learning: if the need arises, they can call upon it and do not have to relearn from scratch. In this paper, we build upon our earlier work, in which we presented a mechanism for learning multiple tasks in a dynamic environment where the tasks can change arbitrarily, without any warning to the learning agents. The main feature of the mechanism is that a percentage of the learning agents is periodically made to reset its previous learning and restart learning. Thus there is always a sub-population that can learn the new task whenever a task change occurs, without being hampered by previous learning; the new learning then spreads to the other members of the population. In our current work we experiment with the incorporation of an archive for preserving strategies that have performed well. The strategies in the archive are tested from time to time in the current environment; if the current task is the same as the task for which a strategy was first discovered, that strategy rapidly comes into vogue across the whole population. We present the criteria by which strategies are selected for storage in the archive, the policy for deleting strategies when the archive's limited space is exhausted, and the mechanism for selecting archived strategies for use in the current environment.
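The archive mechanism summarized above lends itself to a compact sketch. The following Python snippet is a minimal illustration only, not the authors' implementation: the names (`StrategyArchive`, `consider`, `best_for_current_task`), the fitness-threshold storage rule, and the evict-the-worst deletion rule are all assumptions standing in for the specific criteria the paper presents.

```python
class StrategyArchive:
    """Bounded archive of strategies that performed well on past tasks.

    A minimal sketch: the paper presents specific storage, deletion, and
    selection criteria; simple fitness-based rules are assumed here.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity      # the archive has limited space
        self.entries = []             # (strategy, fitness when archived)

    def consider(self, strategy, fitness: float, threshold: float) -> None:
        """Archive a strategy if it performed well enough (assumed criterion)."""
        if fitness >= threshold:
            self.entries.append((strategy, fitness))
            if len(self.entries) > self.capacity:
                # Deletion rule (assumed): drop the weakest archived entry.
                self.entries.remove(min(self.entries, key=lambda e: e[1]))

    def best_for_current_task(self, evaluate):
        """Re-test every archived strategy in the current environment.

        `evaluate` runs a strategy on the current task and returns a score;
        the best-scoring strategy is returned so it can be reinjected into
        the population when the current task matches one seen before.
        """
        if not self.entries:
            return None
        return max((s for s, _ in self.entries), key=evaluate)
```

In an evolutionary loop one would call `consider(...)` on each well-scoring individual and periodically call `best_for_current_task(...)`; when the returned strategy also scores well now, copying it into a few members of the population lets selection spread it, matching the "rapidly comes into vogue" behaviour described in the abstract.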
Main authors: | Mujtaba, H.; Baig, A.R. |
---|---|
Format: | Conference proceedings |
Language: | English (eng) |
container_start_page | 1049 |
---|---|
container_end_page | 1056 |
creator | Mujtaba, H.; Baig, A.R. |
doi | 10.1109/CEC.2009.4983062 |
format | Conference Proceeding |
identifier | ISSN: 1089-778X; EISSN: 1941-0026; ISBN: 1424429587, 9781424429585; EISBN: 1424429595, 9781424429592 |
ispartof | 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 1049-1056 |
language | eng |
publisher | IEEE |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Artificial neural networks; Computational modeling; Computer displays; Computer science education; Computer simulation; History; Humans; Immune system; Intelligent agent; Testing |
title | Retaining the lessons from past for better performance in a dynamic multiple task environment |