Restarting Particle Swarm Optimisation for deceptive problems


Detailed Description

Bibliographic Details
Main Author: Hendtlass, T.
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Order full text
container_end_page 9
container_start_page 1
creator Hendtlass, T.
description Particle Swarm Optimisation (PSO) has the advantage of finding, if not the optimum of a continuous problem space, at least a very good position, and of doing so at modest computational cost. However, as the number of possible optima increases, PSO will only explore a subset of these positions. Techniques such as niching allow a small number of positions to be explored in parallel, but by the time a problem has become truly deceptive, with a great many optima, there is little choice but to explore optima sequentially. PSO, once it has converged, has no way of dispersing its particles so as to allow a further convergence, ideally to a new optimum. Random restarts are one way of providing this divergence; this paper suggests another, inspired by Extremal Optimisation (EO). The proposed technique allows the particles to disperse via positions that are fitter than average. After a while dispersion ceases and PSO takes over again, but since it starts from better-than-average fitnesses, the point it converges to is also better than average. This alternation of algorithms can continue indefinitely. This paper examines the performance of sequential PSO exploration on a range of problems, some deceptive, some not. As predicted, performance on deceptive problems tends to improve significantly with time, while performance on non-deceptive problems, which do not have multiple positions of comparable fitness to spread through, does not.
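The alternating scheme the abstract describes can be sketched in miniature. The following is not the paper's actual EO mechanism; it is a minimal stand-in that captures the stated idea: run standard PSO, and periodically resample particles to new positions that are accepted only if fitter than the swarm's current mean fitness, so each restart begins from better-than-average points. All names (`pso_with_restarts`, `restart_every`, the Rastrigin test function) are illustrative assumptions, not from the paper.

```python
import math
import random


def rastrigin(x):
    """Highly multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)


def pso_with_restarts(f, dim=2, n=20, iters=300, restart_every=100,
                      bounds=(-5.12, 5.12)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)

    for t in range(iters):
        if t > 0 and t % restart_every == 0:
            # Dispersal phase (hypothetical stand-in for the EO-inspired step):
            # resample each particle, accepting a candidate only if it is fitter
            # than the swarm's mean fitness, so PSO restarts from
            # better-than-average positions rather than uniformly random ones.
            mean_fit = sum(f(p) for p in pos) / n
            for i in range(n):
                for _ in range(50):  # bounded number of retries
                    cand = [random.uniform(lo, hi) for _ in range(dim)]
                    if f(cand) < mean_fit:
                        pos[i] = cand
                        vel[i] = [0.0] * dim
                        break
        # Standard inertia-weight PSO update.
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=f)

    return gbest, f(gbest)
```

On a multimodal function like Rastrigin, the dispersal phase gives each restart a fitter-than-average starting population, which is the mechanism the abstract credits for the improving performance on deceptive problems.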
doi_str_mv 10.1109/CEC.2012.6256424
format Conference Proceeding
identifier ISSN: 1089-778X
ispartof 2012 IEEE Congress on Evolutionary Computation, 2012, p.1-9
issn 1089-778X
1941-0026
language eng
recordid cdi_ieee_primary_6256424
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Algorithms
Australia
Computational efficiency
Convergence
Dispersion
Histograms
Optimization
Particle swarm optimization
title Restarting Particle Swarm Optimisation for deceptive problems
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T14%3A08%3A27IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Restarting%20Particle%20Swarm%20Optimisation%20for%20deceptive%20problems&rft.btitle=2012%20IEEE%20Congress%20on%20Evolutionary%20Computation&rft.au=Hendtlass,%20T.&rft.date=2012-06&rft.spage=1&rft.epage=9&rft.pages=1-9&rft.issn=1089-778X&rft.eissn=1941-0026&rft.isbn=1467315109&rft.isbn_list=9781467315104&rft_id=info:doi/10.1109/CEC.2012.6256424&rft_dat=%3Cieee_6IE%3E6256424%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=1467315087&rft.eisbn_list=9781467315081&rft.eisbn_list=1467315095&rft.eisbn_list=9781467315098&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=6256424&rfr_iscdi=true