Time series forecasting using massively parallel genetic programming

In this paper we propose a massively parallel GP model in hardware as an efficient, flexible and scalable machine learning system. This fine-grained diffusion architecture consists of a large number of independent processing nodes that evolve a large number of small, overlapping subpopulations. Every node has an embedded CPU that executes a linear machine code GP representation at a rate of up to 20,000 generations per second. Besides being efficient, implementing the system in VLSI makes it highly portable and makes it possible to target mobile, on-line applications. The SIMD-like architecture also makes the system scalable so that larger problems can be addressed by a system with more processing nodes. Finally, the use of GP representation and VHDL modeling makes the system highly flexible and easy to adapt to different applications. We demonstrate the effectiveness of the system on a time series forecasting application.
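
The abstract describes a fine-grained diffusion (cellular) GP: a grid of nodes, each evolving a tiny, overlapping subpopulation of linear machine-code programs, applied to one-step-ahead forecasting. The following is a minimal software sketch of that idea only; the grid size, subpopulation size, three-operation instruction set, and all names are illustrative assumptions, not the paper's actual VLSI or VHDL design.

```python
# Minimal software sketch of a diffusion-style GP for time series forecasting.
# A grid of independent cells each evolves a small subpopulation of linear
# (register-machine) programs; subpopulations overlap because each cell also
# competes against its east and south neighbours on a toroidal grid.
# All sizes, names and the instruction set are assumptions for illustration.
import random

OPS = ['+', '-', '*']            # assumed three-operation instruction set
N_REGS, PROG_LEN = 4, 16         # registers per node, instructions per program
GRID, SUBPOP, GENS = 6, 4, 30    # grid side, individuals per cell, generations

def random_program():
    return [(random.choice(OPS), random.randrange(N_REGS), random.randrange(N_REGS))
            for _ in range(PROG_LEN)]

def run(prog, window):
    """Execute a linear program; registers are seeded with the input window."""
    regs = list(window[:N_REGS]) + [0.0] * max(0, N_REGS - len(window))
    for op, dst, src in prog:
        if op == '+':
            regs[dst] += regs[src]
        elif op == '-':
            regs[dst] -= regs[src]
        else:
            regs[dst] *= regs[src]
    return regs[0]               # register 0 holds the one-step-ahead forecast

def fitness(prog, series, lag=N_REGS):
    """Sum of squared one-step-ahead forecast errors over the series."""
    return sum((run(prog, series[t - lag:t]) - series[t]) ** 2
               for t in range(lag, len(series)))

def mutate(prog):
    """Replace one randomly chosen instruction with a fresh random one."""
    child = list(prog)
    child[random.randrange(PROG_LEN)] = (random.choice(OPS),
                                         random.randrange(N_REGS),
                                         random.randrange(N_REGS))
    return child

def evolve(series):
    # One subpopulation per grid cell; each generation every cell keeps the best
    # program from its own cell plus two neighbouring cells, then refills itself
    # with mutated copies of that winner.
    grid = [[[random_program() for _ in range(SUBPOP)] for _ in range(GRID)]
            for _ in range(GRID)]
    for _ in range(GENS):
        for x in range(GRID):
            for y in range(GRID):
                pool = grid[x][y] + grid[(x + 1) % GRID][y] + grid[x][(y + 1) % GRID]
                best = min(pool, key=lambda p: fitness(p, series))
                grid[x][y] = [best] + [mutate(best) for _ in range(SUBPOP - 1)]
    return min((p for row in grid for cell in row for p in cell),
               key=lambda p: fitness(p, series))

if __name__ == '__main__':
    data = [0.1 * t + random.gauss(0, 0.05) for t in range(60)]   # toy series
    best = evolve(data)
    print('training SSE:', fitness(best, data))
```

In the hardware the abstract describes, each cell would correspond to a processing node with its own embedded CPU, and the per-cell loop would run concurrently rather than sequentially as in this sketch.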


Bibliographic details
Main author: Eklund, S.E.
Format: Conference Proceeding
Language: English
Subjects: Biological system modeling; Centralized control; Computer architecture; Computer science; Genetic algorithms; Genetic programming; Hardware; Learning systems; Topology; Very large scale integration
container_start_page 5 pp.
creator Eklund, S.E.
description In this paper we propose a massively parallel GP model in hardware as an efficient, flexible and scalable machine learning system. This fine-grained diffusion architecture consists of a large number of independent processing nodes that evolve a large number of small, overlapping subpopulations. Every node has an embedded CPU that executes a linear machine code GP representation at a rate of up to 20,000 generations per second. Besides being efficient, implementing the system in VLSI makes it highly portable and makes it possible to target mobile, on-line applications. The SIMD-like architecture also makes the system scalable so that larger problems can be addressed by a system with more processing nodes. Finally, the use of GP representation and VHDL modeling makes the system highly flexible and easy to adapt to different applications. We demonstrate the effectiveness of the system on a time series forecasting application.
doi_str_mv 10.1109/IPDPS.2003.1213272
format Conference Proceeding
identifier ISSN: 1530-2075; ISBN: 0769519261; ISBN: 9780769519265
ispartof Proceedings International Parallel and Distributed Processing Symposium, 2003, p.5 pp.
issn 1530-2075
language eng
recordid cdi_ieee_primary_1213272
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Biological system modeling
Centralized control
Computer architecture
Computer science
Genetic algorithms
Genetic programming
Hardware
Learning systems
Topology
Very large scale integration
title Time series forecasting using massively parallel genetic programming