PVM-based training of large neural architectures

A methodology for parallelizing neural network training algorithms is described, based on the parallel evaluation of the error function and gradient using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of various architectures. The proposed methodology has large granularity and low synchronization overhead, and has been implemented and tested. The results indicate that the relatively easy setup of the PVM (using existing workstations) and the parallelization of the training algorithms yield considerable speed-ups, especially when large network architectures and training sets are used.
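
The scheme the abstract describes maps naturally onto a master-worker program: the master splits the training set into one chunk per worker, broadcasts the current weight vector, and each worker returns the error and gradient contributions of its own chunk, which the master sums before taking a training step. Because each message carries a whole weight vector and a whole chunk's worth of work, granularity is large and synchronization is limited to one scatter/reduce exchange per gradient evaluation. The C sketch below is a minimal illustration of that pattern under the standard PVM 3 API, not the authors' implementation; local_error_and_grad() is a hypothetical stand-in for the network's forward/backward pass, and NWORKERS, NWEIGHTS, and NPATTERNS are placeholder sizes.

/* pvm_train.c -- master/worker sketch: data-parallel evaluation of the
 * error function E = sum_i E_i and its gradient under PVM 3.
 * Build (with PVM installed): cc pvm_train.c -lpvm3 -o pvm_train
 * Start it under a running PVM daemon so spawned copies can be found. */
#include <string.h>
#include <pvm3.h>

#define NWORKERS  4      /* e.g. one task per networked workstation */
#define NWEIGHTS  256    /* weight-vector length (placeholder)      */
#define NPATTERNS 10000  /* training patterns, split among workers  */
#define TAG_WORK   1
#define TAG_RESULT 2

/* Hypothetical stand-in for the network's forward/backward pass over
 * patterns [first, first+count): a dummy quadratic error so the sketch
 * links; a real worker would evaluate the net on its own data chunk. */
static double local_error_and_grad(const double *w, int first, int count,
                                   double *grad)
{
    double e = 0.0;
    int k;
    (void)first;
    for (k = 0; k < NWEIGHTS; k++) {
        e += 0.5 * count * w[k] * w[k];
        grad[k] = count * w[k];
    }
    return e;
}

int main(void)
{
    double w[NWEIGHTS], grad[NWEIGHTS], part[NWEIGHTS];
    double err, perr;
    int tids[NWORKERS], i, k, first, count;

    if (pvm_parent() == PvmNoParent) {
        /* master: spawn workers, scatter weights and chunk bounds */
        pvm_spawn("pvm_train", NULL, PvmTaskDefault, "", NWORKERS, tids);
        memset(w, 0, sizeof w);              /* initial weights */

        for (i = 0; i < NWORKERS; i++) {
            first = i * (NPATTERNS / NWORKERS);
            count = NPATTERNS / NWORKERS;    /* remainder ignored here */
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&first, 1, 1);
            pvm_pkint(&count, 1, 1);
            pvm_pkdouble(w, NWEIGHTS, 1);
            pvm_send(tids[i], TAG_WORK);
        }

        /* reduce: total error and gradient are sums of the partials */
        err = 0.0;
        memset(grad, 0, sizeof grad);
        for (i = 0; i < NWORKERS; i++) {
            pvm_recv(-1, TAG_RESULT);        /* any worker, any order */
            pvm_upkdouble(&perr, 1, 1);
            pvm_upkdouble(part, NWEIGHTS, 1);
            err += perr;
            for (k = 0; k < NWEIGHTS; k++)
                grad[k] += part[k];
        }
        /* ...take a training step with (err, grad), then repeat... */
    } else {
        /* worker: evaluate one chunk of the training set */
        pvm_recv(pvm_parent(), TAG_WORK);
        pvm_upkint(&first, 1, 1);
        pvm_upkint(&count, 1, 1);
        pvm_upkdouble(w, NWEIGHTS, 1);

        perr = local_error_and_grad(w, first, count, part);

        pvm_initsend(PvmDataDefault);
        pvm_pkdouble(&perr, 1, 1);
        pvm_pkdouble(part, NWEIGHTS, 1);
        pvm_send(pvm_parent(), TAG_RESULT);
    }
    pvm_exit();
    return 0;
}

A real implementation would loop the master's scatter/reduce exchange once per gradient evaluation of the chosen training algorithm and would ship each worker its slice of the training vectors once, up front, rather than re-deriving it from indices on every step.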

Bibliographic Details
Main Authors: Plagianakos, V.P., Magoulas, G.D., Nousis, N.K., Vrahatis, M.N.
Format: Conference Proceedings
Language: English
Subjects: Artificial intelligence; Artificial neural networks; Computer architecture; Computer errors; Equations; Information systems; Mathematics; Neurons; Testing; Virtual machining
Online Access: Order full text
container_end_page 2589
container_issue
container_start_page 2584
container_title IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
container_volume 4
creator Plagianakos, V.P.
Magoulas, G.D.
Nousis, N.K.
Vrahatis, M.N.
doi_str_mv 10.1109/IJCNN.2001.938777
format Conference Proceeding
fulltext fulltext_linktorsrc
identifier ISSN: 1098-7576
EISSN: 1558-3902
ISBN: 0780370449
ISBN: 9780780370449
ispartof IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222), 2001, Vol.4, p.2584-2589
issn 1098-7576
1558-3902
language eng
recordid cdi_ieee_primary_938777
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Artificial intelligence
Artificial neural networks
Computer architecture
Computer errors
Equations
Information systems
Mathematics
Neurons
Testing
Virtual machining
title PVM-based training of large neural architectures
url https://ieeexplore.ieee.org/document/938777