PVM-based training of large neural architectures


Bibliographic Details
Main Authors: Plagianakos, V.P., Magoulas, G.D., Nousis, N.K., Vrahatis, M.N.
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Order full text
Description
Summary: A methodology for parallelizing neural network training algorithms is described, based on the parallel evaluation of the error function and gradient using the parallel virtual machine (PVM). PVM is an integrated set of software tools and libraries that emulates a general-purpose, flexible, heterogeneous concurrent computing framework on interconnected computers of various architectures. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the relatively easy setup of the PVM (using existing workstations) and the parallelization of the training algorithms result in considerable speed-ups, especially when large network architectures and training sets are used.
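The master/worker decomposition the summary describes — each worker evaluating the error function and gradient on its own block of training vectors, with a single synchronization point where the partial results are summed — can be sketched as follows. This is an illustrative sketch only, not the authors' PVM implementation: it uses a least-squares error on a linear model for concreteness, and Python threads stand in for the PVM processes that would run on separate workstations. All function names here are hypothetical.

```python
# Hypothetical sketch of the paper's parallelization scheme: partition the
# training vectors among workers, have each worker compute a partial error and
# partial gradient, then sum the parts at the master. Threads stand in for
# PVM worker processes; the single reduction step reflects the "large
# granularity, low synchronization" property described in the abstract.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_error_grad(X_blk, y_blk, w):
    # Work done by one worker on its block of training vectors:
    # half sum-of-squares error and its gradient w.r.t. the weights w.
    r = X_blk @ w - y_blk                 # residuals on this block
    return 0.5 * float(r @ r), X_blk.T @ r

def error_and_grad(X, y, w, n_workers=4):
    # Master: split the training set row-wise, farm the blocks out,
    # then synchronize once to accumulate the partial results.
    X_blocks = np.array_split(X, n_workers)
    y_blocks = np.array_split(y, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(partial_error_grad, X_blocks, y_blocks,
                              [w] * n_workers))
    error = sum(e for e, _ in parts)
    grad = sum(g for _, g in parts)       # elementwise sum of partial gradients
    return error, grad
```

Because the error and gradient decompose as sums over training vectors, the parallel result is identical to the serial one; only the evaluation cost is divided among workers, which is why the speed-up grows with the size of the network and the training set.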
ISSN:1098-7576
1558-3902
DOI:10.1109/IJCNN.2001.938777