REDUCING TRAINING TIMES OF DEEP NEURAL NETWORKS THROUGH EFFICIENT HYBRID PARALLELISM

Bibliographic Details
Author: ELANGO, Venmugil
Format: Patent
Language: English

Description
Abstract: Presented are systems and methods to automatically find efficient parallelization strategies for deep neural networks (DNNs). A computation graph comprising an efficiently ordered sequence of vertices aids in computing the best parallelization strategy in a relatively short time. The effectiveness of these strategies is evaluated on various DNNs, and their performance is compared against data parallelism, expert-designed strategies, and other state-of-the-art approaches. Experimental results demonstrate that the proposed strategies outperform the data-parallelism baseline and achieve better performance than both expert-designed strategies and state-of-the-art approaches.
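
The abstract indicates that strategies are found by processing the vertices of the computation graph in an efficient order. A minimal sketch of that general idea, not the patented method itself, is a dynamic program over a linearly ordered graph that picks one of two candidate configurations per vertex; the candidate set, the compute-cost and communication-cost models, and all names below are illustrative assumptions.

    from functools import lru_cache

    # Hypothetical per-vertex candidate parallelization configurations.
    CONFIGS = ("data_parallel", "model_parallel")

    def compute_cost(vertex, config):
        # Placeholder estimate of per-step compute time under a configuration.
        return vertex["flops"] / (8.0 if config == "data_parallel" else 4.0)

    def comm_cost(prev_config, config, vertex):
        # Placeholder redistribution cost when adjacent vertices use different layouts.
        return 0.0 if prev_config == config else vertex["activation_bytes"] * 1e-9

    def best_strategy(vertices):
        """Return (total_cost, [config per vertex]) for a linearly ordered graph."""
        @lru_cache(maxsize=None)
        def dp(i, prev_config):
            if i == len(vertices):
                return 0.0, ()
            best_cost, best_cfgs = float("inf"), ()
            for cfg in CONFIGS:
                step = compute_cost(vertices[i], cfg) + comm_cost(prev_config, cfg, vertices[i])
                rest_cost, rest_cfgs = dp(i + 1, cfg)
                if step + rest_cost < best_cost:
                    best_cost, best_cfgs = step + rest_cost, (cfg,) + rest_cfgs
            return best_cost, best_cfgs
        cost, cfgs = dp(0, None)
        return cost, list(cfgs)

    # Toy two-vertex graph with made-up operator statistics.
    graph = [
        {"name": "conv1", "flops": 4e9, "activation_bytes": 2e8},
        {"name": "fc1",   "flops": 1e9, "activation_bytes": 5e7},
    ]
    print(best_strategy(graph))

For a general DAG, the state carried at each step depends on the order in which vertices are visited; presumably the abstract's "efficiently ordered sequence of vertices" keeps that state small, which is what allows the best strategy to be computed in a relatively short time.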