Parallelized Training of Restricted Boltzmann Machines Using Markov-Chain Monte Carlo Methods

Bibliographic Details
Published in: SN Computer Science 2020-05, Vol. 1 (3), p. 165, Article 165
Authors: Yang, Pei, Varadharajan, Srinivas, Wilson, Lucas A., Smith, Don D., Lockman, John A., Gundecha, Vineet, Ta, Quy
Format: Article
Language: English
Online access: Full text
Description
Abstract: The restricted Boltzmann machine (RBM) is a generative stochastic neural network that can be applied to the collaborative filtering technique used by recommendation systems. The prediction accuracy of the RBM model is usually better than that of other models for recommendation systems. However, training the RBM model involves a Markov-chain Monte Carlo (MCMC) method, which is computationally expensive. In this paper, we have successfully applied distributed parallel training using the Horovod framework to improve the training time of the RBM model. Our tests show that distributed training of the RBM model has good scaling efficiency. We also show that this approach reduces the training time to a little over 12 minutes on 64 CPU nodes, compared to 5 hours on a single CPU node. This will make RBM models more practically applicable in recommendation systems.
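The abstract's core idea, data-parallel training of an RBM whose updates come from an MCMC step (contrastive divergence), with Horovod averaging those updates across workers, can be illustrated with a short sketch. This is not the authors' implementation: the choice of PyTorch with horovod.torch, the CD-1 sampler, the layer sizes, the hyperparameters, and the random placeholder data are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code): data-parallel CD-1 training
# of a Bernoulli RBM, with Horovod averaging the contrastive-divergence updates
# across workers. Sizes, hyperparameters, and data are placeholder assumptions.
import torch
import horovod.torch as hvd

hvd.init()
torch.manual_seed(42 + hvd.rank())

n_visible, n_hidden = 784, 128           # assumed layer sizes
lr, batch_size, epochs = 0.01, 64, 5     # assumed hyperparameters

# Model parameters; identical on every worker after the initial broadcast.
W  = torch.randn(n_visible, n_hidden) * 0.01
vb = torch.zeros(n_visible)
hb = torch.zeros(n_hidden)
for name, t in (("W", W), ("vb", vb), ("hb", hb)):
    hvd.broadcast_(t, root_rank=0, name=name)

def cd1_update(v0):
    """One contrastive-divergence (CD-1) step, i.e. a single Gibbs sweep."""
    ph0 = torch.sigmoid(v0 @ W + hb)          # hidden probabilities, positive phase
    h0  = torch.bernoulli(ph0)                # sample hidden units
    pv1 = torch.sigmoid(h0 @ W.t() + vb)      # reconstruct visible units
    v1  = torch.bernoulli(pv1)
    ph1 = torch.sigmoid(v1 @ W + hb)          # hidden probabilities, negative phase
    # Positive minus negative phase statistics, averaged over the batch.
    dW  = (v0.t() @ ph0 - v1.t() @ ph1) / v0.size(0)
    dvb = (v0 - v1).mean(0)
    dhb = (ph0 - ph1).mean(0)
    return dW, dvb, dhb

# Each worker trains on its own shard of the (random, placeholder) data.
data  = torch.bernoulli(torch.rand(10_000, n_visible))
shard = data[hvd.rank()::hvd.size()]

for epoch in range(epochs):
    for i in range(0, shard.size(0), batch_size):
        v0 = shard[i:i + batch_size]
        dW, dvb, dhb = cd1_update(v0)
        # Average the CD updates over all workers (allreduce averages by default),
        # so every worker applies the same update and parameters stay in sync.
        dW  = hvd.allreduce(dW,  name="dW")
        dvb = hvd.allreduce(dvb, name="dvb")
        dhb = hvd.allreduce(dhb, name="dhb")
        W  += lr * dW
        vb += lr * dvb
        hb += lr * dhb
    if hvd.rank() == 0:
        print(f"epoch {epoch + 1} done")
```

Under these assumptions the script would be launched on multiple processes or nodes with Horovod's launcher, e.g. `horovodrun -np 64 python train_rbm.py`; each worker processes a different data shard, and the `hvd.allreduce` calls keep the averaged CD updates, and therefore the model parameters, identical across workers.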
ISSN: 2662-995X, 2661-8907
DOI: 10.1007/s42979-020-00170-7