Overlapped Data Processing Scheme for Accelerating Training and Validation in Machine Learning

Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 72015-72023
Main authors: Choi, Jinseo; Kang, Donghyun
Format: Article
Language: English
Abstract:
For several years, machine learning (ML) technologies have opened up new opportunities for solving traditional problems on top of a rich set of hardware resources. Unfortunately, ML technologies sometimes waste the available hardware resources (e.g., CPU and GPU) because they spend a long time waiting for a previous step inside the ML procedure. In this paper, we first study the data flows of the ML procedure in detail to find avoidable performance bottlenecks. Then, we propose ol.data, the first software-based data processing scheme that aims to (1) overlap the training and validation steps within one epoch or across two adjacent epochs, and (2) perform validation steps in parallel, which significantly improves not only computation time but also resource utilization. To confirm the effectiveness of ol.data, we implemented a convolutional neural network (CNN) model with ol.data and compared it with two traditional approaches, NumPy (i.e., the baseline) and tf.data, on three different datasets. The results confirm that ol.data reduces inference time by up to 41.8% and increases CPU and GPU utilization by up to 75.7% and 38.7%, respectively.
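This record only summarizes the scheme, so the sketch below is an illustration rather than the authors' implementation: it shows one way the described overlap could look in plain Python/TensorFlow, where a worker thread validates a snapshot of the weights from epoch N while epoch N+1 trains. The names (make_model, validate_snapshot) and the synthetic data are hypothetical; ol.data's real API is not part of this record.

    # Minimal sketch (assumed names, not the ol.data API): validate the
    # weight snapshot from epoch N on a worker thread while epoch N+1
    # trains, so validation no longer stalls the training loop.
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    import tensorflow as tf

    def make_model():
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(16,)),
            tf.keras.layers.Dense(32, activation="relu"),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def validate_snapshot(weights, val_x, val_y):
        # A shadow model evaluates epoch N's weights, so the live model
        # can keep training without blocking on validation.
        shadow = make_model()
        shadow.set_weights(weights)
        return shadow.evaluate(val_x, val_y, verbose=0)

    # Synthetic stand-in data; the paper's experiments use three real
    # datasets.
    x = np.random.rand(512, 16).astype("float32")
    y = np.random.randint(0, 2, size=(512,))
    val_x, val_y = x[:128], y[:128]

    model = make_model()
    pending = None
    with ThreadPoolExecutor(max_workers=1) as pool:
        for epoch in range(3):
            model.fit(x, y, epochs=1, verbose=0)  # train epoch N
            if pending is not None:
                # Result of epoch N-1's validation, computed while this
                # epoch was training.
                print("overlapped validation:", pending.result())
            # get_weights() copies the weights to NumPy arrays, so the
            # snapshot stays consistent while the next epoch trains.
            pending = pool.submit(validate_snapshot, model.get_weights(),
                                  val_x, val_y)
        print("final validation:", pending.result())

In this sketch only adjacent-epoch overlap is shown; the paper also overlaps training and validation within a single epoch and runs validation steps in parallel, which a single worker thread does not capture.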
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3189373