Optimising AI Training Deployments using Graph Compilers and Containers
Format: Article
Language: English

Abstract: Artificial Intelligence (AI) applications based on Deep Neural Networks (DNN) or Deep Learning (DL) have become popular due to their success in solving problems like image analysis and speech recognition. Training a DNN is computationally intensive, and High Performance Computing (HPC) has been a key driver in AI growth. Virtualisation and container technology have led to the convergence of cloud and HPC infrastructure. These infrastructures, with their diverse hardware, increase the complexity of deploying and optimising AI training workloads. AI training deployments in HPC or the cloud can be optimised with target-specific libraries, graph compilers, and by improving data movement or I/O. Graph compilers aim to optimise the execution of a DNN graph by generating optimised code for a target hardware/backend. As part of SODALITE (a Horizon 2020 project), the MODAK tool has been developed to optimise application deployment in software-defined infrastructures. Using input from the data scientist and performance modelling, MODAK maps optimal application parameters to a target infrastructure and builds an optimised container. In this paper, we introduce MODAK and review container technologies and graph compilers for AI. We illustrate the optimisation of AI training deployments using graph compilers and Singularity containers. Evaluation using MNIST-CNN and ResNet50 training workloads shows that custom-built optimised containers outperform the official images from DockerHub. We also found that the performance of graph compilers depends on the target hardware and the complexity of the neural network.

DOI: 10.48550/arxiv.2008.11675
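
The abstract notes that graph compilers speed up training by generating optimised code for a target backend. As a concrete illustration (a minimal sketch, not the paper's actual benchmark code), the snippet below enables TensorFlow's XLA graph compiler for a small MNIST CNN. It assumes TensorFlow 2.x; the model architecture is an illustrative choice, not the exact MNIST-CNN configuration evaluated in the paper.

```python
import tensorflow as tf

# Enable XLA JIT compilation for the session. XLA fuses operations in the
# DNN graph and emits optimised kernels for the target backend (CPU/GPU),
# which is the graph-compiler optimisation the abstract refers to.
tf.config.optimizer.set_jit(True)

# Small MNIST CNN training workload (illustrative architecture).
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=2)
```

Whether XLA helps here depends, as the paper finds, on the target hardware and the complexity of the network.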
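
The abstract also describes MODAK mapping optimal application parameters to a target infrastructure and building an optimised container. The sketch below illustrates that mapping idea only; the lookup table, image file names, and `select_deployment` function are invented for this note and are not MODAK's actual API.

```python
# Hypothetical sketch of a parameter-to-infrastructure mapping in the
# spirit of MODAK. All names here are invented for illustration and are
# NOT MODAK's real interface or image names.

DEPLOYMENTS = {
    # (target hardware, framework) -> (optimised Singularity image, run flags)
    ("x86_cpu", "tensorflow"): ("tf_mkl_optimised.sif", []),
    ("nvidia_gpu", "tensorflow"): ("tf_xla_gpu_optimised.sif", ["--nv"]),
}

def select_deployment(target: str, framework: str) -> str:
    """Build a 'singularity exec' command line for the chosen container."""
    image, flags = DEPLOYMENTS[(target, framework)]
    return " ".join(["singularity", "exec", *flags, image, "python", "train.py"])

print(select_deployment("nvidia_gpu", "tensorflow"))
# -> singularity exec --nv tf_xla_gpu_optimised.sif python train.py
```

For the container baseline the paper compares against, an official image can be pulled with `singularity pull docker://tensorflow/tensorflow:latest-gpu`; the evaluation shows custom-built optimised containers outperforming such official DockerHub images.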