Elastic Model Aggregation with Parameter Service
Saved in:
Main author: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Model aggregation, the process that updates model parameters, is an important
step for model convergence in distributed deep learning (DDL). However, the
parameter server (PS), a popular paradigm of performing model aggregation,
causes CPU underutilization in deep learning (DL) clusters, due to the bursty
nature of aggregation and static resource allocation. To remedy this problem,
we propose Parameter Service, an elastic model aggregation framework for DDL
training, which decouples the function of model aggregation from individual
training jobs and provides a shared model aggregation service to all jobs in
the cluster. In Parameter Service, model aggregations are efficiently packed
and dynamically migrated to fit into the available CPUs with negligible time
overhead. Furthermore, Parameter Service can elastically manage its CPU
resources based on its load to enhance resource efficiency. We have implemented
Parameter Service in a prototype system called AutoPS and evaluated it via
testbed experimentation and trace-driven simulations. AutoPS reduces up to 75%
of CPU consumption with little or no performance impact on the training jobs.
The design of Parameter Service is transparent to the users and can be
incorporated in popular DL frameworks. |
DOI: | 10.48550/arxiv.2204.03211 |
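The abstract describes model aggregation under the parameter-server (PS) paradigm, in which workers push gradients that are combined and applied to shared parameters. As a rough illustration only (not the paper's AutoPS implementation; the function name, learning rate, and worker setup below are hypothetical), a minimal synchronous PS aggregation step might look like:

```python
import numpy as np

def ps_aggregate(params, worker_grads, lr=0.1):
    """Illustrative synchronous parameter-server step:
    average the gradients pushed by all workers, then apply
    one SGD update to the shared model parameters."""
    avg_grad = np.mean(worker_grads, axis=0)  # aggregation step
    return params - lr * avg_grad             # updated parameters

# Hypothetical example: two workers push gradients for 4 parameters.
params = np.zeros(4)
grads = [np.ones(4), 3.0 * np.ones(4)]
params = ps_aggregate(params, grads)  # average gradient is 2.0 per element
```

The bursty nature of this step, as the abstract notes, is that CPU work on the server spikes only when gradients arrive, which is what motivates decoupling aggregation into a shared, elastically scaled service.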