TrainMover: Efficient ML Training Live Migration with No Memory Overhead
Saved in:
Main authors: , , , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Machine learning training has emerged as one of the most prominent
workloads in modern data centers. These training jobs are large-scale,
long-lasting, and tightly coupled, and are often disrupted by various events
in the cluster such as failures, maintenance, and job scheduling. To handle
these events, we rely on cold migration, where we first checkpoint the entire
cluster, replace the related machines, and then restart the training. This
approach disrupts the training jobs, resulting in significant downtime. In
this paper, we present TrainMover, a live migration system that enables
machine replacement during machine learning training. TrainMover minimizes
downtime by leveraging member replacement of collective communication groups
and sandbox lazy initialization. Our evaluation demonstrates that TrainMover
achieves 16x less downtime than all baselines, effectively handling data
center events like straggler rebalancing, maintenance, and unexpected failures.
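The abstract contrasts cold migration (checkpoint the whole cluster, replace machines, restart everything) with TrainMover's live migration (swap a member of the collective communication group while the replacement is prepared in the background). The toy sketch below illustrates why the latter yields far less downtime; all names and durations are illustrative assumptions, not measurements or APIs from the paper.

```python
# Toy model contrasting the two migration strategies from the abstract.
# Durations are made-up assumptions chosen only to show the structural
# difference, not numbers reported by the TrainMover paper.

CHECKPOINT_S = 120  # assumed: time to checkpoint the entire cluster
RESTART_S = 300     # assumed: time to restart training from the checkpoint
REJOIN_S = 6        # assumed: pause while the collective group swaps a member

def cold_migration_downtime() -> int:
    """Cold migration: every worker stops for the full
    checkpoint-replace-restart cycle."""
    return CHECKPOINT_S + RESTART_S

def live_migration_downtime() -> int:
    """Live migration: the replacement machine is initialized in the
    background (cf. sandbox lazy initialization), so training only
    pauses for the brief group-membership change."""
    return REJOIN_S

if __name__ == "__main__":
    print(f"cold: {cold_migration_downtime()}s, "
          f"live: {live_migration_downtime()}s")
```

The key structural point is that in the live path the expensive work (provisioning and initializing the replacement) overlaps with ongoing training, leaving only the membership change on the critical path.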
DOI: 10.48550/arxiv.2412.12636