Scalable Fault-Tolerant MapReduce
Saved in:
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Making supercomputers ever larger and more energy-efficient is at
odds with the reliability of the hardware they use. Thus, the time intervals
between component failures are decreasing. At the same time, the latencies of
individual operations of coarse-grained big-data tools grow with the number of
processors. To overcome the resulting scalability limit, we need to go beyond
the current practice of interoperation checkpointing. We give first results on
how to achieve this for the popular MapReduce framework, where huge multisets
are processed by user-defined mapping and reducing functions. We observe that
the full state of a MapReduce algorithm is described by its network
communication. We present a low-overhead technique with no additional work
during fault-free execution and a negligible expected relative communication
overhead of $1/(p-1)$ on $p$ PEs. Recovery takes approximately the time of
processing $1/p$ of the data on the surviving PEs. We achieve this by backing
up self-messages and locally storing all messages sent through the network on
the sending and receiving PEs until the next round of global communication. A
prototypical implementation already indicates low overhead. |
---|---|
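A quick sanity check on the claimed $1/(p-1)$ overhead, under the (unstated) assumption that messages are distributed uniformly over the $p$ PEs: a fraction $1/p$ of all messages are self-messages and a fraction $(p-1)/p$ cross the network anyway, so replicating only the self-messages adds an expected relative communication overhead of

$$\frac{1/p}{(p-1)/p} = \frac{1}{p-1}.$$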
DOI: | 10.48550/arxiv.2411.16255 |
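The backup scheme sketched in the abstract (back up self-messages, keep sent/received network messages on both endpoints until the next global communication round) can be illustrated as follows. This is a toy single-process simulation, not the paper's implementation; the class `PE`, the buddy-replication of self-messages, and the function `recover` are illustrative assumptions.

```python
class PE:
    """A simulated processing element holding the per-round message logs."""

    def __init__(self, rank, p):
        self.rank, self.p = rank, p
        self.inbox = {}       # src -> payloads received in the current round
        self.sent_log = {}    # dst -> payloads sent in the current round
        self.backup_for = {}  # src -> self-messages replicated here for a buddy

    def send(self, dst, payload, pes):
        # Sender and receiver both retain a copy until the next global round,
        # so network messages need no extra traffic to be recoverable.
        self.sent_log.setdefault(dst, []).append(payload)
        pes[dst].inbox.setdefault(self.rank, []).append(payload)
        if dst == self.rank:
            # Self-messages never cross the network, so they are the only data
            # that needs an extra copy: replicate them on a buddy PE. With
            # uniformly distributed messages this extra traffic is an expected
            # 1/(p-1) fraction of the regular network volume.
            buddy = pes[(self.rank + 1) % self.p]
            buddy.backup_for.setdefault(self.rank, []).append(payload)

    def end_round(self):
        # Once the next global communication round completes, the logs of the
        # previous round are no longer needed for recovery.
        self.inbox.clear(); self.sent_log.clear(); self.backup_for.clear()


def recover(failed, pes):
    """Reconstruct the inbox of a failed PE from the surviving PEs' logs."""
    rebuilt = {}
    for pe in pes:
        if pe.rank == failed:
            continue
        # Messages other PEs sent to the failed PE survive in their sent_log.
        for payload in pe.sent_log.get(failed, []):
            rebuilt.setdefault(pe.rank, []).append(payload)
        # The failed PE's self-messages survive on its buddy.
        for payload in pe.backup_for.get(failed, []):
            rebuilt.setdefault(failed, []).append(payload)
    return rebuilt


# Usage: three PEs, one communication round, then PE 1 "fails".
pes = [PE(r, 3) for r in range(3)]
pes[0].send(1, "a", pes)  # network message 0 -> 1
pes[1].send(1, "b", pes)  # self-message on PE 1, replicated on PE 2
pes[2].send(1, "c", pes)  # network message 2 -> 1
assert recover(1, pes) == {0: ["a"], 1: ["b"], 2: ["c"]}
```

In a real system the redistributed recovery work would be spread over all surviving PEs, which is where the "approximately the time of processing $1/p$ of the data" figure comes from.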