Provably efficient scheduling for languages with fine-grained parallelism
| Published in: | Journal of the ACM, 1999-03, Vol. 46 (2), p. 281-321 |
|---|---|
| Main authors: | Blelloch, Guy E.; Gibbons, Phillip B.; Matias, Yossi |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
Abstract:

Many high-level parallel programming languages allow for fine-grained parallelism. As in the popular work-time framework for parallel algorithm design, programs written in such languages can express the full parallelism in the program without specifying the mapping of program tasks to processors. A common concern in executing such programs is to schedule tasks to processors dynamically so as to minimize not only the execution time, but also the amount of space (memory) needed. Without careful scheduling, the parallel execution on *p* processors can use a factor of *p* or more space than a sequential implementation of the same program.
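To make the factor-of-*p* gap concrete, consider a hypothetical computation (our illustration, not an example from the paper): *n* independent fine-grained tasks, each of which allocates a temporary buffer, uses it, and frees it. A breadth-first parallel schedule keeps one live buffer per busy processor, while the sequential schedule holds at most one at a time:

```python
# Hypothetical illustration (not from the paper): n independent tasks,
# each holding one temporary buffer for the duration of its work.
def peak_live_buffers(n, p):
    # Sequential schedule: tasks run one at a time, so at most one
    # temporary buffer is ever live.
    sequential_peak = min(n, 1)
    # Breadth-first parallel schedule: up to p tasks run concurrently,
    # so up to p temporaries are live at once -- a factor-of-p blowup.
    parallel_peak = min(n, p)
    return sequential_peak, parallel_peak

print(peak_live_buffers(n=10_000, p=64))  # (1, 64)
```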
This paper first identifies a class of parallel schedules that are provably efficient in both time and space. For any computation with *w* units of work and critical path length *d*, and for any sequential schedule that takes space *s1*, we provide a parallel schedule that takes fewer than *w/p + d* steps on *p* processors and requires less than *s1 + p·d* space. This matches the lower bound that we show, and significantly improves upon the best previous bound of *s1·p* space for the common case where *d* ≪ *s1*.
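The time bound is the classic greedy-scheduling (Brent/Graham) bound: the paper's schedules are greedy with particular priorities, and any greedy schedule already meets it. The runnable sketch below (our code, using an arbitrary random DAG of unit-cost tasks as the computation) simulates a greedy *p*-processor schedule and checks that it finishes in fewer than *w/p + d* steps:

```python
# Sketch (ours, not the paper's code): greedy scheduling of a unit-cost
# task DAG on p processors, checked against the bound t < w/p + d.
import random

def longest_path(succs, order):
    """Critical path length d (counted in tasks) over a topological order."""
    dist = {v: 1 for v in order}
    for v in order:
        for u in succs[v]:
            dist[u] = max(dist[u], dist[v] + 1)
    return max(dist.values())

def greedy_steps(succs, preds, p):
    """Each step executes up to p ready tasks (any greedy priority works)."""
    indeg = {v: len(preds[v]) for v in preds}
    ready = [v for v in indeg if indeg[v] == 0]
    steps = 0
    while ready:
        batch, ready = ready[:p], ready[p:]
        steps += 1
        for v in batch:
            for u in succs[v]:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return steps

random.seed(0)
w, p = 2000, 8
succs = {v: [] for v in range(w)}
preds = {v: [] for v in range(w)}
for v in range(1, w):                 # edges go from lower to higher ids,
    for u in random.sample(range(v), min(2, v)):  # so ids are topological
        succs[u].append(v)
        preds[v].append(u)
d = longest_path(succs, order=range(w))
t = greedy_steps(succs, preds, p)
assert t < w / p + d                  # the time bound from the abstract
print(f"w={w}, d={d}, p={p}: {t} steps < w/p + d = {w / p + d:.1f}")
```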
The paper then describes a scheduler for implementing high-level languages with *nested* parallelism that generates schedules in this class. During program execution, as the structure of the computation is revealed, the scheduler keeps track of the active tasks, allocates the tasks to the processors, and performs the necessary task synchronization. The scheduler is itself a parallel algorithm, and incurs at most a constant factor overhead in time and space, even when the scheduling granularity is individual units of work. The algorithm is the first efficient solution to the scheduling problem discussed here, even if space considerations are ignored.
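A rough sense of how depth-first priorities interact with spawning can be had from a toy simulation (our construction, not the paper's algorithm; it handles fork-only task trees and omits the join synchronization the real scheduler must perform): ready tasks are kept in the order the sequential depth-first execution would reach them, each step runs the *p* earliest, and spawned children inherit their parent's position in that order.

```python
# Toy sketch (ours): depth-first-priority greedy scheduling of a
# fork-only nested-parallel task tree.
def pdf_schedule(root, p):
    """root: nested lists model forks; the string 'x' is one unit of work.
    Returns (parallel steps, peak number of ready tasks)."""
    ready = [root]   # ready tasks, kept in sequential depth-first order
    steps = peak = 0
    while ready:
        peak = max(peak, len(ready))        # crude proxy for scheduler space
        batch, rest = ready[:p], ready[p:]  # run the p earliest ready tasks
        steps += 1
        spawned = []
        for task in batch:
            if isinstance(task, list):   # a fork: its children take over
                spawned.extend(task)     # the parent's priority slot
            # a leaf ('x') is one unit of work and simply completes
        ready = spawned + rest   # children outrank all older ready tasks
    return steps, peak

# A balanced fork tree with 16 unit-work leaves (critical path of 5 tasks):
leaf = 'x'
tree = [[[[leaf, leaf], [leaf, leaf]], [[leaf, leaf], [leaf, leaf]]],
        [[[leaf, leaf], [leaf, leaf]], [[leaf, leaf], [leaf, leaf]]]]
print(pdf_schedule(tree, p=2))
```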
ISSN: 0004-5411, 1557-735X
DOI: 10.1145/301970.301974