Effective implementation of a parallel software on a multiprocessor system
Format: Conference paper
Language: English
Abstract: If a software system can be structured as a collection of largely independent subtasks, a significant reduction in elapsed time can be realized by executing these subtasks in parallel on multiple processors. The total amount of processor idle time, however, increases with the number of processors, due to factors such as contention for shared resources, interprocessor communication, and software structure. In this paper, the inherent parallelism of a software system is investigated. A new definition of partial average parallelism is introduced. Using this definition, two analytical expressions are developed: one for the minimum number of processors needed to execute parallel software at the maximum obtainable speedup, and one for the minimum time to execute the software on a fixed number of processors. The presented example shows that these two expressions are highly useful when choosing the optimal scheduling algorithm. The exact location of the knee (the point where the benefit per unit cost is maximized) is very important in a multiprogramming environment where maximum efficiency is required. An expression for the number of processors at the knee is also deduced. A computer program is given that calculates the minimum number of processors, the minimum time, and the exact location of the knee.
DOI: 10.1109/NRSC.1998.711463
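The abstract does not reproduce the paper's analytical expressions, so the following is only a minimal illustrative sketch of the knee-finding idea it describes. It assumes an Amdahl-style speedup model with a serial fraction f (not the paper's partial-average-parallelism model) and models "benefit per unit cost" as speedup times efficiency, a common interpretation of the knee of a speedup curve; all function names and parameters are hypothetical.

```python
# Hypothetical sketch: locating the "knee" of a speedup curve.
# Assumptions (not from the paper): speedup follows Amdahl's law with
# serial fraction f, and the knee is taken as the processor count that
# maximizes speedup(n) * efficiency(n) = S(n)**2 / n.

def speedup(n: int, f: float) -> float:
    """Amdahl-style speedup on n processors with serial fraction f."""
    return 1.0 / (f + (1.0 - f) / n)

def knee(f: float, max_procs: int = 1024) -> int:
    """Return the processor count maximizing speedup * efficiency."""
    best_n, best_power = 1, 0.0
    for n in range(1, max_procs + 1):
        s = speedup(n, f)
        power = s * (s / n)  # benefit (speedup) weighted by per-processor efficiency
        if power > best_power:
            best_n, best_power = n, power
    return best_n

if __name__ == "__main__":
    for f in (0.01, 0.05, 0.1):
        n = knee(f)
        print(f"serial fraction {f}: knee at {n} processors, "
              f"speedup {speedup(n, f):.2f}")
```

Under this model, a larger serial fraction pushes the knee toward fewer processors, which matches the abstract's point that adding processors beyond the knee yields diminishing benefit per unit cost.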