MPI based cluster computing for performance evaluation of parallel applications

Bibliographic Details
Main Authors: Nanjesh, B. R., Kumar, K. S. Vinay, Madhu, C. K., Kumar, G. Hareesh
Format: Conference Paper
Language: English
Description
Abstract: Parallel computing operates on the principle that large problems can often be divided into smaller ones, which are then solved concurrently to save wall-clock time, taking advantage of non-local resources and overcoming memory constraints. The main aim is to build a cluster-oriented parallel computing architecture for MPI-based applications that demonstrates the performance gains and losses achieved through parallel processing with MPI. This is realized by implementing parallel applications, such as parallel merge sort, using MPI. The architecture works on the master-slave computing paradigm: the master monitors progress and reports the time taken to solve the problem, including the time spent breaking the problem into sub-tasks and combining the results, along with the communication delays; the slaves accept sub-problems from the master, solve them, and send the solutions back. We evaluate these statistics of parallel execution and compare them with the time taken to solve the same problem serially, to demonstrate the communication overhead involved in parallel computation. Results from runs on different numbers of nodes are compared to evaluate the efficiency of MPI-based parallel applications. We also show how the performance of both parallel and serial computation depends on available RAM.
DOI: 10.1109/CICT.2013.6558268
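
As a rough illustration of the master-slave timing pattern the abstract describes, the following C/MPI sketch (not the paper's code; the problem size N and the even division of work across processes are assumptions) scatters an unsorted array from the master, has every rank sort its chunk locally, gathers the sorted chunks back, and merges them on the master, while MPI_Wtime measures the total elapsed time, including the scatter/gather communication the abstract counts as overhead.

/* Illustrative sketch of master-slave parallel merge sort timing with MPI.
 * Assumptions: N is a hypothetical problem size, divisible by the number
 * of MPI processes. Compile with mpicc, run with mpirun -np <nodes>. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1048576  /* hypothetical problem size */

static int cmp(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Merge two sorted runs a[0..na) and b[0..nb) into out. */
static void merge(const int *a, int na, const int *b, int nb, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];
    while (j < nb) out[k++] = b[j++];
}

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;
    int *data = NULL;
    int *local = malloc(chunk * sizeof(int));

    if (rank == 0) {                     /* master generates the problem */
        data = malloc(N * sizeof(int));
        for (int i = 0; i < N; i++) data[i] = rand();
    }

    /* Timer covers splitting, communication, local sorts, and merging. */
    double t0 = MPI_Wtime();

    /* Master distributes sub-problems; every rank sorts one chunk
     * (a simplification: here the master also acts as a worker). */
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT, 0, MPI_COMM_WORLD);
    qsort(local, chunk, sizeof(int), cmp);
    MPI_Gather(local, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Master combines results: bottom-up merge of the sorted chunks. */
        int *tmp = malloc(N * sizeof(int));
        for (int run = chunk; run < N; run *= 2) {
            for (int i = 0; i + run < N; i += 2 * run) {
                int nb = (i + 2 * run <= N) ? run : N - i - run;
                merge(data + i, run, data + i + run, nb, tmp + i);
                for (int j = 0; j < run + nb; j++) data[i + j] = tmp[i + j];
            }
        }
        printf("parallel merge sort on %d ranks: %.3f s\n",
               size, MPI_Wtime() - t0);
        free(tmp);
        free(data);
    }
    free(local);
    MPI_Finalize();
    return 0;
}

Timing the same qsort call over the whole array in a single-process run gives the serial baseline the abstract compares against; the difference between the two measurements exposes the split, merge, and communication overhead.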