A modified method of calculating High Dimensional Model Representation (HDMR) terms for parallelization with MPI and CUDA

Bibliographic Details
Published in: The Journal of Supercomputing, 2012-10, Vol. 62 (1), p. 199-213
Authors: Kanal, M. E., Demiralp, M.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: If the values of a multivariate function f(x_1, x_2, …, x_N) are given at only a finite number of points in the space of its arguments, and an interpolation employing continuous functions is sought, standard multivariate routines may become cumbersome as the dimensionality grows. This urges us to develop a divide-and-conquer algorithm which approximates the function: the given multivariate data are partitioned into low-variate data. This approach is called High Dimensional Model Representation (HDMR). However, the method in its current form is not applicable to problems involving huge volumes of data. As the dimension number and the number of corresponding nodes increase, the volume of data in question grows beyond the capacity of any individual PC, since it demands far more RAM. Another difficulty is that the structure of the equalities used to calculate the HDMR terms varies with the dimension number of the problem: the number of loops in the algorithm increases as the dimension number grows. In this work, as a first step, the equations are modified in such a way that their structure does not depend on the dimension number. With the newly obtained equalities, the method becomes suitable for parallelization, and the parallelization in turn resolves the RAM problem arising from problems with high data volumes. Finally, the performance of the parallelized method is analyzed.
ISSN: 0920-8542 (print), 1573-0484 (electronic)
DOI: 10.1007/s11227-011-0695-0
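
As an illustration of the HDMR expansion named in the abstract, the classical decomposition writes f(x_1, …, x_N) = f_0 + Σ_i f_i(x_i) + Σ_{i<j} f_{ij}(x_i, x_j) + …, keeping only the low-variate terms. The sketch below is a minimal example, assuming data on a full tensor grid and a uniform product weight, so each term reduces to plain averaging; it shows the standard expansion only, not the paper's dimension-independent equalities or its MPI/CUDA parallelization, and the function name hdmr_terms is our own.

    import numpy as np

    def hdmr_terms(F):
        # Constant (zeroth-order) HDMR term under a uniform product
        # weight: the grand mean of all grid values.
        f0 = F.mean()
        # First-order terms: for each variable, average over every
        # other axis and subtract f0 so each term averages to zero.
        f1 = []
        for i in range(F.ndim):
            other_axes = tuple(a for a in range(F.ndim) if a != i)
            f1.append(F.mean(axis=other_axes) - f0)
        return f0, f1

    # Example: trivariate data on a 4 x 5 x 6 tensor grid.
    rng = np.random.default_rng(0)
    F = rng.standard_normal((4, 5, 6))
    f0, f1 = hdmr_terms(F)
    print(f0, [g.shape for g in f1])   # scalar, then shapes (4,), (5,), (6,)

Because every term here is an average, partial sums over disjoint blocks of the grid can be accumulated independently and then combined, which is presumably what makes a distributed-memory treatment of the data natural once the equalities no longer depend on the dimension number.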