A GPU IMPLEMENTATION OF THE TSUNAMI EQUATION
Published in: Scientific journal of Astana IT University (Online), 2023-03, pp. 24-31
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: In this paper, we consider numerical simulation and GPU (graphics processing unit) computing for the two-dimensional non-linear tsunami equation, a fundamental equation of tsunami propagation in shallow-water areas. Tsunamis are highly destructive natural disasters with a significant impact on coastal regions; they are typically caused by undersea earthquakes, volcanic eruptions, landslides, and, possibly, asteroid impacts. To solve the equations numerically, we first discretize them on a rectangular domain and transform the partial differential equations into a semi-implicit finite difference scheme. The spatial and time derivatives are approximated by second-order centered differences following the Crank-Nicolson method, and the resulting system is solved with the Jacobi method. The computation is implemented in the C++ programming language, and the numerical results are visualized in MATLAB 2021. The initial condition is given as a Gaussian, and the basin profile is approximated by a hyperbolic tangent. To accelerate the sequential algorithm, a parallel algorithm is developed using CUDA (Compute Unified Device Architecture) technology. CUDA has long been used for the numerical solution of partial differential equations (PDEs): by exploiting the massive parallelism of GPUs, it can significantly speed up PDE computations, making it an effective tool for scientific computing in a variety of fields. The performance of the parallel implementation is evaluated by comparing the computation times of the sequential (CPU) solver and the CUDA implementation for various mesh sizes. The comparison shows that the CUDA implementation yields a significant speedup.
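The abstract does not reproduce the governing equations. For orientation only, a commonly used form of the two-dimensional non-linear shallow-water (tsunami) system, together with the generic Crank-Nicolson time step the abstract describes, is sketched below; the notation (η for free-surface elevation, (u, v) for depth-averaged velocities, h(x, y) for still-water depth, g for gravity, and the operator F) is assumed here and may differ from the paper's.

```latex
% Assumed standard 2D nonlinear shallow-water (tsunami) system:
\begin{align}
  \frac{\partial \eta}{\partial t}
    + \frac{\partial}{\partial x}\bigl[(h+\eta)\,u\bigr]
    + \frac{\partial}{\partial y}\bigl[(h+\eta)\,v\bigr] &= 0,\\
  \frac{\partial u}{\partial t}
    + u\frac{\partial u}{\partial x}
    + v\frac{\partial u}{\partial y}
    + g\frac{\partial \eta}{\partial x} &= 0,\\
  \frac{\partial v}{\partial t}
    + u\frac{\partial v}{\partial x}
    + v\frac{\partial v}{\partial y}
    + g\frac{\partial \eta}{\partial y} &= 0.
\end{align}
% Semi-implicit Crank-Nicolson step: spatial terms, gathered in F,
% are averaged between the old and new time levels, giving a linear
% system for the unknowns at level n+1 at every time step.
\begin{equation}
  \frac{\eta^{\,n+1}-\eta^{\,n}}{\Delta t}
    = \tfrac{1}{2}\Bigl(F\bigl(\eta^{\,n+1}\bigr) + F\bigl(\eta^{\,n}\bigr)\Bigr).
\end{equation}
```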
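The paper's source code is not included in the abstract. As a minimal sketch of the CUDA-accelerated Jacobi iteration it describes, the kernel below performs one Jacobi sweep of a generic 5-point-stencil linear system of the kind a semi-implicit Crank-Nicolson discretization produces; the kernel name, the coefficient names (a_c, a_x, a_y), and the row-major array layout are illustrative assumptions, not the authors' implementation.

```cuda
// One Jacobi sweep over the interior of an nx * ny grid, one thread per node.
// a_c, a_x, a_y: assumed constant stencil coefficients of the implicit scheme;
// rhs: explicit (known) part of the Crank-Nicolson step.
__global__ void jacobi_sweep(const double* __restrict__ eta_old,
                             double* __restrict__ eta_new,
                             const double* __restrict__ rhs,
                             int nx, int ny,
                             double a_c, double a_x, double a_y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < 1 || i >= nx - 1 || j < 1 || j >= ny - 1) return;  // boundary handled elsewhere

    int id = j * nx + i;
    // Jacobi update: the new value depends only on the previous iterate's
    // neighbours, so all nodes can be updated independently in parallel.
    eta_new[id] = (rhs[id]
                   - a_x * (eta_old[id - 1]  + eta_old[id + 1])
                   - a_y * (eta_old[id - nx] + eta_old[id + nx])) / a_c;
}
```

In such a setup the host would typically launch the kernel repeatedly, e.g. with a 16x16 thread block over a ((nx+15)/16) x ((ny+15)/16) grid, swapping the eta_old and eta_new pointers between sweeps until the iteration converges; comparing the wall-clock time of this loop against a single-threaded CPU version for several mesh sizes is the kind of measurement the abstract reports.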
ISSN: 2707-9031, 2707-904X
DOI: 10.37943/13SCQO3041