Speedup simulation for OFDM over PLC channel using a multithreading GPU
Format: | Conference proceedings |
Language: | English |
Abstract: | The huge computing power available in some graphics cards can be used to significantly speed up scientific computing compared with common parallel clusters. Low-cost, virtually ubiquitous Graphics Processing Units (GPUs), in conjunction with C-style parallel programming tools such as CUDA (Compute Unified Device Architecture), allow programmers to exploit their fine-grained parallelism and multithreading management capabilities to accelerate general-purpose applications. In this paper, we report our experience applying a multithreading GPU to speed up Monte Carlo simulations of an OFDM scheme over a power-line communication (PLC) channel, with emphasis on practical considerations that help achieve the best performance for both the GPU and the overall system. |
ISSN: | 2330-989X 2689-7563 |
DOI: | 10.1109/LatinCOM.2011.6107390 |
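The abstract above describes using a GPU's fine-grained parallelism, programmed through CUDA, to accelerate Monte Carlo link-level simulations. The sketch below illustrates that general idea only; it is not the authors' code. The kernel name, launch parameters, and the simple BPSK/AWGN channel are placeholders standing in for the OFDM-over-PLC chain studied in the paper: each GPU thread runs many independent channel trials and the host collects a bit-error-rate estimate.

```cuda
// Minimal sketch of a GPU Monte Carlo bit-error-rate estimate (illustrative only,
// not the paper's implementation). Each thread simulates independent BPSK symbols
// over an AWGN channel and accumulates its own error count.
#include <cstdio>
#include <curand_kernel.h>

__global__ void monte_carlo_ber(unsigned long long *errors,
                                int symbols_per_thread,
                                float noise_std,
                                unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Per-thread RNG state so every thread draws independent noise samples.
    curandState state;
    curand_init(seed, tid, 0, &state);

    unsigned long long local_errors = 0;
    for (int i = 0; i < symbols_per_thread; ++i) {
        int bit = curand(&state) & 1;                        // random data bit
        float tx = bit ? 1.0f : -1.0f;                       // BPSK mapping
        float rx = tx + noise_std * curand_normal(&state);   // additive noise
        int decided = (rx > 0.0f) ? 1 : 0;                   // hard decision
        local_errors += (decided != bit);
    }
    // A single atomic per thread keeps contention on global memory low.
    atomicAdd(errors, local_errors);
}

int main()
{
    const int threads = 256, blocks = 1024, per_thread = 1000;
    unsigned long long *d_errors, h_errors = 0;
    cudaMalloc(&d_errors, sizeof(unsigned long long));
    cudaMemcpy(d_errors, &h_errors, sizeof(h_errors), cudaMemcpyHostToDevice);

    monte_carlo_ber<<<blocks, threads>>>(d_errors, per_thread, 0.5f, 1234ULL);
    cudaMemcpy(&h_errors, d_errors, sizeof(h_errors), cudaMemcpyDeviceToHost);

    double total_bits = (double)threads * blocks * per_thread;
    printf("Estimated BER: %g\n", h_errors / total_bits);
    cudaFree(d_errors);
    return 0;
}
```

Accumulating errors locally and issuing one atomicAdd per thread, rather than one per symbol, is the kind of practical consideration the abstract alludes to: it keeps the per-trial work independent while minimizing global-memory traffic.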