A GPU-accelerated parallel K-means algorithm
Saved in:
Published in: Computers & Electrical Engineering, 2019-05, Vol. 75, p. 262-274
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Clustering approaches are widely used methodologies for analysing large data sets. The K-means algorithm is well known to be too computationally intensive for large-scale data analytics. In this work, we focus on a parallel technique that reduces execution time when K-means is used to cluster large datasets. Its design exploits the computational power of Graphics Processing Units (GPUs), a massively parallel architecture. We optimize the proposed implementation to handle (i) the limited device memory of GPUs and (ii) the host-device data transfer time. Experimental results on real and synthetic data show that our parallelization approach gives good results in terms of execution time and speed-up.
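The record reproduces only the abstract, not the paper's implementation. As a minimal sketch of the kind of approach the abstract describes, the CUDA fragment below parallelizes the K-means assignment step with one thread per point and streams the input to the device in fixed-size chunks, so that datasets larger than GPU memory can still be processed. All names, the chunking scheme, and the launch configuration are illustrative assumptions, not the authors' code.

```cuda
#include <cuda_runtime.h>
#include <cfloat>

// Assign each point in a chunk to its nearest centroid (squared
// Euclidean distance). points: n x d row-major; centroids: k x d.
__global__ void assign_clusters(const float *points, const float *centroids,
                                int *labels, int n, int d, int k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float best_dist = FLT_MAX;
    int best_k = 0;
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        for (int j = 0; j < d; ++j) {
            float diff = points[i * d + j] - centroids[c * d + j];
            dist += diff * diff;
        }
        if (dist < best_dist) { best_dist = dist; best_k = c; }
    }
    labels[i] = best_k;
}

// Hypothetical host-side driver: copy the data set to the device in
// fixed-size chunks so device memory is never exceeded, label each
// chunk on the GPU, and copy the labels back.
void assign_in_chunks(const float *h_points, const float *h_centroids,
                      int *h_labels, int n_total, int d, int k, int chunk) {
    float *d_points, *d_centroids;
    int *d_labels;
    cudaMalloc((void **)&d_points, (size_t)chunk * d * sizeof(float));
    cudaMalloc((void **)&d_centroids, (size_t)k * d * sizeof(float));
    cudaMalloc((void **)&d_labels, (size_t)chunk * sizeof(int));
    cudaMemcpy(d_centroids, h_centroids, (size_t)k * d * sizeof(float),
               cudaMemcpyHostToDevice);

    for (int off = 0; off < n_total; off += chunk) {
        int n = (off + chunk <= n_total) ? chunk : n_total - off;
        cudaMemcpy(d_points, h_points + (size_t)off * d,
                   (size_t)n * d * sizeof(float), cudaMemcpyHostToDevice);
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        assign_clusters<<<blocks, threads>>>(d_points, d_centroids,
                                             d_labels, n, d, k);
        cudaMemcpy(h_labels + off, d_labels, (size_t)n * sizeof(int),
                   cudaMemcpyDeviceToHost);
    }
    cudaFree(d_points); cudaFree(d_centroids); cudaFree(d_labels);
}
```

In the same spirit, the per-chunk copies could be issued with cudaMemcpyAsync on multiple streams to overlap transfers with kernel execution, which is one common way to attack the host-device transfer cost the abstract mentions.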
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2017.12.002