GPU-Accelerated GLRLM Algorithm for Feature Extraction of MRI
Published in: Scientific Reports, 2019-07, Vol. 9 (1), p. 10883, Article 10883
Format: Article
Language: English
Online access: Full text
Abstract: The gray level run length matrix (GLRLM), whose entries are statistics recording the distribution and relationships of image pixels, is a widely used method for extracting statistical features from medical images, e.g., magnetic resonance (MR) images. These features are commonly fed into artificial neural networks to identify and distinguish texture patterns. However, GLRLM construction and feature extraction become tedious and computationally intensive when images are large and of high resolution, or when a single image contains many small or intermediate-sized regions of interest (ROIs), making preprocessing a time-consuming stage. It is therefore important to accelerate this procedure, which is now feasible with the rapid development of massively parallel graphics processing unit (GPU) computing. In this article, we propose a new paradigm based on mature parallel primitives for generating GLRLMs and extracting multiple features for many ROIs simultaneously in a single image. Experiments show that this paradigm is easy to implement and delivers a speedup of more than 5x over an optimized serial counterpart.
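The abstract defines the GLRLM only informally. As a rough illustration (not the paper's GPU paradigm, which is built on parallel primitives), the sketch below constructs a single-direction GLRLM serially; the function name glrlm_horizontal and the toy 4-level image are assumptions made for this example.

```python
import numpy as np

def glrlm_horizontal(image, num_levels):
    """Hypothetical serial GLRLM for the 0-degree (horizontal) direction.

    Entry [g, r-1] counts how many runs of exactly r consecutive pixels
    with gray level g occur along the rows of the quantized image.
    """
    max_run = image.shape[1]
    glrlm = np.zeros((num_levels, max_run), dtype=np.int64)
    for row in image:
        run_level, run_length = row[0], 1
        for pixel in row[1:]:
            if pixel == run_level:
                run_length += 1          # extend the current run
            else:
                glrlm[run_level, run_length - 1] += 1
                run_level, run_length = pixel, 1
        glrlm[run_level, run_length - 1] += 1  # close the final run in the row
    return glrlm

# Tiny image quantized to 4 gray levels (illustrative only).
img = np.array([[0, 0, 1, 1, 1],
                [2, 2, 2, 2, 3]])
print(glrlm_horizontal(img, num_levels=4))
```

Texture features such as short-run emphasis or run-length non-uniformity are then computed as weighted sums over the matrix entries; the paper's contribution is to produce these matrices and features for many ROIs at once on the GPU rather than one ROI at a time as above.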
ISSN: 2045-2322
DOI: | 10.1038/s41598-019-46622-w |