Efficient Flow Processing in 5G-Envisioned SDN-Based Internet of Vehicles Using GPUs

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2021-08, Vol. 22 (8), pp. 5283-5292
Authors: Abbasi, Mahdi; Najafi, Ali; Rafiee, Milad; Khosravi, Mohammad R.; Menon, Varun G.; Muhammad, Ghulam
Format: Article
Language: English
Description
Abstract: In the 5G-envisioned Internet of Vehicles (IoV), a significant volume of data is exchanged through networks between intelligent transport systems (ITS) and clouds or fogs. With the introduction of Software-Defined Networking (SDN), these challenges are addressed by high-speed, flow-based processing of data in network systems. To classify flows of packets in the SDN network, high-throughput packet classification systems are needed. Although software packet classifiers are cheaper and more flexible than hardware classifiers, they deliver only limited performance. A key idea for resolving this problem is parallelizing packet classification on graphics processing units (GPUs). In this paper, we study parallel forms of the Tuple Space Search and Pruned Tuple Space Search algorithms for flow classification on GPUs using CUDA (Compute Unified Device Architecture). The key idea behind the proposed methodology is to transfer the stream of packets from host memory to the global memory of the CUDA device and then assign each packet to a classifier thread. To evaluate the proposed method, GPU-based versions of the algorithms were implemented on two different CUDA devices, and two different CPU-based implementations of the algorithms were used as references. Experimental results showed that GPU computing enhances the performance of Pruned Tuple Space Search remarkably more than that of Tuple Space Search. Moreover, the results demonstrate the computational efficiency of the proposed method for parallelizing packet classification algorithms.
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2020.3038250
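
The abstract describes a one-thread-per-packet mapping: the packet stream is copied from host memory into the GPU's global memory, and each CUDA thread then classifies one packet against the tuple space. Below is a minimal CUDA sketch of that mapping, for illustration only; it is not the authors' implementation. The Rule, Tuple, and Packet layouts, the two-field (source/destination) keys, and the linear scan inside each tuple (standing in for the per-tuple hash lookup of Tuple Space Search and the tuple pruning of the Pruned variant) are all assumptions made for brevity.

// tss_gpu_sketch.cu -- illustrative sketch only; data layouts and field names
// are assumptions, not the representation used in the paper.
#include <climits>
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// A rule matching two header fields (source/destination), already masked to
// its tuple's prefix lengths. Lower priority value wins (assumption).
struct Rule  { uint32_t src, dst; int priority, action; };

// A tuple = one (src prefix length, dst prefix length) combination; its rules
// occupy the contiguous slice [rule_off, rule_off + rule_cnt) of the rule array.
struct Tuple { uint32_t src_mask, dst_mask; int rule_off, rule_cnt; };

struct Packet { uint32_t src, dst; };

// One thread classifies one packet: for every tuple, mask the header with the
// tuple's prefix masks and search that tuple's rules for an exact match.
// Real TSS keeps a hash table per tuple, and Pruned TSS skips tuples that
// cannot match; a plain linear scan keeps this sketch short.
__global__ void classify(const Packet* pkts, int n_pkts,
                         const Tuple* tuples, int n_tuples,
                         const Rule* rules, int* results)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pkts) return;

    Packet p = pkts[i];
    int best_prio = INT_MAX, best_action = -1;

    for (int t = 0; t < n_tuples; ++t) {
        uint32_t s = p.src & tuples[t].src_mask;
        uint32_t d = p.dst & tuples[t].dst_mask;
        for (int r = 0; r < tuples[t].rule_cnt; ++r) {
            Rule rule = rules[tuples[t].rule_off + r];
            if (rule.src == s && rule.dst == d && rule.priority < best_prio) {
                best_prio   = rule.priority;
                best_action = rule.action;
            }
        }
    }
    results[i] = best_action;
}

int main()
{
    // One tuple (/8 source, /8 destination) holding one rule, one matching packet.
    Tuple  h_tuples[] = {{0xFF000000u, 0xFF000000u, 0, 1}};
    Rule   h_rules[]  = {{0x0A000000u, 0xC0000000u, 0, 7}};   // 10/8 -> 192/8 : action 7
    Packet h_pkts[]   = {{0x0A010203u, 0xC0A80001u}};
    int    h_res[1];

    Packet* d_pkts; Tuple* d_tuples; Rule* d_rules; int* d_res;
    cudaMalloc(&d_pkts,   sizeof(h_pkts));
    cudaMalloc(&d_tuples, sizeof(h_tuples));
    cudaMalloc(&d_rules,  sizeof(h_rules));
    cudaMalloc(&d_res,    sizeof(h_res));

    // Transfer the packet stream (and the classifier tables) from host memory
    // to the device's global memory, then launch one classifier thread per packet.
    cudaMemcpy(d_pkts,   h_pkts,   sizeof(h_pkts),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_tuples, h_tuples, sizeof(h_tuples), cudaMemcpyHostToDevice);
    cudaMemcpy(d_rules,  h_rules,  sizeof(h_rules),  cudaMemcpyHostToDevice);

    int n_pkts = 1, threads = 256;
    classify<<<(n_pkts + threads - 1) / threads, threads>>>(
        d_pkts, n_pkts, d_tuples, 1, d_rules, d_res);

    cudaMemcpy(h_res, d_res, sizeof(h_res), cudaMemcpyDeviceToHost);
    printf("packet 0 -> action %d\n", h_res[0]);

    cudaFree(d_pkts); cudaFree(d_tuples); cudaFree(d_rules); cudaFree(d_res);
    return 0;
}

Compiled with nvcc and run, the sketch prints the matched action for the single example packet. A realistic classifier would batch many packets per host-to-device transfer to amortize the copy cost, which is the motivation for streaming the packets into global memory before launching the classifier threads.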