BFAST: High-Speed and Memory-Efficient Approach for NDN Forwarding Engine



Bibliographic Details
Published in: IEEE/ACM Transactions on Networking, 2017-04, Vol. 25 (2), p. 1235-1248
Main authors: Dai, Huichen; Lu, Jianyuan; Wang, Yi; Pan, Tian; Liu, Bin
Format: Article
Language: English
Online access: Order full text
Description
Abstract: Named data networking (NDN) is a future Internet architecture that directly emphasizes accessible content by assigning each piece of content a unique name. Data transmission in NDN is realized via name-based routing and forwarding. A name-based forwarding information base (FIB) usually contains many more, and longer, prefixes than an IP-based one, so name-based forwarding poses greater challenges for the NDN router in terms of high forwarding throughput, low memory consumption, and fast FIB update. In this paper, we present an index data structure called BFAST for the name-based FIB. BFAST builds on a basic hash table and employs a counting Bloom filter to balance the load among hash table slots, so that the number of items in each non-empty slot is close to 1, leading to low search time per slot. Meanwhile, a first-rank-indexed scheme is proposed to substantially reduce the memory consumed by the pointers in the hash table slots. Evaluation results show that, for longest-prefix-match FIB lookup, BFAST achieves a speed of 2.14 million searches per second (MS/s) using one thread, while memory consumption remains reasonably low. By leveraging the parallelism of today's multi-core CPUs, BFAST reaches an FIB lookup speed of 33.64 MS/s using 24 threads, with a latency of around 0.71 μs.
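The load-balancing idea the abstract describes can be illustrated with a toy sketch: each key hashes to k candidate slots, per-slot counters (as in a counting Bloom filter) track occupancy, and insertion picks the least-loaded candidate so non-empty slots stay near one item each. This is a simplified illustration under assumed details (k hash positions derived from one SHA-256 digest, lookup probing all k candidates), not the paper's exact BFAST design with its first-rank-indexed slot encoding.

```python
import hashlib

class BalancedHashTable:
    """Toy counter-balanced hash table (illustrative, not the BFAST paper's
    exact scheme): counters steer each insert to its least-loaded slot."""

    def __init__(self, num_slots=16, k=3):
        self.k = k
        self.num_slots = num_slots
        self.slots = [[] for _ in range(num_slots)]  # buckets of (key, value)
        self.counters = [0] * num_slots              # per-slot load counters

    def _candidates(self, key):
        # Derive k candidate slot indices from one SHA-256 digest (assumed
        # hashing choice for this sketch).
        digest = hashlib.sha256(key.encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.num_slots
                for i in range(self.k)]

    def insert(self, key, value):
        cand = self._candidates(key)
        # Place the item in the least-loaded candidate slot (ties: first).
        target = min(cand, key=lambda s: self.counters[s])
        self.slots[target].append((key, value))
        self.counters[target] += 1

    def lookup(self, key):
        # Probe all k candidate slots; balancing keeps each probe short.
        for s in self._candidates(key):
            for k2, v in self.slots[s]:
                if k2 == key:
                    return v
        return None
```

Because inserts always go to the currently least-loaded candidate, bucket chains stay short even as the table fills, which is the property the abstract credits for lookup time close to one item per non-empty slot.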
ISSN: 1063-6692, 1558-2566
DOI: 10.1109/TNET.2016.2623379