A High-Throughput Network Processor Architecture for Latency-Critical Applications
Published in: IEEE Micro, 2020-01, Vol. 40 (1), p. 50-56
Format: Article
Language: English
Abstract: This article presents recent advancements in the Advanced IO Processor (AIOP), a network processor architecture designed by NXP Semiconductors. The AIOP is a multicore accelerated computing architecture in which each core is equipped with dedicated hardware for rapid task switching on every hardware accelerator call. A hardware preemption controller snoops on accelerator completions and sends task-preemption requests to the cores, reducing the latency of real-time tasks. A priority-thresholding technique avoids latency uncertainty for lower-priority tasks and head-of-line blocking. In this way, the AIOP handles the conflicting requirements of high throughput and low latency for next-generation wireless applications such as WiFi and 5G. In the presence of frequent preemptions, throughput drops by only 3% on the AIOP, compared to 25% on a similar network processor, while the absolute throughput and latency numbers are 2X better. The area and power overhead of adding hardware task scheduling and preemption is only about 3%.
ISSN: 0272-1732, 1937-4143
DOI: 10.1109/MM.2019.2958896
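The abstract above describes a hardware preemption controller that snoops on accelerator completions and applies priority thresholding before interrupting a core. The C sketch below is a minimal software model of that decision, written under stated assumptions: the structure names, the numeric priority scale, and the exact rule (preempt only when the newly ready task is both at or above a configured threshold and more urgent than the task currently running) are illustrative guesses, not the AIOP's documented interface.

```c
/*
 * Minimal software model of the preemption decision described in the
 * abstract: a controller observes accelerator-completion events and
 * decides whether to send a task-preemption request to a core.
 *
 * All names, fields, and the exact thresholding rule are illustrative
 * assumptions, not the AIOP hardware interface.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t task_id;
    uint8_t  priority;          /* higher value = higher priority (assumed) */
} task_t;

typedef struct {
    uint8_t preempt_threshold;  /* only tasks at/above this priority may preempt */
} preempt_ctrl_t;

/*
 * Called (in this model) when an accelerator completion makes `ready`
 * runnable while `running` occupies the target core. Returns true if a
 * preemption request should be sent to that core.
 */
static bool should_preempt(const preempt_ctrl_t *ctrl,
                           const task_t *ready,
                           const task_t *running)
{
    /* Priority thresholding: low-priority completions never trigger
     * preemption, keeping latency predictable for lower-priority tasks
     * and avoiding preemption-induced head-of-line blocking. */
    if (ready->priority < ctrl->preempt_threshold)
        return false;

    /* Preempt only if the newly ready task is strictly more urgent than
     * the task currently running on the core. */
    return ready->priority > running->priority;
}

int main(void)
{
    preempt_ctrl_t ctrl = { .preempt_threshold = 8 };
    task_t running = { .task_id = 1, .priority = 5 };
    task_t low     = { .task_id = 2, .priority = 6 };  /* below threshold */
    task_t urgent  = { .task_id = 3, .priority = 9 };  /* real-time task  */

    printf("low-priority completion preempts:  %d\n",
           should_preempt(&ctrl, &low, &running));     /* 0: stays queued */
    printf("high-priority completion preempts: %d\n",
           should_preempt(&ctrl, &urgent, &running));  /* 1: request sent */
    return 0;
}
```

In this sketch the threshold check comes before the priority comparison, so completions from lower-priority work are simply queued rather than interrupting a core; this mirrors, at a very coarse level, how the abstract motivates thresholding as a way to keep throughput high while still preempting promptly for real-time tasks.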