Simultaneously Reducing Latency and Power Consumption in OpenFlow Switches

Bibliographic Details
Published in: IEEE/ACM Transactions on Networking, 2014-06, Vol. 22 (3), p. 1007-1020
Main Authors: Congdon, Paul T., Mohapatra, Prasant, Farrens, Matthew, Akella, Venkatesh
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing the rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved from bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
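To make the mechanism concrete, the following is a rough software sketch (not the authors' circuit design) of the idea described in the abstract: each port keeps a tiny cache keyed by a packet's flow-defining header fields, and a cache hit supplies the flow's action speculatively so the power-hungry TCAM lookup can be bypassed. All names here (PredictionCache, FlowKey, full_table_lookup) are illustrative assumptions, and the direct-mapped cache is a simplification of whatever prediction circuitry the paper actually evaluates.

    # Hedged sketch of per-port packet prediction, assuming a small
    # direct-mapped cache per port. Not the paper's implementation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowKey:
        """Flow-defining header fields (simplified)."""
        src: str
        dst: str
        proto: int

    class PredictionCache:
        def __init__(self, num_entries: int = 8):
            # A small amount of per-port state: (FlowKey, action) slots.
            self.entries = [None] * num_entries
            self.hits = 0
            self.lookups = 0

        def _index(self, key: FlowKey) -> int:
            return hash(key) % len(self.entries)

        def predict(self, key: FlowKey):
            """Return the cached action on a hit, else None."""
            self.lookups += 1
            slot = self.entries[self._index(key)]
            if slot is not None and slot[0] == key:
                self.hits += 1
                return slot[1]  # prediction hit: full lookup bypassed
            return None

        def update(self, key: FlowKey, action: str):
            self.entries[self._index(key)] = (key, action)

    def full_table_lookup(key: FlowKey) -> str:
        # Stand-in for the power-hungry TCAM/flow-table search.
        return f"forward({key.dst})"

    def handle_packet(cache: PredictionCache, key: FlowKey) -> str:
        action = cache.predict(key)
        if action is None:          # miss: fall back to the full lookup
            action = full_table_lookup(key)
            cache.update(key, action)
        return action

    if __name__ == "__main__":
        cache = PredictionCache()
        # Temporal locality: packets of one flow arrive back to back.
        flow = FlowKey("10.0.0.1", "10.0.0.2", 6)
        for _ in range(10):
            handle_packet(cache, flow)
        print(f"hit rate: {cache.hits / cache.lookups:.0%}")  # 90% here

Running the sketch shows why temporal locality matters: after one cold miss, every subsequent packet of the same flow hits the cache, which is the effect the abstract's 97% correct-prediction figure exploits at scale.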
ISSN: 1063-6692 (print), 1558-2566 (electronic)
DOI: 10.1109/TNET.2013.2270436