Characterizing sources and remedies for packet loss in network intrusion detection systems
Saved in:
Main authors: | , |
---|---|
Format: | Conference paper |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Network intrusion detection is becoming an increasingly important tool to protect critical information and infrastructure from unauthorized access. Network intrusion detection systems (NIDS) are commonly based on general-purpose workstations connected to a network tap. However, these general-purpose systems, although cost-efficient, are not able to sustain the packet rates of modern high-speed networks. The resulting packet loss degrades the system's overall effectiveness, since attackers can intentionally overload the NIDS to evade detection. This paper studies the performance requirements of a commonly used open-source NIDS on a modern workstation architecture. Using full-system simulation, this paper characterizes the impact of a number of system-level optimizations and architectural trends on packet loss, and highlights the key bottlenecks for this class of network-intensive workloads. Results suggest that interrupt aggregation combined with rule set pruning is most effective in minimizing packet loss. Surprisingly, the workload also exhibits sufficient locality to benefit from larger level-2 caches as well. On the other hand, many other common architecture and system optimizations have only a negligible impact on throughput. |
---|---|
DOI: | 10.1109/IISWC.2005.1526016 |
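The two remedies the summary singles out, interrupt aggregation and rule set pruning, correspond to standard administrative knobs on a Linux-based sensor. A minimal sketch, assuming a Linux host with an `ethtool`-capable NIC; the interface name `eth0`, the coalescing values, and the rule-file path are illustrative assumptions, not taken from the paper:

```shell
# Interrupt aggregation (NIC interrupt coalescing): deliver one interrupt
# per batch of received packets instead of one per packet, reducing the
# per-packet interrupt-handling overhead on the capture host.
# (eth0 and the 100 us / 64-frame values are illustrative.)
ethtool -C eth0 rx-usecs 100 rx-frames 64

# Inspect the current coalescing settings to confirm the change.
ethtool -c eth0

# Rule set pruning: a Snort-style open-source NIDS loads detection rules
# via include directives in its configuration; disabling rule files that
# do not apply to the monitored network reduces per-packet matching work.
# (Hypothetical example line from a snort.conf-style config:)
#   # include $RULE_PATH/obsolete-services.rules
```

Both changes trade a small increase in per-packet latency (batched interrupts) or detection coverage (fewer rules) for the throughput headroom needed to avoid packet loss under load.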