PipeLLM: Fast and Confidential Large Language Model Services with Speculative Pipelined Encryption
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Confidential computing on GPUs, such as the NVIDIA H100, mitigates the
security risks of outsourced Large Language Models (LLMs) by enforcing strong
isolation and data encryption. Nonetheless, this encryption incurs a
significant performance overhead, causing throughput drops of up to 52.8
percent and 88.2 percent when serving OPT-30B and OPT-66B, respectively. To
address this challenge, we introduce PipeLLM, a user-transparent runtime
system. PipeLLM removes the overhead by overlapping encryption with GPU
computation through pipelining, an idea inspired by CPU instruction
pipelining, thereby effectively concealing the latency increase caused by
encryption. The primary technical challenge is that, unlike CPUs, the
encryption module lacks prior knowledge of which data will need encryption
until the GPUs request it. To this end, we propose speculative pipelined
encryption, which predicts the data requiring encryption by analyzing the
serving patterns of LLMs. Further, we develop an efficient, low-cost pipeline
relinquishing approach for instances of incorrect predictions. Our experiments
on an NVIDIA H100 GPU show that, compared with vanilla systems without
confidential computing (e.g., vLLM, PEFT, and FlexGen), PipeLLM incurs modest
overhead (less than 19.6 percent in throughput) across LLM sizes from 13B to
175B. |
DOI: | 10.48550/arxiv.2411.03357 |
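
To make the abstract's core idea concrete, below is a minimal Python sketch of speculative pipelined encryption: a background worker encrypts the chunk predicted to be requested next while the current chunk is consumed, and the speculative result is relinquished on a misprediction. This is an illustration under stated assumptions, not PipeLLM's actual implementation; the class name `SpeculativePipeline`, the `predict_next` callback, and the hash-based `encrypt` stand-in are all hypothetical.

```python
import hashlib
import threading


def encrypt(chunk: bytes) -> bytes:
    # Hypothetical stand-in for real encryption (e.g., AES-GCM); any
    # deterministic transform suffices for this sketch.
    return hashlib.sha256(chunk).digest()


class SpeculativePipeline:
    """Toy model of speculative pipelined encryption.

    While the consumer (the "GPU") processes chunk i, a background worker
    (the "encryption module") speculatively encrypts the chunk predicted
    to be requested next. A correct prediction makes the ciphertext
    available off the critical path; a misprediction relinquishes the
    speculative result and encrypts the requested chunk on demand.
    """

    def __init__(self, chunks, predict_next):
        self.chunks = chunks            # chunk_id -> plaintext
        self.predict_next = predict_next
        self.hits = 0
        self.misses = 0
        self._ready = {}                # chunk_id -> speculative ciphertext
        self._worker = None

    def _speculate(self, chunk_id):
        # Runs concurrently with the consumer's work on the current chunk.
        self._ready[chunk_id] = encrypt(self.chunks[chunk_id])

    def fetch(self, chunk_id):
        # Wait for any in-flight speculation before checking its result.
        if self._worker is not None:
            self._worker.join()
        if chunk_id in self._ready:
            self.hits += 1
            ciphertext = self._ready.pop(chunk_id)
        else:
            # Misprediction: relinquish speculative work and encrypt the
            # requested chunk on the critical path.
            self.misses += 1
            self._ready.clear()
            ciphertext = encrypt(self.chunks[chunk_id])
        # Kick off speculation for the predicted next request.
        nxt = self.predict_next(chunk_id)
        if nxt in self.chunks:
            self._worker = threading.Thread(target=self._speculate, args=(nxt,))
            self._worker.start()
        return ciphertext


if __name__ == "__main__":
    chunks = {i: f"kv-block-{i}".encode() for i in range(8)}
    pipe = SpeculativePipeline(chunks, predict_next=lambda i: i + 1)
    for i in [0, 1, 2, 3, 5, 6, 7]:     # one out-of-order access at 5
        pipe.fetch(i)
    print(f"speculation hits={pipe.hits} misses={pipe.misses}")
```

In this toy run the sequential prefix yields speculation hits, while the jump from 3 to 5 triggers one relinquish-and-reencrypt, mirroring the paper's claim that correct predictions hide encryption latency and mispredictions pay only a modest recovery cost.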