Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention

Interacting with humans through multi-turn conversations is a fundamental feature of large language models (LLMs). However, existing LLM serving engines executing multi-turn conversations are inefficient due to the need to repeatedly compute the key-value (KV) caches of historical tokens, incurring...
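The inefficiency the abstract describes can be illustrated with a toy sketch (not the paper's implementation): without reuse, each turn re-prefills the entire conversation history, so per-token work grows quadratically with the number of turns; reusing saved KV caches keeps it linear. The function names and turn lengths below are hypothetical.

```python
# Toy cost model: count how many tokens must be processed during prefill
# across conversation turns, with and without reusing historical KV caches.

def tokens_recomputed(turn_lengths):
    """No KV reuse: every turn recomputes attention over the whole history."""
    total, history = 0, 0
    for n in turn_lengths:
        history += n   # context now includes this turn
        total += history  # entire context is re-prefilled
    return total

def tokens_with_kv_reuse(turn_lengths):
    """KV caches of historical tokens are saved, so only new tokens are computed."""
    return sum(turn_lengths)

turns = [100, 80, 120]  # hypothetical token counts per turn
print(tokens_recomputed(turns))     # 100 + 180 + 300 = 580
print(tokens_with_kv_reuse(turns))  # 300
```

The gap widens with every additional turn, which is why recomputation dominates serving cost in long multi-turn conversations.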

Bibliographic Details
Main authors: Gao, Bin; He, Zhuomin; Sharma, Puru; Kang, Qingxuan; Jevdjic, Djordje; Deng, Junbo; Yang, Xingkun; Yu, Zhou; Zuo, Pengfei
Format: Article
Language: English