Cross-layer Attention Sharing for Large Language Models

As large language models (LLMs) evolve, the increase in model depth and number of parameters introduces substantial redundancy. To improve the efficiency of the attention mechanism, previous works primarily compress the KV cache or group attention heads, while largely overlooking the redundancy between layers...
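The abstract's contrast between per-layer attention and cross-layer sharing can be illustrated with a short sketch. The code below is not the paper's method; the SharedKVAttention class, the grouping scheme, and all names are hypothetical, assuming PyTorch (>= 2.0 for scaled_dot_product_attention). The idea shown: layers within a group reuse the key/value projections computed by the group's first layer, so K/V are computed and cached once per group rather than once per layer.

```python
# Hypothetical sketch of cross-layer KV sharing (not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedKVAttention(nn.Module):
    """Self-attention layer that either computes its own K/V or borrows them."""

    def __init__(self, d_model: int, n_heads: int, computes_kv: bool):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.computes_kv = computes_kv
        if computes_kv:  # only the first layer of a sharing group owns K/V projections
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)

    def _split(self, t, B, T):
        # Reshape (B, T, d_model) -> (B, n_heads, T, d_head) for attention.
        return t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

    def forward(self, x, shared_kv=None):
        B, T, _ = x.shape
        q = self._split(self.q_proj(x), B, T)
        if self.computes_kv:
            # Compute K/V once; later layers in the group reuse them, so the
            # KV cache is stored per group instead of per layer.
            k = self._split(self.k_proj(x), B, T)
            v = self._split(self.v_proj(x), B, T)
            shared_kv = (k, v)
        else:
            k, v = shared_kv  # borrow K/V from the group's first layer
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(B, T, -1)
        return self.out_proj(out), shared_kv


# Example: a group of 3 layers sharing one set of K/V projections.
layers = [SharedKVAttention(256, 8, computes_kv=(i == 0)) for i in range(3)]
x, kv = torch.randn(2, 16, 256), None
for layer in layers:
    x, kv = layer(x, kv)
```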

Bibliographic Details
Main Authors: Mu, Yongyu; Wu, Yuzhang; Fan, Yuchun; Wang, Chenglong; Li, Hengyu; He, Qiaozhi; Yang, Murun; Xiao, Tong; Zhu, Jingbo
Format: Article
Language: English