Gemma 2: Improving Open Language Models at a Practical Size
Saved in:
Main authors: | |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | In this work, we introduce Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters. In this new version, we apply several known technical modifications to the Transformer architecture, such as interleaving local-global attentions (Beltagy et al., 2020a) and group-query attention (Ainslie et al., 2023). We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next token prediction. The resulting models deliver the best performance for their size, and even offer competitive alternatives to models that are 2-3 times bigger. We release all our models to the community. |
DOI: | 10.48550/arxiv.2408.00118 |
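
As the abstract notes, the 2B and 9B models are trained with knowledge distillation (Hinton et al., 2015) rather than plain next-token prediction. The sketch below illustrates the general idea of a token-level distillation loss, where a student matches a larger teacher's next-token distribution; the function name, temperature, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a token-level knowledge-distillation loss: the
# student is trained to match the teacher's next-token distribution instead
# of one-hot next-token targets. Names and defaults are assumptions.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary, averaged over tokens.

    Both logit tensors have shape (batch, seq_len, vocab_size).
    """
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits / t, dim=-1)
    # kl_div expects log-probabilities for the input and, with log_target=True,
    # log-probabilities for the target as well. The t**2 factor keeps gradient
    # scales comparable across temperatures (Hinton et al., 2015).
    kl = F.kl_div(student_logp.flatten(0, 1),
                  teacher_logp.flatten(0, 1),
                  reduction="batchmean",
                  log_target=True)
    return kl * (t ** 2)


# Example with random logits standing in for student and teacher outputs.
if __name__ == "__main__":
    student = torch.randn(2, 8, 256)   # small student over a toy vocabulary
    teacher = torch.randn(2, 8, 256)   # larger frozen teacher, same vocabulary
    print(distillation_loss(student, teacher, temperature=2.0))
```

In a training loop of this kind, the teacher's distribution simply takes the place of the one-hot targets used in standard next-token prediction; the rest of the language-modelling setup is unchanged.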