Gemma 2: Improving Open Language Models at a Practical Size

Bibliographic Details
Authors: Sessa, Pier Giuseppe, Hardin, Cassidy, Bhupatiraju, Surya, Hussenot, Léonard, Shahriari, Bobak, Ramé, Alexandre, Ferret, Johan, Friesen, Abe, Tsitsulin, Anton, Vieillard, Nino, Girgin, Sertan, Hoffman, Matt, Grill, Jean-Bastien, Neyshabur, Behnam, Abdagic, Alvin, Carl, Amanda, Brock, Andy, Paterson, Antonia, Royal, Brandon, Choquette-Choo, Christopher A, Weinberger, David, Vijaykumar, Dimple, Herbison, Dustin, Bandy, Elisa, Wang, Emma, Noland, Eric, Moreira, Erica, Senter, Evan, Eltyshev, Evgenii, Rasskin, Gabriel, Wei, Gary, Cameron, Glenn, Martins, Gus, Hashemi, Hadi, Klimczak-Plucińska, Hanna, Zhou, Jack, Stanway, Jeff, Chan, Jetha, Becker, Jocelyn, Fernandez, Joe, Gordon, Josh, Lipschultz, Josh, Newlan, Josh, Ji, Ju-yeong, Mohamed, Kareem, Badola, Kartikeya, Black, Kat, Millican, Katie, Greene, Kish, Sjoesund, Lars Lowe, Usui, Lauren, Kilpatrick, Logan, Dixon, Lucas, Reid, Machel, Iverson, Mark, Miller, Matt, Rahtz, Matthew, Risdal, Meg, Rahman, Mofi, Khatwani, Mohit, Bardoliwalla, Nenshad, Dumai, Neta, Botarda, Pankil, Barham, Paul, Culliton, Phil, Comanescu, Ramona, Jana, Reena, Agarwal, Rishabh, Saadat, Samaneh, Cogan, Sarah, Perrin, Sarah, Arnold, Sébastien M. R, Krause, Sebastian, Garg, Shruti, Sheth, Shruti, Chan, Susan, Yu, Ting, Kocisky, Tomas, Jain, Vihan, Yadav, Vikas, Meshram, Vilobh, Dharmadhikari, Vishal, Barkley, Warren, Shen, Zhe, Gong, Zhitao, Kirk, Phoebe, Rao, Anand, Warkentin, Tris, Ghahramani, Zoubin, Hadsell, Raia, Banks, Jeanine, Dragan, Anca, Vinyals, Oriol, Dean, Jeff, Kavukcuoglu, Koray, Farabet, Clement, Fiedel, Noah, Kenealy, Kathleen, Dadashi, Robert, Andreev, Alek
Format: Article
Language: English
Description
Abstract: In this work, we introduce Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters. In this new version, we apply several known technical modifications to the Transformer architecture, such as interleaving local-global attentions (Beltagy et al., 2020a) and grouped-query attention (Ainslie et al., 2023). We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next-token prediction. The resulting models deliver the best performance for their size, and even offer competitive alternatives to models that are 2-3 times bigger. We release all our models to the community.
DOI:10.48550/arxiv.2408.00118
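The abstract notes that the 2B and 9B models are trained with knowledge distillation rather than next-token prediction: the student is fit to a larger teacher's full output distribution instead of one-hot targets. The sketch below shows the generic distillation objective of Hinton et al. (2015) in NumPy; the specific loss form, temperature, and teacher used for Gemma 2 are not stated in this record, so treat everything here as an illustrative assumption.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis, numerically stabilized."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy of the student against the teacher's soft targets.

    Both inputs have shape (num_positions, vocab_size). This is the generic
    objective of Hinton et al. (2015); it replaces the one-hot next-token
    targets of standard language-model training with the teacher's
    probability distribution at each position.
    """
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    # Sum over the vocabulary, average over token positions.
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```

The loss is minimized when the student reproduces the teacher's distribution exactly, at which point it equals the teacher's entropy; any mismatch adds a KL-divergence penalty on top.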