A Survey of Small Language Models

Bibliographic Details
Published in: arXiv.org, 2024-10
Authors: Nguyen, Chien Van; Shen, Xuan; Aponte, Ryan; Xia, Yu; Basu, Samyadeep; Hu, Zhengmian; Chen, Jian; Parmar, Mihir; Kunapuli, Sasidhar; Barrow, Joe; Wu, Junda; Singh, Ashish; Wang, Yu; Gu, Jiuxiang; Dernoncourt, Franck; Ahmed, Nesreen K.; Lipka, Nedim; Zhang, Ruiyi; Chen, Xiang; Yu, Tong; Kim, Sungchul; Deilamsalehy, Hanieh; Park, Namyong; Rimer, Mike; Zhang, Zhehao; Yang, Huanrui; Rossi, Ryan A.; Nguyen, Thien Huu
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Small Language Models (SLMs) have become increasingly important due to their efficiency and strong performance on a wide range of language tasks with minimal computational resources, making them well suited to many settings, including on-device, mobile, and edge deployments. In this article, we present a comprehensive survey of SLMs, focusing on their architectures, training techniques, and model compression techniques. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including model compression, pruning, and quantization techniques. We summarize the datasets and evaluation metrics commonly used for benchmarking SLMs. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models.
ISSN: 2331-8422
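
The abstract names pruning and quantization among the compression techniques the survey covers. As a rough, generic illustration of what those two operations do (not code from the paper; the function names and parameters below are hypothetical), a minimal NumPy sketch of magnitude pruning followed by symmetric 8-bit quantization of a weight matrix might look like this:

    # Minimal sketch of two compression techniques named in the abstract:
    # magnitude pruning and symmetric uniform int8 quantization.
    # Illustrative only; the survey discusses many more sophisticated variants.
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the fraction `sparsity` of weights with smallest magnitude."""
        threshold = np.quantile(np.abs(weights), sparsity)
        return np.where(np.abs(weights) < threshold, 0.0, weights)

    def quantize_int8(weights: np.ndarray):
        """Map float weights to int8 with a single per-tensor scale factor."""
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale  # approximate reconstruction: q * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4)).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.5)   # half the entries become zero
    q, scale = quantize_int8(w_pruned)
    print("max reconstruction error:", np.abs(w_pruned - q * scale).max())

Both steps trade a small amount of accuracy for memory and compute savings, which is the basic bargain behind the on-device and edge deployments the abstract highlights.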