A Survey of Small Language Models

Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform a wide range of language tasks with minimal computational resources, making them ideal for many settings, including on-device, mobile, and edge deployments. In this article, we present a comprehensive survey of SLMs, focusing on their architectures, training techniques, and model compression techniques. We propose a novel taxonomy for categorizing the methods used to optimize SLMs, including model compression, pruning, and quantization techniques. We summarize the benchmark datasets useful for evaluating SLMs, along with the evaluation metrics commonly used. Additionally, we highlight key open challenges that remain to be addressed. Our survey aims to serve as a valuable resource for researchers and practitioners interested in developing and deploying small yet efficient language models.

Bibliographic Details
Published in: arXiv.org, 2024-10-25
Main Authors: Chien Van Nguyen; Shen, Xuan; Aponte, Ryan; Yu, Xia; Basu, Samyadeep; Hu, Zhengmian; Chen, Jian; Parmar, Mihir; Kunapuli, Sasidhar; Barrow, Joe; Wu, Junda; Singh, Ashish; Wang, Yu; Gu, Jiuxiang; Dernoncourt, Franck; Ahmed, Nesreen K; Lipka, Nedim; Zhang, Ruiyi; Chen, Xiang; Yu, Tong; Kim, Sungchul; Deilamsalehy, Hanieh; Park, Namyong; Rimer, Mike; Zhang, Zhehao; Yang, Huanrui; Rossi, Ryan A; Nguyen, Thien Huu
Format: Article
Language: eng
Subjects: Taxonomy
Online Access: Full text
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Rights: 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License").