An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
Saved in:
Main authors: | Bhatt, Gantavya; Chen, Yifang; Das, Arnav M; Zhang, Jifan; Truong, Sang T; Mussmann, Stephen; Zhu, Yinglun; Bilmes, Jeffrey; Du, Simon S; Jamieson, Kevin; Ash, Jordan T; Nowak, Robert D |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
Online access: | Order full text |
creator | Bhatt, Gantavya; Chen, Yifang; Das, Arnav M; Zhang, Jifan; Truong, Sang T; Mussmann, Stephen; Zhu, Yinglun; Bilmes, Jeffrey; Du, Simon S; Jamieson, Kevin; Ash, Jordan T; Nowak, Robert D |
description | Supervised finetuning (SFT) on instruction datasets has played a crucial role
in achieving the remarkable zero-shot generalization capabilities observed in
modern large language models (LLMs). However, the annotation efforts required
to produce high quality responses for instructions are becoming prohibitively
expensive, especially as the number of tasks spanned by instruction datasets
continues to increase. Active learning is effective in identifying useful
subsets of samples to annotate from an unlabeled pool, but its high
computational cost remains a barrier to its widespread applicability in the
context of LLMs. To mitigate the annotation cost of SFT and circumvent the
computational bottlenecks of active learning, we propose using experimental
design. Experimental design techniques select the most informative samples to
label, and typically maximize some notion of uncertainty and/or diversity. In
our work, we implement a framework that evaluates several existing and novel
experimental design techniques and find that these methods consistently yield
significant gains in label efficiency with little computational overhead. On
generative tasks, our methods achieve the same generalization performance with
only $50\%$ of annotation cost required by random sampling. |
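To make the kind of selection rule the abstract describes concrete, the sketch below scores unlabeled prompts by predictive uncertainty (mean next-token entropy under a pretrained language model) and sends the top-scoring ones for annotation. This is a minimal illustrative example only: the checkpoint name, prompt pool, and budget are placeholders, and it is not the authors' implementation, which also studies diversity-based criteria.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the paper's base model and exact selection rules may differ.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_token_entropy(prompt: str) -> float:
    """Average predictive entropy over the prompt's tokens (higher = more uncertain)."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits              # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (1, seq_len)
    return entropy.mean().item()

def select_for_annotation(pool: list[str], budget: int) -> list[str]:
    """Return the `budget` most uncertain prompts from the unlabeled pool."""
    return sorted(pool, key=mean_token_entropy, reverse=True)[:budget]

# Hypothetical unlabeled instruction pool.
unlabeled_pool = [
    "Summarize the following news article in two sentences.",
    "Translate 'good morning' into French.",
    "Explain the difference between supervised and unsupervised learning.",
]
to_annotate = select_for_annotation(unlabeled_pool, budget=1)
```

A diversity-oriented variant would instead embed the prompts and pick a well-spread subset (for example, greedy k-center selection on the embeddings); the paper evaluates both kinds of criteria within a single framework.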
doi_str_mv | 10.48550/arxiv.2401.06692 |
format | Article |
identifier | DOI: 10.48550/arxiv.2401.06692 |
language | eng |
recordid | cdi_arxiv_primary_2401_06692 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
title | An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models |
url | https://arxiv.org/abs/2401.06692 |