CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models

Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9%, while using a domain expert 23x smaller.
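The abstract describes combining a small fine-tuned model with a large black-box model at the probability level, through a small network learned on a validation set. The following is a minimal sketch of one such probability-level combination in PyTorch; the class name, input features, and combiner architecture are illustrative assumptions, as the abstract does not specify them.

```python
# Sketch only: combine a large black-box LM with a small fine-tuned LM at the
# probability level via a learned mixture weight. The architecture and
# features below are assumptions; the abstract only states that a small
# network is learned on a small validation set.
import torch
import torch.nn as nn

class ProbCombiner(nn.Module):
    """Predicts a mixture weight lambda in [0, 1] from simple features of the
    two models' next-token distributions."""
    def __init__(self, n_features: int = 2, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # keeps lambda in [0, 1]
        )

    def forward(self, p_small: torch.Tensor, p_large: torch.Tensor) -> torch.Tensor:
        # p_small, p_large: (batch, vocab) next-token probabilities from the
        # small white-box LM and the large black-box LM, respectively.
        # Hypothetical features: each distribution's entropy.
        ent_s = -(p_small * p_small.clamp_min(1e-9).log()).sum(-1, keepdim=True)
        ent_l = -(p_large * p_large.clamp_min(1e-9).log()).sum(-1, keepdim=True)
        lam = self.net(torch.cat([ent_s, ent_l], dim=-1))   # (batch, 1)
        return lam * p_small + (1.0 - lam) * p_large        # (batch, vocab)

# Training sketch: only the tiny combiner is optimized, on a small validation
# set, by negative log-likelihood of the gold next token. `validation_batches`
# is a hypothetical iterator of (p_small, p_large, gold) tensors.
combiner = ProbCombiner()
opt = torch.optim.Adam(combiner.parameters(), lr=1e-3)
for p_small, p_large, gold in validation_batches:
    p = combiner(p_small, p_large)
    loss = -p.gather(-1, gold.unsqueeze(-1)).clamp_min(1e-9).log().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because both LMs are queried only for output probabilities, a combination of this kind needs no access to the black-box model's weights or intermediate activations, which is the constraint the paper operates under.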

Detailed description

Saved in:
Bibliographic Details
Main authors: Ormazabal, Aitor; Artetxe, Mikel; Agirre, Eneko
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: Order full text
creator Ormazabal, Aitor; Artetxe, Mikel; Agirre, Eneko
description Methods for adapting language models (LMs) to new tasks and domains have traditionally assumed white-box access to the model, and work by modifying its parameters. However, this is incompatible with a recent trend in the field, where the highest quality models are only available as black-boxes through inference APIs. Even when the model weights are available, the computational cost of fine-tuning large LMs can be prohibitive for most practitioners. In this work, we present a lightweight method for adapting large LMs to new domains and tasks, assuming no access to their weights or intermediate activations. Our approach fine-tunes a small white-box LM and combines it with the large black-box LM at the probability level through a small network, learned on a small validation set. We validate our approach by adapting a large LM (OPT-30B) to several domains and a downstream task (machine translation), observing improved performance in all cases, of up to 9%, while using a domain expert 23x smaller.
doi 10.48550/arxiv.2305.16876
format Article
identifier DOI: 10.48550/arxiv.2305.16876
language eng
recordid cdi_arxiv_primary_2305_16876
source arXiv.org
subjects Computer Science - Computation and Language
title CombLM: Adapting Black-Box Language Models through Small Fine-Tuned Models
url https://arxiv.org/abs/2305.16876