Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values

Although Shapley values have been shown to be highly effective for identifying harmful training instances, dataset size and model complexity constraints limit the ability to apply Shapley-based data valuation to fine-tuning large pre-trained language models. To address this, we propose TS-DShapley, an algorithm that reduces the computational cost of Shapley-based data valuation through: 1) an efficient sampling-based method that aggregates Shapley values computed from subsets for valuation of the entire training set, and 2) a value transfer method that leverages value information extracted from a simple classifier trained using representations from the target language model. Our experiments applying TS-DShapley to select data for fine-tuning BERT-based language models on benchmark natural language understanding (NLU) datasets show that TS-DShapley outperforms existing data selection methods. Further, TS-DShapley can filter fine-tuning data to increase language model performance compared to training with the full fine-tuning dataset.
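
For orientation, below is a minimal sketch of the two mechanisms the abstract describes: Shapley values estimated on sampled subsets of the training data and then aggregated, with a simple classifier over the target language model's representations serving as the cheap valuation model. This is an illustration under stated assumptions, not the authors' implementation: random vectors stand in for precomputed frozen-LM embeddings, logistic regression plays the "simple classifier", plain Monte Carlo permutation sampling replaces the paper's sampling scheme, and the helper names (utility, subset_shapley) and all hyperparameters are invented for the example.

# Sketch of TS-DShapley-style data valuation (illustrative, not the
# authors' released code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for frozen [CLS] embeddings of the train and dev splits;
# in practice these would come from the target language model.
n_train, dim = 120, 16
X_train = rng.normal(size=(n_train, dim))
y_train = (X_train[:, 0] + 0.2 * rng.normal(size=n_train) > 0).astype(int)
X_dev = rng.normal(size=(60, dim))
y_dev = (X_dev[:, 0] > 0).astype(int)

def utility(idx):
    """Dev accuracy of the cheap proxy classifier trained on subset idx."""
    idx = list(idx)
    if len(set(y_train[idx])) < 2:   # empty or single-class subset:
        p = y_dev.mean()             # fall back to majority-class accuracy
        return max(p, 1 - p)
    clf = LogisticRegression(max_iter=200).fit(X_train[idx], y_train[idx])
    return clf.score(X_dev, y_dev)

def subset_shapley(subset, n_perms=10):
    """Monte Carlo Shapley estimates restricted to one sampled subset."""
    values = dict.fromkeys(subset, 0.0)
    for _ in range(n_perms):
        prev, chosen = utility([]), []
        for i in rng.permutation(subset):
            chosen.append(i)
            curr = utility(chosen)
            values[i] += (curr - prev) / n_perms   # marginal contribution
            prev = curr
    return values

# Idea 1: aggregate Shapley values computed on several random subsets
# into estimates for the entire training set.
totals, counts = np.zeros(n_train), np.zeros(n_train)
for _ in range(4):
    subset = rng.choice(n_train, size=30, replace=False).tolist()
    for i, v in subset_shapley(subset).items():
        totals[i] += v
        counts[i] += 1
values = np.divide(totals, counts, out=np.zeros(n_train), where=counts > 0)

# Idea 2: transfer the values to the target model by keeping only the
# highest-valued examples for its fine-tuning run.
keep = np.argsort(values)[-int(0.8 * n_train):]
print(f"Selected {len(keep)} of {n_train} candidate fine-tuning examples.")

In practice, the selected indices would feed the actual fine-tuning run of the BERT-based model; the point of the transfer step is that the expensive valuation loop never touches the large model itself.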

Bibliographic Details
Main Authors: Schoch, Stephanie; Mishra, Ritwick; Ji, Yangfeng
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Published: 2023-06-16
DOI: 10.48550/arxiv.2306.10165
Source: arXiv.org
Online Access: https://arxiv.org/abs/2306.10165 (open access)