Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More

Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer that often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and only training the soft prompt. This not only reduces the computational cost and storage overhead of full-model fine-tuning; we also demonstrate that this very parameter efficiency intrinsic to SPT can enhance cross-lingual transfer performance to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or its reparameterization, affect cross-lingual transfer performance.
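To make the mechanism concrete, below is a minimal sketch of soft prompt tuning with a frozen encoder, assuming a Hugging Face Transformers setup. The model name (xlm-roberta-base), prompt length, learning rate, number of labels, and the classification-head arrangement are illustrative assumptions, not details taken from the paper.

# Minimal sketch of Soft Prompt Tuning: a small matrix of learnable prompt
# embeddings is prepended to the token embeddings of a frozen PLM, and only
# that matrix receives gradient updates. All hyperparameters are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "xlm-roberta-base"   # assumed multilingual PLM
prompt_length = 16                # assumed prompt length

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Freeze every PLM parameter; only the soft prompt below is trained.
# (How the task head or verbalizer is handled is simplified away in this sketch.)
for param in model.parameters():
    param.requires_grad = False

hidden_size = model.config.hidden_size
soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)

def forward_with_prompt(input_ids, attention_mask):
    # Embed the tokens, prepend the soft prompt, and extend the attention mask.
    token_embeds = model.get_input_embeddings()(input_ids)
    batch_size = token_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
    prompt_mask = torch.ones(batch_size, prompt_length, dtype=attention_mask.dtype)
    full_mask = torch.cat([prompt_mask, attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=full_mask)

# The optimizer sees only the soft prompt, mirroring "only training the soft prompt".
optimizer = torch.optim.AdamW([soft_prompt], lr=3e-2)

batch = tokenizer(["an example sentence"], return_tensors="pt", padding=True)
labels = torch.tensor([1])
outputs = forward_with_prompt(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.cross_entropy(outputs.logits, labels)
loss.backward()        # gradients reach only soft_prompt
optimizer.step()

Prompt length and reparameterization (for example, producing the prompt through a small trainable network rather than optimizing it directly) are the kinds of prompt-related factors the abstract says the paper investigates.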

Detailed description

Saved in:
Bibliographic Details
Published in: arXiv.org 2024-02
Main Authors: Philippy, Fred, Guo, Siwen, Haddadan, Shohreh, Lothritz, Cedric, Klein, Jacques, Bissyandé, Tegawendé F
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Philippy, Fred; Guo, Siwen; Haddadan, Shohreh; Lothritz, Cedric; Klein, Jacques; Bissyandé, Tegawendé F
description Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer that often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and only training the soft prompt. This not only reduces the computational cost and storage overhead of full-model fine-tuning; we also demonstrate that this very parameter efficiency intrinsic to SPT can enhance cross-lingual transfer performance to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or its reparameterization, affect cross-lingual transfer performance.
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2923177891
source Free E-Journals
subjects Mathematical models
Parameter modification
title Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More