Soft Prompt Tuning for Cross-Lingual Transfer: When Less is More
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Soft Prompt Tuning (SPT) is a parameter-efficient method for adapting pre-trained language models (PLMs) to specific tasks by inserting learnable embeddings, or soft prompts, at the input layer of the PLM, without modifying its parameters. This paper investigates the potential of SPT for cross-lingual transfer. Unlike previous studies on SPT for cross-lingual transfer that often fine-tune both the soft prompt and the model parameters, we adhere to the original intent of SPT by keeping the model parameters frozen and training only the soft prompt. This not only reduces the computational cost and storage overhead of full-model fine-tuning, but we also demonstrate that the very parameter efficiency intrinsic to SPT can enhance cross-lingual transfer performance to linguistically distant languages. Moreover, we explore how different factors related to the prompt, such as its length or its reparameterization, affect cross-lingual transfer performance.
DOI: 10.48550/arxiv.2402.03782
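
As a rough illustration of the setup described in the summary (a frozen PLM with a trainable soft prompt prepended at the input layer), the following is a minimal PyTorch sketch. It is not the authors' implementation; all identifiers (`SoftPromptWrapper`, `backbone`, `prompt_length`) are hypothetical placeholders.

```python
import torch
import torch.nn as nn


class SoftPromptWrapper(nn.Module):
    """Soft prompt tuning sketch: the pre-trained backbone stays frozen and
    only a small matrix of prompt embeddings is learned."""

    def __init__(self, backbone: nn.Module, embedding: nn.Embedding, prompt_length: int = 20):
        super().__init__()
        self.backbone = backbone
        self.embedding = embedding
        # Freeze all pre-trained parameters; they receive no gradient updates.
        for p in self.backbone.parameters():
            p.requires_grad = False
        for p in self.embedding.parameters():
            p.requires_grad = False
        # Learnable soft prompt: prompt_length "virtual token" embeddings.
        self.soft_prompt = nn.Parameter(
            torch.randn(prompt_length, embedding.embedding_dim) * 0.02
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        token_embeds = self.embedding(input_ids)  # (batch, seq_len, dim)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        # Prepend the soft prompt to the token embeddings and feed the
        # combined sequence to the frozen backbone.
        return self.backbone(torch.cat([prompt, token_embeds], dim=1))


# Only the soft prompt is handed to the optimizer, so the per-task trainable
# (and stored) state is just prompt_length x embedding_dim parameters.
# model = SoftPromptWrapper(backbone, embedding, prompt_length=20)
# optimizer = torch.optim.AdamW([model.soft_prompt], lr=0.3)
```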