Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
Format: Article
Language: English
Online access: Order full text
Summary: With the advent of foundation models, prompt tuning has positioned itself as
an important technique for directing model behaviors and eliciting desired
responses. Prompt tuning involves selecting appropriate keywords to include in
the input, thereby adapting to the downstream task without adjusting or
fine-tuning the model parameters. There is a wide range of work in prompt
tuning, from approaches that directly harness the backpropagated gradient
signals from the model, to those employing black-box optimization such as
reinforcement learning (RL) methods. Our primary focus is on RLPrompt, which
aims to find optimal prompt tokens by leveraging soft Q-learning. While the
results show promise, we have observed that the prompts frequently appear
unnatural, which impedes their interpretability. We address this limitation by
using sparse Tsallis entropy regularization, a principled approach to filtering
out unlikely tokens from consideration. We extensively evaluate our approach
across various tasks, including few-shot text classification, unsupervised text
style transfer, and textual inversion from images. The results indicate a
notable improvement over baselines, highlighting the efficacy of our approach
in addressing the challenges of prompt tuning. Moreover, we show that the
prompts discovered using our method are more natural and interpretable compared
to those from other baselines.
DOI: 10.48550/arxiv.2407.14733
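
For readers unfamiliar with the regularizer named in the summary: with entropic index q = 2, Tsallis entropy regularization turns the softmax policy of standard soft Q-learning into a sparsemax policy whose support excludes low-scoring tokens. The sketch below is a minimal illustration of that mapping under this q = 2 assumption; the symbols Q, λ, and τ are generic placeholders chosen here for exposition, not notation taken from the paper.

```latex
% Sparse (q = 2) Tsallis entropy of a token policy \pi:
%   S_2(\pi) = (1/2) * (1 - \sum_a \pi(a)^2)
% Maximizing  E_\pi[Q(s,a)] + \lambda S_2(\pi)  over the probability simplex
% gives the sparsemax policy below, where \tau(s) is the unique threshold
% making the probabilities sum to one. Tokens with Q(s,a) < \lambda \tau(s)
% receive exactly zero probability, i.e. they are filtered out of the search.
\[
  S_2(\pi) = \frac{1}{2}\Bigl(1 - \sum_{a}\pi(a)^2\Bigr),
  \qquad
  \pi^*(a \mid s) = \Bigl[\frac{Q(s,a)}{\lambda} - \tau(s)\Bigr]_{+}.
\]
```

As λ shrinks, this policy collapses onto the highest-scoring token; a larger λ widens the support, so the prompt search can still explore, but only over tokens that retain nonzero probability.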