Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt
Main Authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Continual pre-training has become urgent for adapting a pre-trained model to the multitude of domains and tasks that emerge in a fast-evolving world. In practice, a continually pre-trained model is expected to demonstrate not only greater capacity when fine-tuned on pre-trained domains but also non-decreasing performance on unseen ones. In this work, we first investigate the anytime fine-tuning effectiveness of existing continual pre-training approaches and conclude that all of them suffer decreased performance on unseen domains. To this end, we propose a prompt-guided continual pre-training method, in which we train a hypernetwork to generate domain-specific prompts under both an agreement loss and a disagreement loss. The agreement loss maximally preserves the generalization of the pre-trained model to new domains, while the disagreement loss guards the exclusiveness of the hidden states generated for each domain. Remarkably, prompts produced by the hypernetwork alleviate the need for domain identity when fine-tuning and promote knowledge transfer across domains. Our method achieves improvements of 3.57% and 3.4% on two real-world datasets (one with domain shift, one with temporal shift), respectively, demonstrating its efficacy.
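To make the two training signals concrete, below is a minimal PyTorch sketch of how a hypernetwork-generated prompt could be trained with agreement and disagreement losses. The module names, mean pooling, MSE as the agreement loss, pairwise cosine similarity as the disagreement loss, and the 0.1 trade-off weight are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, prompt_len, n_domains = 64, 4, 3

class PromptHypernet(nn.Module):
    """Maps a learned domain embedding to a soft prompt (hypothetical design)."""
    def __init__(self):
        super().__init__()
        self.domain_emb = nn.Embedding(n_domains, d_model)
        self.to_prompt = nn.Linear(d_model, prompt_len * d_model)

    def forward(self, domain_id):                       # domain_id: (B,)
        p = self.to_prompt(self.domain_emb(domain_id))  # (B, prompt_len*d_model)
        return p.view(-1, prompt_len, d_model)          # (B, prompt_len, d_model)

# Stand-ins for the continually pre-trained encoder and a frozen copy of the
# original pre-trained model, used as the reference for the agreement loss.
layer = lambda: nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer(), num_layers=2)
frozen = nn.TransformerEncoder(layer(), num_layers=2)
frozen.load_state_dict(encoder.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

hypernet = PromptHypernet()

def pooled_hidden(enc, x, prompt=None):
    if prompt is not None:
        x = torch.cat([prompt, x], dim=1)  # prepend soft-prompt tokens
    return enc(x).mean(dim=1)              # mean-pooled hidden state

x = torch.randn(8, 16, d_model)          # toy batch of token embeddings
dom = torch.randint(0, n_domains, (8,))  # domain id of each sequence

h_prompted = pooled_hidden(encoder, x, hypernet(dom))
h_reference = pooled_hidden(frozen, x)

# Agreement: keep prompted hidden states close to those of the frozen
# pre-trained model, preserving its generalization to new domains.
agreement = F.mse_loss(h_prompted, h_reference)

# Disagreement: hidden states produced under different domain prompts should
# stay distinguishable; here, penalize their pairwise cosine similarity.
sim = F.cosine_similarity(h_prompted.unsqueeze(1), h_prompted.unsqueeze(0), dim=-1)
cross_domain = dom.unsqueeze(1) != dom.unsqueeze(0)
disagreement = sim[cross_domain].mean()

loss = agreement + 0.1 * disagreement  # arbitrary trade-off weight
loss.backward()
```

Using a frozen copy of the pre-trained model as the agreement target mirrors the stated goal of preserving generalization, while the disagreement term keeps per-domain representations separable.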
DOI: 10.48550/arxiv.2310.13024