Learning Session Dropout Prediction Model Based on Parameter-Efficient Prefix-Tuning

Bibliographic Details
Published in: Applied Sciences 2024-11, Vol. 14 (23), p. 10772
Main authors: Lu, Yuantong; Wang, Zhanquan
Format: Article
Language: eng
Online access: Full text
Description
Abstract: In response to the challenge of low predictive accuracy in scenarios with limited data, we propose a few-adjustable-parameters prediction model based on lightweight prefix-tuning (FAP-Prefix). Prefix-tuning is an efficient fine-tuning method that adjusts only the prefix vectors while keeping the model's original parameters frozen. In each transformer layer, the prefix vectors are concatenated with the layer's internal key-value pairs. By training on the concatenated sequence of prefix and original input with masked learning, the transformer model learns the features of individual learning behaviors and also discovers hidden connections among consecutive learning behaviors. During fine-tuning, all parameters of the pre-trained model are frozen, and the downstream task is learned by adjusting only the prefix parameters. The continuous, trainable prefix vectors influence subsequent vector representations, from which the session dropout prediction is generated. Experiments show that FAP-Prefix significantly outperforms traditional methods in data-limited settings, with AUC improvements of +4.58%, +3.53%, and +8.49% under 30%, 10%, and 1% data conditions, respectively. It also surpasses state-of-the-art models in overall prediction performance (AUC +5.42%, ACC +5.3%, F1 score +5.68%).
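
The abstract describes the core prefix-tuning mechanism: trainable prefix vectors are concatenated with each attention layer's keys and values while the pretrained weights stay frozen. The following is a minimal PyTorch sketch of that general mechanism under our own assumptions; class and parameter names such as PrefixAttention and prefix_len are invented for illustration and are not taken from the paper's implementation.

    # Minimal sketch of prefix-tuned self-attention (illustrative, not the
    # authors' code): trainable prefix key/value vectors are prepended to
    # each layer's keys and values; all pretrained weights are frozen.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PrefixAttention(nn.Module):
        def __init__(self, d_model: int, n_heads: int, prefix_len: int):
            super().__init__()
            self.n_heads = n_heads
            self.d_head = d_model // n_heads
            # Pretrained projections, frozen as in prefix-tuning.
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.o_proj = nn.Linear(d_model, d_model)
            for proj in (self.q_proj, self.k_proj, self.v_proj, self.o_proj):
                proj.weight.requires_grad = False
                proj.bias.requires_grad = False
            # The only trainable parameters: per-head prefix keys/values.
            self.prefix_k = nn.Parameter(0.02 * torch.randn(n_heads, prefix_len, self.d_head))
            self.prefix_v = nn.Parameter(0.02 * torch.randn(n_heads, prefix_len, self.d_head))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            B, T, _ = x.shape
            split = lambda t: t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
            q = split(self.q_proj(x))
            k = split(self.k_proj(x))
            v = split(self.v_proj(x))
            # Concatenate the trainable prefix with this layer's keys and
            # values, so every input position can attend to the prefix.
            pk = self.prefix_k.unsqueeze(0).expand(B, -1, -1, -1)
            pv = self.prefix_v.unsqueeze(0).expand(B, -1, -1, -1)
            k = torch.cat([pk, k], dim=2)
            v = torch.cat([pv, v], dim=2)
            out = F.scaled_dot_product_attention(q, k, v)
            return self.o_proj(out.transpose(1, 2).reshape(B, T, -1))

    # During fine-tuning, only the prefix vectors receive gradients:
    layer = PrefixAttention(d_model=128, n_heads=4, prefix_len=8)
    trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
    # trainable == ['prefix_k', 'prefix_v']

Because only prefix_k and prefix_v are updated, the number of adjustable parameters per layer is 2 * prefix_len * d_model, which is what makes this style of tuning attractive in the data-limited settings the abstract targets.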
ISSN: 2076-3417
DOI: 10.3390/app142310772