Enhancing performance of transformer-based models in natural language understanding through word importance embedding
Published in: Knowledge-Based Systems, 2024-11, Vol. 304, p. 112404, Article 112404
Main Authors: , ,
Format: Article
Language: English
Subjects:
Online Access: Full text
Summary: Transformer-based models have achieved state-of-the-art performance on natural language understanding (NLU) tasks by learning important token relationships through the attention mechanism. However, we observe that attention can become overly distributed during fine-tuning, failing to adequately preserve the dependencies between meaningful tokens. This phenomenon negatively affects the learning of token relationships in sentences. To overcome this issue, we propose a methodology that embeds word importance (WI) into transformer-based models as a new layer that weights words according to their importance. Our simple yet powerful approach offers a general technique for boosting transformer model capabilities on NLU tasks by mitigating the risk of attention dispersion during fine-tuning. Through extensive experiments on the GLUE, SuperGLUE, and SQuAD benchmarks for pre-trained models (BERT, RoBERTa, ELECTRA, and DeBERTa), and on the MMLU, BIG-Bench Hard, and DROP benchmarks for the large language model Llama 2, we validate that our method consistently enhances performance across models with negligible overhead. Furthermore, we show that the WI layer preserves the dependencies between important tokens better than standard fine-tuning by introducing a model that classifies dependent tokens from the learned attention weights. The code is available at https://github.com/bigbases/WordImportance.
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2024.112404
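
The summary describes the core idea only at a high level: a new layer that weights tokens by a learned importance score so that attention does not disperse away from meaningful tokens during fine-tuning. Below is a minimal PyTorch sketch of what such a word-importance (WI) weighting layer could look like. The class name `WordImportanceLayer`, the linear scorer, the softmax weighting, and the residual combination are all illustrative assumptions, not the authors' implementation, which is available in the linked repository.

```python
# Minimal sketch of a word-importance (WI) weighting layer, assuming the
# layer scales token representations by a learned per-token importance score.
from typing import Optional

import torch
import torch.nn as nn


class WordImportanceLayer(nn.Module):
    """Hypothetical layer that re-weights token representations by importance."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One scalar importance score per token, learned during fine-tuning.
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(
        self,
        hidden_states: torch.Tensor,               # (batch, seq_len, hidden_size)
        attention_mask: Optional[torch.Tensor] = None,  # (batch, seq_len), 1 = real token
    ) -> torch.Tensor:
        scores = self.scorer(hidden_states).squeeze(-1)   # (batch, seq_len)
        if attention_mask is not None:
            # Exclude padding tokens from the importance distribution.
            scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # (batch, seq_len, 1)
        # Residual combination: unimportant tokens are down-weighted
        # rather than erased from the representation.
        return hidden_states + hidden_states * weights
```

Under these assumptions, the layer would sit between a pre-trained encoder's embedding output and its first transformer block during fine-tuning, adding only a single linear projection's worth of parameters, which is consistent with the negligible overhead reported in the summary.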