AMR Parsing with Instruction Fine-tuned Pre-trained Language Models
Format: Article
Language: English
Abstract: Instruction fine-tuned language models trained on a collection of instruction-annotated datasets (FLAN) have proven highly effective at improving model performance and generalization to unseen tasks. However, a majority of standard parsing tasks, including abstract meaning representation (AMR), universal dependency (UD), and semantic role labeling (SRL), have been excluded from the FLAN collections for both model training and evaluation. In this paper, we take one such instruction fine-tuned pre-trained language model, FLAN-T5, and fine-tune it for AMR parsing. Our extensive experiments on various AMR parsing tasks, including AMR2.0, AMR3.0, and BioAMR, indicate that FLAN-T5 fine-tuned models outperform previous state-of-the-art models across all tasks. In addition, full fine-tuning followed by parameter-efficient fine-tuning with LoRA further improves model performance, setting new state-of-the-art Smatch scores on AMR2.0 (86.4), AMR3.0 (84.9), and BioAMR (82.3).
DOI: 10.48550/arxiv.2304.12272
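To make the second stage of the recipe in the abstract concrete, the sketch below wraps a FLAN-T5 checkpoint with LoRA adapters via the Hugging Face `peft` library and treats AMR parsing as sequence-to-sequence generation (sentence in, linearized PENMAN graph out). The checkpoint size, LoRA rank, target modules, and the toy training pair are illustrative assumptions, not the paper's reported configuration, and the full fine-tuning stage that precedes LoRA in the paper is omitted here.

```python
# Hedged sketch: LoRA fine-tuning of FLAN-T5 for seq2seq AMR parsing.
# Hyperparameters and the example graph are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

model_name = "google/flan-t5-large"  # assumed checkpoint size
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA configuration: low-rank adapters on the T5 attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # query/value projection modules in T5
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# One toy training pair: input sentence and a linearized (PENMAN) AMR graph.
sentence = "The boy wants to go."
linearized_amr = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 ( boy ) ) )"

inputs = tokenizer(sentence, return_tensors="pt")
labels = tokenizer(linearized_amr, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()
```

In practice the same tokenized sentence/graph pairs would be fed through a standard seq2seq training loop (e.g. `Seq2SeqTrainer`), and at inference the generated token sequence would be de-linearized back into an AMR graph before Smatch scoring.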