Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition
Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Multi-level implicit discourse relation recognition (MIDRR) aims to
identify hierarchical discourse relations among arguments. Previous methods
achieve improvements by fine-tuning pre-trained language models (PLMs).
However, due to data scarcity and the task gap, the pre-trained feature space
cannot be accurately tuned to the task-specific space, which even aggravates
the collapse of the vanilla feature space. Moreover, the hierarchical semantics
that MIDRR must capture make this conversion even harder. In this paper, we
propose a prompt-based Parameter-Efficient Multi-level IDRR (PEMI) framework to
solve the above problems. First, we leverage parameter-efficient prompt tuning
to steer the input arguments toward the pre-trained space, approximating the
task-specific space with only a few trainable parameters. Furthermore, we
propose a hierarchical label refining (HLR) method for the prompt verbalizer,
deeply integrating hierarchical guidance into prompt tuning. Finally, our model
achieves results comparable to strong baselines on PDTB 2.0 and 3.0 while using
only about 0.1% of their trainable parameters, and visualizations demonstrate
the effectiveness of our HLR method. |
DOI: | 10.48550/arxiv.2402.15080 |
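
The abstract describes two mechanisms: prompt tuning over a frozen PLM, where only soft prompt vectors (and the verbalizer) receive gradients, and a hierarchical label refining (HLR) verbalizer, where fine-grained label embeddings are aggregated upward to shape coarse-grained label representations. The PyTorch sketch below illustrates one plausible reading of that design, not the paper's implementation: the model name `roberta-base`, the prompt length, the 4 top-level / 11 second-level PDTB sense layout, all identifiers (`PEMISketch`, `parent_of`, `mask_pos`), and the mean-pooling refinement rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class PEMISketch(nn.Module):
    """Illustrative sketch (not the paper's code): a frozen PLM, trainable
    soft prompts, and a verbalizer whose top-level label vectors are
    refined from their second-level children."""

    def __init__(self, plm_name="roberta-base", prompt_len=20,
                 top_labels=4, leaf_labels=11, parent_of=None):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)
        for p in self.plm.parameters():
            p.requires_grad = False  # the PLM stays frozen
        hidden = self.plm.config.hidden_size
        # Trainable soft prompt prepended to each argument pair.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        # Trainable second-level ("leaf") label embeddings.
        self.leaf_embed = nn.Parameter(torch.randn(leaf_labels, hidden) * 0.02)
        # Hypothetical child -> parent map over the PDTB sense hierarchy.
        parent_of = parent_of or [i % top_labels for i in range(leaf_labels)]
        self.register_buffer("parent_of", torch.tensor(parent_of))
        self.top_labels = top_labels

    def forward(self, input_ids, attention_mask, mask_pos):
        # Embed the tokens and prepend the soft prompt vectors.
        emb = self.plm.get_input_embeddings()(input_ids)
        batch, plen = emb.size(0), self.prompt.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        pmask = attention_mask.new_ones(batch, plen)
        out = self.plm(
            inputs_embeds=torch.cat([prompt, emb], dim=1),
            attention_mask=torch.cat([pmask, attention_mask], dim=1))
        # Hidden state at the mask token, shifted by the prompt length.
        h = out.last_hidden_state[torch.arange(batch), mask_pos + plen]
        # Hierarchical label refining (one simple variant): each top-level
        # label vector is the mean of its children's embeddings, so
        # fine-grained supervision also shapes the coarse level.
        top = h.new_zeros(self.top_labels, h.size(-1))
        top = top.index_add(0, self.parent_of, self.leaf_embed)
        counts = torch.bincount(self.parent_of, minlength=self.top_labels)
        top = top / counts.clamp(min=1).unsqueeze(1)
        # Logits for the top-level and second-level senses.
        return h @ top.t(), h @ self.leaf_embed.t()
```

Under these assumptions only the prompt and label embeddings (a few tens of thousands of parameters against a ~125M-parameter encoder) receive gradients, which is the regime the abstract's "about 0.1% trainable parameters" figure refers to.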