EDDA: An Encoder-Decoder Data Augmentation Framework for Zero-Shot Stance Detection
Format: Article
Language: English
Online access: Order full text
Abstract: Stance detection aims to determine the attitude expressed in a text towards a given target. Zero-shot stance detection (ZSSD) has emerged to classify stances towards targets unseen during inference. Recent data augmentation techniques for ZSSD increase transferable knowledge between targets through text or target augmentation. However, these methods exhibit limitations: target augmentation lacks logical connections between the generated targets and the source text, while text augmentation relies solely on the training data, resulting in insufficient generalization. To address these issues, we propose an encoder-decoder data augmentation (EDDA) framework. The encoder leverages large language models and chain-of-thought prompting to summarize texts into target-specific if-then rationales, establishing logical relationships. The decoder generates new samples from these expressions using a semantic-correlation word replacement strategy to increase syntactic diversity. We also analyze the generated expressions to develop a rationale-enhanced network that fully utilizes the augmented data. Experiments on benchmark datasets demonstrate that our approach substantially improves over state-of-the-art ZSSD techniques. The proposed EDDA framework increases semantic relevance and syntactic variety in augmented texts while enabling interpretable rationale-based learning.
DOI: 10.48550/arxiv.2403.15715
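The abstract describes a two-step pipeline: an encoder that uses chain-of-thought prompting to turn each training text into a target-specific if-then rationale, and a decoder that perturbs that rationale with semantically correlated word swaps before regenerating a new sample. The Python sketch below stitches these steps together at a high level. It is a minimal illustration under stated assumptions, not the authors' implementation: the prompt wording, the `call_llm` stand-in for an LLM client, the toy synonym table, and the replacement probability are all hypothetical.

```python
# Illustrative sketch of an EDDA-style augmentation loop, as summarized in the
# abstract. Prompts, function names, and the word-replacement heuristic are
# assumptions for illustration; `call_llm` is a stand-in for any text-generation client.

import random
from typing import Callable, Dict, List

ENCODER_PROMPT = (
    "Read the text and target, reason step by step (chain of thought), and "
    "summarize the text as a target-specific if-then rationale.\n"
    "Text: {text}\nTarget: {target}\n"
    "Answer in the form: IF <evidence in the text> THEN <stance toward the target>."
)

DECODER_PROMPT = (
    "Write a new, naturally phrased text that expresses the same if-then "
    "rationale toward the target.\nRationale: {rationale}\nTarget: {target}"
)


def encode_rationale(call_llm: Callable[[str], str], text: str, target: str) -> str:
    """Encoder step: summarize a training text into a target-specific if-then rationale."""
    return call_llm(ENCODER_PROMPT.format(text=text, target=target))


def replace_correlated_words(rationale: str,
                             synonyms: Dict[str, List[str]],
                             p: float = 0.3) -> str:
    """Toy stand-in for semantic-correlation word replacement to add syntactic diversity."""
    out = []
    for tok in rationale.split():
        key = tok.lower().strip(".,")
        if key in synonyms and random.random() < p:
            out.append(random.choice(synonyms[key]))
        else:
            out.append(tok)
    return " ".join(out)


def decode_sample(call_llm: Callable[[str], str], rationale: str, target: str,
                  synonyms: Dict[str, List[str]]) -> Dict[str, str]:
    """Decoder step: perturb the rationale, then generate a new sample from it."""
    perturbed = replace_correlated_words(rationale, synonyms)
    new_text = call_llm(DECODER_PROMPT.format(rationale=perturbed, target=target))
    return {"text": new_text, "target": target, "rationale": perturbed}
```

In practice a caller would supply `call_llm` as a thin wrapper around whatever generation client is available, plus a small synonym dictionary mined from the training corpus; the augmented samples and their rationales would then feed the rationale-enhanced network mentioned in the abstract.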