ExtraPhrase: Efficient Data Augmentation for Abstractive Summarization

Bibliographic Details
Published in: Journal of Natural Language Processing, 2023, Vol. 30(2), pp. 489-506
Authors: Loem, Mengsay; Takase, Sho; Kaneko, Masahiro; Okazaki, Naoaki
Format: Article
Language: English; Japanese
Online access: Full text
Description
Abstract: Neural models trained on large amounts of parallel data have achieved impressive performance in abstractive summarization tasks. However, constructing large-scale parallel corpora can be expensive and challenging. In this work, we introduce ExtraPhrase, a low-cost and effective strategy for augmenting training data for abstractive summarization. ExtraPhrase constructs pseudo training data with two modules: sentence compression and paraphrasing. We extract the major parts of an input text with sentence compression and obtain diverse expressions of the result with paraphrasing. Through experiments, we show that ExtraPhrase improves the performance of abstractive summarization by more than 0.50 ROUGE points over a setting without data augmentation, and that it outperforms existing methods such as back-translation and self-training. ExtraPhrase is especially effective when the amount of genuine training data is very small, i.e., in low-resource settings. Moreover, ExtraPhrase is more cost-efficient than existing approaches.
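
The abstract sketches a two-stage pipeline: sentence compression first extracts the major parts of an input text, and paraphrasing then diversifies the compressed output to form pseudo summaries. The Python sketch below illustrates only that pipeline shape; the compress and paraphrase functions are hypothetical toy placeholders and do not reproduce the paper's actual modules.

from typing import List

def compress(sentence: str, keep_ratio: float = 0.6) -> str:
    """Toy stand-in for sentence compression.

    Keeps a leading fraction of the tokens; the paper's module instead
    identifies and retains the major parts of the sentence.
    """
    tokens = sentence.split()
    cutoff = max(1, int(len(tokens) * keep_ratio))
    return " ".join(tokens[:cutoff])

def paraphrase(sentence: str) -> List[str]:
    """Toy stand-in for paraphrasing.

    Applies a few lexical substitutions to produce surface variants;
    the paper's module generates genuinely diverse rewordings.
    """
    substitutions = {"large": "big", "impressive": "strong"}
    variants = [sentence]
    for old, new in substitutions.items():
        if old in sentence:
            variants.append(sentence.replace(old, new))
    return variants

def extraphrase(source: str) -> List[str]:
    """Compress the source, then paraphrase the compression to obtain
    pseudo summaries for (source, pseudo-summary) training pairs."""
    return paraphrase(compress(source))

if __name__ == "__main__":
    doc = ("Neural models trained on large amounts of parallel data "
           "have achieved impressive performance in summarization.")
    for pseudo_summary in extraphrase(doc):
        print(pseudo_summary)

Each (source, pseudo summary) pair produced this way would be mixed into the genuine training data; the abstract reports gains of more than 0.50 ROUGE points from this augmentation.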
ISSN: 1340-7619 (print); 2185-8314 (online)
DOI: 10.5715/jnlp.30.489