Learning Cluster Patterns for Abstractive Summarization

Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 146065-146075
Main Authors: Jo, Sung-Guk; Park, Seung-Hyeok; Kim, Jeong-Jae; On, Byung-Won
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Pre-trained sequence-to-sequence models such as PEGASUS and BART have shown state-of-the-art results in abstractive summarization. In these models, during fine-tuning, the encoder transforms sentences into context vectors in the latent space and the decoder learns the summary generation task based on those context vectors. In our approach, we consider two clusters of salient and non-salient context vectors, so that the decoder can attend more strongly to the salient context vectors when generating the summary. To this end, we propose a novel cluster generator layer between the encoder and the decoder, which first generates the two clusters of salient and non-salient vectors, and then normalizes and shrinks the clusters to push them apart in the latent space. Our experimental results show that, by learning these distinct cluster patterns, the proposed model outperforms state-of-the-art models such as BART and PEGASUS, improving ROUGE by 2~30% and BERTScore by 0.1~0.8% on the CNN/DailyMail and XSUM data sets.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3346911
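
The abstract above describes a cluster generator layer that splits the encoder's context vectors into salient and non-salient clusters, tightens each cluster, and pushes the two apart in the latent space. Below is a minimal, hypothetical PyTorch sketch of such a layer; it is not the authors' implementation, and all names and hyperparameters (ClusterGeneratorLayer, the saliency gate, shrink, margin) are assumptions made for illustration only.

# Illustrative sketch (not the authors' code): a cluster-generator-style layer
# that scores each encoder context vector as salient or non-salient, shrinks
# each soft cluster toward its centroid, and adds a loss that keeps the two
# centroids apart in the latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusterGeneratorLayer(nn.Module):
    def __init__(self, hidden_size: int, shrink: float = 0.5, margin: float = 1.0):
        super().__init__()
        self.saliency = nn.Linear(hidden_size, 1)  # scores each context vector
        self.shrink = shrink                       # how far vectors move toward their centroid
        self.margin = margin                       # desired separation between centroids

    def forward(self, enc_states: torch.Tensor):
        # enc_states: (batch, seq_len, hidden_size) encoder outputs
        gate = torch.sigmoid(self.saliency(enc_states))  # (batch, seq_len, 1), soft saliency

        # Soft centroids of the salient and non-salient clusters.
        salient_centroid = (gate * enc_states).sum(1) / gate.sum(1).clamp_min(1e-6)
        nonsal_centroid = ((1 - gate) * enc_states).sum(1) / (1 - gate).sum(1).clamp_min(1e-6)

        # Shrink each vector toward its (soft) cluster centroid, tightening both clusters.
        target = gate * salient_centroid.unsqueeze(1) + (1 - gate) * nonsal_centroid.unsqueeze(1)
        new_states = (1 - self.shrink) * enc_states + self.shrink * target

        # Hinge loss that encourages the two centroids to stay at least `margin` apart.
        dist = F.pairwise_distance(salient_centroid, nonsal_centroid)
        sep_loss = F.relu(self.margin - dist).mean()

        return new_states, gate, sep_loss


# Usage sketch: the layer would sit between a seq2seq encoder and decoder.
layer = ClusterGeneratorLayer(hidden_size=768)
enc = torch.randn(2, 128, 768)          # stand-in for encoder outputs
states, gate, sep_loss = layer(enc)
# `states` would feed the decoder's cross-attention in place of the raw encoder
# outputs, and `sep_loss` would be added to the usual summarization cross-entropy.

In this reading, the decoder attends more strongly to salient context vectors simply because the non-salient ones are pulled toward a distant centroid; the paper's actual normalization and shrinking steps may differ from this sketch.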