CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization
Format: Article
Language: English
Abstract: Generating factually consistent summaries is a challenging task for abstractive
summarization. Previous works mainly encode factual information or perform
post-correction/reranking after decoding. In this paper, we provide a
factual-consistency solution from the perspective of contrastive learning, which
is a natural extension of previous works. We propose CO2Sum (Contrastive for
Consistency), a contrastive learning scheme that can be easily applied to
sequence-to-sequence models for factual-consistent abstractive summarization,
showing that the model can be made fact-aware without modifying its architecture.
CO2Sum applies contrastive learning on the encoder, which helps the model become
aware of the factual information contained in the input article, or performs
contrastive learning on the decoder, which encourages the model to generate
factually correct output summaries. Moreover, these two schemes are orthogonal
and can be combined to further improve faithfulness. Comprehensive experiments
on public benchmarks demonstrate that CO2Sum improves the faithfulness of large
pre-trained language models and achieves competitive results compared to other
strong factually consistent summarization baselines.
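The abstract only outlines the idea, so the following is a minimal, hedged sketch of what a decoder-side contrastive objective for faithfulness can look like in PyTorch. It is not the paper's exact CO2Sum loss: the margin formulation, the `sequence_log_prob` helper, and the assumption that fact-corrupted negative summaries are produced by some external corruption step (e.g., entity swapping) are illustrative choices, and a HuggingFace-style seq2seq interface is assumed.

```python
# Hedged sketch: a generic decoder-side contrastive loss for faithfulness.
# NOT the paper's exact CO2Sum objective; negatives (fact-corrupted summaries)
# are assumed to be constructed elsewhere, e.g. by swapping entities.
import torch
import torch.nn.functional as F

def sequence_log_prob(model, input_ids, attention_mask, labels, pad_id):
    """Length-normalized log-likelihood of `labels` given the source article."""
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
    logp = F.log_softmax(out.logits, dim=-1)                 # (batch, seq, vocab)
    token_logp = logp.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    mask = (labels != pad_id).float()
    return (token_logp * mask).sum(-1) / mask.sum(-1).clamp(min=1)

def decoder_contrastive_loss(model, src, src_mask, pos_summary, neg_summary,
                             pad_id, margin=1.0):
    """Hinge loss: the gold summary should score higher than a corrupted one."""
    pos = sequence_log_prob(model, src, src_mask, pos_summary, pad_id)
    neg = sequence_log_prob(model, src, src_mask, neg_summary, pad_id)
    return torch.clamp(margin - (pos - neg), min=0).mean()
```

In practice such a term would be added to the usual cross-entropy loss during fine-tuning; an analogous encoder-side variant would instead contrast representations of the original article against fact-corrupted versions of it.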
DOI: 10.48550/arxiv.2112.01147