Efficient Few-Shot Fine-Tuning for Opinion Summarization
Saved in:

| Main author(s): | , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subject terms: | |
| Online access: | Order full text |
Abstract:

Abstractive summarization models are typically pre-trained on large amounts
of generic texts, then fine-tuned on tens or hundreds of thousands of annotated
samples. However, in opinion summarization, large annotated datasets of reviews
paired with reference summaries are not available and would be expensive to
create. This calls for fine-tuning methods robust to overfitting on small
datasets. In addition, generically pre-trained models are often not accustomed
to the specifics of customer reviews and, after fine-tuning, yield summaries
with disfluencies and semantic mistakes. To address these problems, we utilize
an efficient few-shot method based on adapters which, as we show, can easily
store in-domain knowledge. Instead of fine-tuning the entire model, we add
adapters and pre-train them in a task-specific way on a large corpus of
unannotated customer reviews, using held-out reviews as pseudo summaries. Then,
we fine-tune the adapters on the small available human-annotated dataset. We show
that this self-supervised adapter pre-training improves summary quality over
standard fine-tuning by 2.0 and 1.3 ROUGE-L points on the Amazon and Yelp
datasets, respectively. Finally, for summary personalization, we condition on
aspect keyword queries, automatically created from generic datasets. In the
same vein, we pre-train the adapters in a query-based manner on customer
reviews and then fine-tune them on annotated datasets. This results in
better-organized summary content reflected in improved coherence and fewer
redundancies.
DOI: 10.48550/arxiv.2205.02170
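
The abstract describes two core ingredients: bottleneck adapters added to an otherwise frozen pre-trained summarizer, and a leave-one-out self-supervision scheme in which a held-out review serves as the pseudo summary for the remaining reviews of the same product. The sketch below illustrates both ideas in PyTorch; the module names, bottleneck size, and pairing heuristic are illustrative assumptions, not the paper's exact implementation.

```python
import random
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection. Sizes here are illustrative assumptions."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Zero-init the up-projection so the adapter starts as the
        # identity and the pre-trained behavior is preserved initially.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedSublayer(nn.Module):
    """Wraps a frozen pre-trained sublayer (e.g. a transformer
    feed-forward block) so only the adapter parameters are updated."""
    def __init__(self, sublayer: nn.Module, hidden_size: int):
        super().__init__()
        self.sublayer = sublayer
        self.adapter = Adapter(hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.sublayer(x))

def freeze_all_but_adapters(model: nn.Module) -> None:
    """Train only parameters whose names mark them as adapter weights."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name

def leave_one_out_pairs(reviews_by_product: dict) -> list:
    """Build self-supervised (input reviews, pseudo summary) pairs:
    one held-out review per product is the target, the rest the input."""
    pairs = []
    for reviews in reviews_by_product.values():
        if len(reviews) < 2:
            continue
        idx = random.randrange(len(reviews))
        pairs.append((reviews[:idx] + reviews[idx + 1:], reviews[idx]))
    return pairs
```

Pre-training the adapters on such pseudo pairs and then fine-tuning only those same parameters on the few annotated examples leaves the base model untouched, which is what makes the approach robust to overfitting on small datasets.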