Improving Summarization with Human Edits
Format: Article
Language: English
Abstract: Recent work has shown the promise of learning-with-human-feedback paradigms for producing text that humans judge to be of high quality. Existing work uses human feedback to train large language models (LLMs) for general-domain abstractive summarization and has obtained summary quality exceeding traditional likelihood training. In this paper, we focus on a less explored form of human feedback: Human Edits. We propose Sequence Alignment (un)Likelihood Training (SALT), a novel technique that uses both the human-edited and model-generated data together in the training loop. In addition, we demonstrate simulating Human Edits with ground-truth summaries drawn from existing training data (Imitation Edits), together with the model-generated summaries obtained after training, to reduce the need for expensive human-edit data. In our experiments, we extend human feedback exploration from general-domain summarization to medical-domain summarization. Our results demonstrate the effectiveness of SALT in improving summary quality with both Human and Imitation Edits. Through additional experiments, we show that SALT outperforms DPO, a conventional RLHF method designed for human preferences, when applied to human-edit data. We hope the evidence in our paper prompts researchers to explore, collect, and better use different human feedback approaches at scale.
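The abstract does not give the SALT objective in full, but its core idea, aligning a model-generated summary with its human-edited version and then rewarding tokens the editor kept or added while penalizing tokens the editor removed, can be sketched as follows. This is a minimal, hypothetical Python/PyTorch illustration of sequence-alignment (un)likelihood training; the alignment via difflib, the function name salt_loss, and the loss weighting are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of SALT-style sequence-alignment (un)likelihood training.
# Alignment step and loss weighting are illustrative assumptions only.
import difflib

import torch
import torch.nn.functional as F


def salt_loss(logits, model_ids, edited_ids):
    """logits: (len(edited_ids), vocab_size) decoder scores for the edited summary.
    model_ids / edited_ids: token-id lists for the model-generated and
    human-edited summaries."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Align the model-generated summary with the human-edited one.
    ops = difflib.SequenceMatcher(a=model_ids, b=edited_ids).get_opcodes()

    likelihood_terms, unlikelihood_terms = [], []
    for tag, i1, i2, j1, j2 in ops:
        if tag in ("equal", "insert", "replace"):
            # Tokens the editor kept or added: standard likelihood training.
            for j in range(j1, j2):
                likelihood_terms.append(-log_probs[j, edited_ids[j]])
        if tag in ("delete", "replace"):
            # Tokens the editor removed: unlikelihood penalty, applied at the
            # nearest position of the edited sequence.
            pos = min(j1, len(edited_ids) - 1)
            for i in range(i1, i2):
                unlikelihood_terms.append(
                    -torch.log(1.0 - probs[pos, model_ids[i]] + 1e-8)
                )

    loss = torch.stack(likelihood_terms).mean()
    if unlikelihood_terms:
        loss = loss + torch.stack(unlikelihood_terms).mean()
    return loss
```

Under the same assumptions, the Imitation Edits described in the abstract could reuse this loss by treating an existing ground-truth summary as the "edited" version of the model's own output, i.e., passing reference-summary token ids as edited_ids.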
DOI: 10.48550/arxiv.2310.05857