Few-Shot Table-to-Text Generation with Prompt Planning and Knowledge Memorization
Saved in:

Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Pre-trained language models (PLMs) have achieved remarkable advances in
table-to-text generation. However, the lack of labeled domain-specific knowledge
and the topology gap between tabular data and text make it difficult for PLMs to
yield faithful text, and low-resource generation faces further challenges in this
domain. Inspired by how humans describe tabular data with prior knowledge, we
propose a new framework, PromptMize, which targets table-to-text generation under
few-shot settings. The framework consists of two components: a prompt planner and
a knowledge adapter. The prompt planner generates a prompt signal that provides
instance-level guidance for PLMs, bridging the topology gap between tabular data
and text. The knowledge adapter memorizes domain-specific knowledge from an
unlabelled corpus to supply essential information during generation. Extensive
experiments and analyses are conducted on three open-domain few-shot NLG datasets:
human, song, and book. Compared with previous state-of-the-art approaches, our
model achieves remarkable generation quality as judged by both human and automatic
evaluations.
DOI: 10.48550/arxiv.2302.04415
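To make the two-component design described in the summary more concrete, below is a minimal, hypothetical sketch of how such a pipeline might be wired: a prompt planner that linearizes a table into an instance-level prompt, and a knowledge adapter approximated here by simple token-overlap retrieval over an unlabelled domain corpus. All function names, the retrieval heuristic, and the example data are assumptions for illustration; they are not taken from the paper's implementation.

```python
# Illustrative sketch (not the paper's code): plan a prompt from a table,
# retrieve supporting domain knowledge, and assemble the input for a PLM.
from collections import Counter
from typing import Dict, List


def plan_prompt(table: Dict[str, str]) -> str:
    """Linearize attribute-value pairs into a textual prompt for the PLM."""
    cells = [f"{attr} : {value}" for attr, value in table.items()]
    return "Describe the entity given the table. " + " | ".join(cells)


def retrieve_knowledge(table: Dict[str, str], corpus: List[str], k: int = 2) -> List[str]:
    """Return the k corpus sentences with the highest token overlap with the table.

    A crude stand-in for the knowledge adapter's memorized domain knowledge.
    """
    table_tokens = Counter(
        tok.lower() for value in table.values() for tok in value.split()
    )

    def overlap(sentence: str) -> int:
        return sum(table_tokens[tok.lower()] for tok in sentence.split())

    return sorted(corpus, key=overlap, reverse=True)[:k]


def build_model_input(table: Dict[str, str], corpus: List[str]) -> str:
    """Concatenate the planned prompt with retrieved knowledge for the PLM."""
    prompt = plan_prompt(table)
    knowledge = retrieve_knowledge(table, corpus)
    return prompt + " [KNOWLEDGE] " + " ".join(knowledge)


if __name__ == "__main__":
    # Hypothetical example in the spirit of the "human" domain dataset.
    table = {"name": "Ada Lovelace", "occupation": "mathematician", "born": "1815"}
    corpus = [
        "Ada Lovelace worked with Charles Babbage on the Analytical Engine.",
        "The Analytical Engine was a proposed mechanical computer.",
        "London is the capital of the United Kingdom.",
    ]
    # The resulting string would then be passed to a pre-trained seq2seq
    # language model to generate the final description.
    print(build_model_input(table, corpus))
```

The sketch only shows how instance-level prompting and external knowledge could be combined into a single model input; the paper's actual prompt planner and knowledge adapter are learned components, for which the publication itself should be consulted.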