Bidirectional Language Models Are Also Few-shot Learners
Format: Article
Language: English
Online access: Order full text
Abstract: Large language models such as GPT-3 (Brown et al., 2020) can perform
arbitrary tasks without undergoing fine-tuning after being prompted with only a
few labeled examples. An arbitrary task can be reformulated as a natural
language prompt, and a language model can be asked to generate the completion,
indirectly performing the task in a paradigm known as prompt-based learning. To
date, emergent prompt-based learning capabilities have mainly been demonstrated
for unidirectional language models. However, bidirectional language models
pre-trained on denoising objectives such as masked language modeling produce
stronger learned representations for transfer learning. This motivates the
possibility of prompting bidirectional models, but their pre-training
objectives have made them largely incompatible with the existing prompting
paradigm. We present SAP (Sequential Autoregressive Prompting), a technique
that enables the prompting of bidirectional models. Utilizing the machine
translation task as a case study, we prompt the bidirectional mT5 model (Xue et
al., 2021) with SAP and demonstrate its few-shot and zero-shot translations
outperform the few-shot translations of unidirectional models like GPT-3 and
XGLM (Lin et al., 2021), despite mT5 having approximately 50% fewer parameters. We
further show SAP is effective on question answering and summarization. For the
first time, our results demonstrate prompt-based learning is an emergent
property of a broader class of language models, rather than only unidirectional
models.
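The abstract describes SAP only at a high level, so the following is a minimal sketch of what sequential autoregressive prompting of a bidirectional, span-denoising model could look like in practice: the model is repeatedly re-prompted with its own partial output, so generation proceeds left to right even though the model was not pre-trained as a left-to-right language model. The checkpoint name (`google/mt5-large`), prompt wording, chunk size, and stopping rule here are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of sequential autoregressive prompting (SAP) with an mT5 checkpoint.
# Assumptions: HuggingFace transformers is installed; prompt format, chunk size,
# and stopping rule are illustrative and may differ from the paper's setup.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_name = "google/mt5-large"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)

def sap_generate(prompt: str, max_rounds: int = 8, tokens_per_round: int = 8) -> str:
    """Extend a completion by repeatedly asking the model to fill the sentinel
    that follows the text generated so far."""
    completion = ""
    for _ in range(max_rounds):
        text = prompt + completion + " <extra_id_0>"
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=tokens_per_round)
        raw = tokenizer.decode(outputs[0], skip_special_tokens=False)
        # Keep only the span proposed for <extra_id_0>, dropping special tokens.
        chunk = raw.split("<extra_id_0>")[-1].split("<extra_id_1>")[0]
        chunk = chunk.replace("<pad>", "").replace("</s>", "").strip()
        if not chunk:
            break  # nothing new was generated; stop
        completion += " " + chunk
    return completion.strip()

# Few-shot machine-translation prompt in the style described in the abstract.
prompt = (
    "Translate English to German.\n"
    "English: The cat sleeps. German: Die Katze schläft.\n"
    "English: I like tea. German:"
)
print(sap_generate(prompt))
```

Each round exploits the sentinel-filling behavior that mT5 learned from its span-corruption objective, which is what lets a denoising model be used for open-ended, prompt-based generation in this sketch.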
DOI: 10.48550/arxiv.2209.14500