Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges
Saved in:
| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract:

Large Language Models (LLMs) have demonstrated impressive zero-shot performance on a wide range of NLP tasks, exhibiting the ability to reason and apply commonsense. A relevant application is using them to create high-quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating the data annotation process has the potential to save the large amounts of time, money, and effort that go into manually labelling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators on low-resource reading comprehension tasks, comparing downstream performance after fine-tuning as well as the cost of annotation. This work serves as the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting the unique opportunities and challenges. Additionally, we release augmented versions of low-resource datasets that will allow the research community to create further benchmarks for evaluating generated datasets.
DOI: 10.48550/arxiv.2309.12426
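
The augmentation step the abstract describes, prompting GPT-4 to produce extractive QA pairs in place of human annotators, could look roughly like the sketch below. This is a minimal illustration rather than the paper's actual pipeline: the prompt wording, the JSON output contract, and the `generate_qa_pairs` helper are assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed details, not the paper's pipeline): prompt GPT-4
# for SQuAD-style extractive QA pairs over a passage, then keep only pairs
# whose answers are verbatim spans of the passage.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_qa_pairs(passage: str, n_pairs: int = 3) -> list[dict]:
    """Ask GPT-4 for extractive QA pairs whose answers are exact spans."""
    prompt = (
        f"Read the passage below and write {n_pairs} question-answer pairs. "
        "Each answer must be an exact substring of the passage. "
        "Return only a JSON list of objects with keys 'question' and 'answer'.\n\n"
        f"Passage:\n{passage}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    # Assumes the model honored the JSON-only instruction; a real pipeline
    # would need to handle malformed output here.
    pairs = json.loads(response.choices[0].message.content)
    # Keep only genuinely extractive answers and record their character
    # offsets, matching the SQuAD annotation format used by extractive
    # reading comprehension datasets.
    return [
        {
            "question": p["question"],
            "answers": {
                "text": [p["answer"]],
                "answer_start": [passage.find(p["answer"])],
            },
        }
        for p in pairs
        if p.get("answer") and p["answer"] in passage
    ]
```

The verbatim-substring filter is the key check in such a setup: it discards hallucinated answers so that the synthetic examples remain compatible with fine-tuning an extractive reading comprehension model.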