Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges

Large Language Models (LLMs) have demonstrated impressive zero-shot performance on a wide range of NLP tasks, showing an ability to reason and apply common sense. A natural application is to use them to create high-quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating the data annotation process has the potential to save the large amounts of time, money, and effort that go into manually labelling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators on low-resource reading comprehension tasks, comparing both performance after fine-tuning and the cost of annotation. This work is the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting their unique opportunities and challenges. Additionally, we release augmented versions of low-resource datasets that will allow the research community to create further benchmarks for evaluating generated datasets.
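The abstract describes prompting GPT-4 to generate synthetic question-answer pairs for extractive reading comprehension, where each answer must be a span of the source passage. The sketch below illustrates one way such a pipeline could look; it is a hypothetical illustration, not the paper's actual prompt or method, and the prompt wording, function names, and the canned model reply standing in for a real GPT-4 call are all assumptions.

```python
import json

def build_prompt(passage, n_questions=3):
    """Construct a hypothetical augmentation prompt asking the model for
    extractive QA pairs whose answers are verbatim spans of the passage."""
    return (
        f"Read the passage below and write {n_questions} question-answer pairs. "
        'Each answer must be an exact substring of the passage. '
        'Reply as a JSON list of {"question": ..., "answer": ...} objects.\n\n'
        f"Passage: {passage}"
    )

def parse_and_filter(passage, model_reply):
    """Parse the model's JSON reply and keep only pairs whose answer really
    occurs in the passage -- a cheap guard against hallucinated spans."""
    kept = []
    for pair in json.loads(model_reply):
        answer = pair.get("answer", "")
        start = passage.find(answer)
        if answer and start != -1:
            # SQuAD-style record: answer text plus character offset.
            kept.append({"question": pair["question"],
                         "answers": [{"text": answer, "answer_start": start}]})
    return kept

# Demo with a canned reply standing in for a real GPT-4 API call.
passage = "Marie Curie discovered polonium in 1898 while working in Paris."
reply = json.dumps([
    {"question": "What did Marie Curie discover?", "answer": "polonium"},
    {"question": "Where was she working?", "answer": "London"},  # hallucinated span
])
examples = parse_and_filter(passage, reply)
print(len(examples))  # the hallucinated pair is filtered out, leaving 1
```

The span-containment filter matters for extractive QA: fine-tuning expects every answer to be locatable in the context, so generated pairs that fail `passage.find` are discarded rather than patched.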

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2024-07
Main authors: Samuel, Vinay; Aynaou, Houda; Arijit Ghosh Chowdhury; Ramanan, Karthik Venkat; Chadha, Aman
Format: Article
Language: eng
Subjects:
Online access: Full text
creator Samuel, Vinay; Aynaou, Houda; Arijit Ghosh Chowdhury; Ramanan, Karthik Venkat; Chadha, Aman
description Large Language Models (LLMs) have demonstrated impressive zero-shot performance on a wide range of NLP tasks, showing an ability to reason and apply common sense. A natural application is to use them to create high-quality synthetic datasets for downstream tasks. In this work, we probe whether GPT-4 can be used to augment existing extractive reading comprehension datasets. Automating the data annotation process has the potential to save the large amounts of time, money, and effort that go into manually labelling datasets. In this paper, we evaluate the performance of GPT-4 as a replacement for human annotators on low-resource reading comprehension tasks, comparing both performance after fine-tuning and the cost of annotation. This work is the first analysis of LLMs as synthetic data augmenters for QA systems, highlighting their unique opportunities and challenges. Additionally, we release augmented versions of low-resource datasets that will allow the research community to create further benchmarks for evaluating generated datasets.
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-07
language eng
recordid cdi_proquest_journals_2868473931
source Free E-Journals
subjects Annotations
Cost analysis
Datasets
Large language models
Performance evaluation
Quality assurance
Reading comprehension
Synthetic data
title Can LLMs Augment Low-Resource Reading Comprehension Datasets? Opportunities and Challenges
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T19%3A47%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Can%20LLMs%20Augment%20Low-Resource%20Reading%20Comprehension%20Datasets?%20Opportunities%20and%20Challenges&rft.jtitle=arXiv.org&rft.au=Samuel,%20Vinay&rft.date=2024-07-10&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2868473931%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2868473931&rft_id=info:pmid/&rfr_iscdi=true