RED-CT: A Systems Design Methodology for Using LLM-labeled Data to Train and Deploy Edge Classifiers for Computational Social Science

Large language models (LLMs) have enhanced our ability to rapidly analyze and classify unstructured natural language data. However, concerns regarding cost, network limitations, and security constraints have posed challenges for their integration into work processes. In this study, we adopt a systems design approach to employing LLMs as imperfect data annotators for downstream supervised learning tasks, introducing novel system intervention measures aimed at improving classification performance. Our methodology outperforms LLM-generated labels in seven of eight tests, demonstrating an effective strategy for incorporating LLMs into the design and deployment of specialized, supervised learning models present in many industry use cases.
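The general pattern the abstract describes — using an imperfect LLM to label data, then training a small supervised "edge" classifier on those labels — can be sketched as below. This is an illustrative stand-in, not the paper's RED-CT method: the mock labeler, the confidence-threshold filter (one plausible example of a "system intervention measure"), and the bag-of-words Naive Bayes classifier are all assumptions for demonstration.

```python
import math
from collections import Counter, defaultdict

def mock_llm_label(text):
    """Stand-in for an LLM annotator: returns (label, confidence).
    A deployed system would call a hosted model here instead."""
    pos = {"great", "good", "love", "excellent"}
    neg = {"bad", "awful", "hate", "terrible"}
    toks = text.lower().split()
    p = sum(t in pos for t in toks)
    n = sum(t in neg for t in toks)
    if p + n == 0:
        return "pos", 0.5  # no signal: low confidence
    return ("pos" if p >= n else "neg"), max(p, n) / (p + n)

def train_edge_classifier(texts, min_conf=0.8):
    """Label texts with the (imperfect) LLM, keep only high-confidence
    labels -- a simple stand-in for an intervention measure -- and fit
    a bag-of-words Naive Bayes small enough to run on the edge."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text in texts:
        label, conf = mock_llm_label(text)
        if conf < min_conf:
            continue  # discard uncertain annotations before training
        label_counts[label] += 1
        for tok in text.lower().split():
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Laplace-smoothed Naive Bayes prediction."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, count in label_counts.items():
        lp = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

corpus = [
    "great excellent product love it",
    "awful terrible experience hate it",
    "good movie great cast",
    "bad plot awful acting",
]
model = train_edge_classifier(corpus)
print(predict(model, "great acting"))  # → pos
```

The confidence filter is only one possible intervention; the paper evaluates its own set of measures, which this record does not detail.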

Bibliographic Details
Authors: Farr, David; Manzonelli, Nico; Cruickshank, Iain; West, Jevin
Format: Article
Language: English
Online access: order full text
DOI: 10.48550/arxiv.2408.08217
Date: 2024-08-15
Rights: http://creativecommons.org/licenses/by/4.0 (open access)
Source: arXiv.org
Subjects: Computer Science - Learning; Computer Science - Social and Information Networks