RED-CT: A Systems Design Methodology for Using LLM-labeled Data to Train and Deploy Edge Classifiers for Computational Social Science
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subject terms: | |
| Online access: | Order full text |
| Abstract: | Large language models (LLMs) have enhanced our ability to rapidly analyze and classify unstructured natural language data. However, concerns regarding cost, network limitations, and security constraints have posed challenges for their integration into work processes. In this study, we adopt a systems design approach to employing LLMs as imperfect data annotators for downstream supervised learning tasks, introducing novel system intervention measures aimed at improving classification performance. Our methodology outperforms LLM-generated labels in seven of eight tests, demonstrating an effective strategy for incorporating LLMs into the design and deployment of specialized, supervised learning models present in many industry use cases. |
| DOI: | 10.48550/arxiv.2408.08217 |