Leveraging Large Language Models for Structure Learning in Prompted Weak Supervision
Format: Article
Language: English
Abstract: Prompted weak supervision (PromptedWS) applies pre-trained large language models (LLMs) as the basis for labeling functions (LFs) in a weak supervision framework to obtain large labeled datasets. We further extend the use of LLMs in the loop to address one of the key challenges in weak supervision: learning the statistical dependency structure among supervision sources. In this work, we ask the LLM how similar these prompted LFs are. We propose a Structure Refining Module, a simple yet effective first approach based on prompt similarities that takes advantage of the intrinsic structure of the embedding space. At the core of the Structure Refining Module are Labeling Function Removal (LaRe) and Correlation Structure Generation (CosGen). Compared to previous methods that learn dependencies from weak labels, our method finds dependencies that are intrinsic to the LFs and less dependent on the data. We show that our Structure Refining Module improves the PromptedWS pipeline by up to 12.7 points on benchmark tasks. We also explore the trade-offs between efficiency and performance with comprehensive ablation experiments and analysis. Code for this project can be found at https://github.com/BatsResearch/su-bigdata23-code.
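The abstract does not spell out the algorithms, but the core idea of deriving LF dependencies from prompt-embedding similarity can be sketched roughly as below. The function names, thresholds, and greedy selection are illustrative assumptions standing in for LaRe and CosGen, not the paper's implementation; see the linked repository for the actual code.

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between prompt embeddings (one row per LF prompt)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def remove_redundant_lfs(sim: np.ndarray, threshold: float = 0.95) -> list:
    """LaRe-style step (illustrative): greedily keep one LF from each group of
    near-duplicate prompts whose pairwise similarity exceeds `threshold`."""
    keep = []
    for i in range(sim.shape[0]):
        if all(sim[i, j] < threshold for j in keep):
            keep.append(i)
    return keep

def correlation_structure(sim: np.ndarray, kept: list, edge_threshold: float = 0.8) -> list:
    """CosGen-style step (illustrative): treat sufficiently similar prompt pairs as
    dependent, yielding candidate correlation edges for the downstream label model."""
    edges = []
    for a, i in enumerate(kept):
        for j in kept[a + 1:]:
            if sim[i, j] >= edge_threshold:
                edges.append((i, j))
    return edges

# Toy usage: random vectors stand in for real prompt-encoder output.
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))
sim = cosine_similarity_matrix(emb)
kept = remove_redundant_lfs(sim, threshold=0.95)
deps = correlation_structure(sim, kept, edge_threshold=0.8)
print("kept LFs:", kept, "candidate dependency edges:", deps)
```

In practice the embeddings would come from encoding the LF prompts with a sentence encoder, and the resulting edges would be passed to the label model as its dependency structure rather than printed.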
DOI: 10.48550/arxiv.2402.01867