Gaps Between Research and Practice When Measuring Representational Harms Caused by LLM-Based Systems
Abstract: To facilitate the measurement of representational harms caused by large
language model (LLM)-based systems, the NLP research community has produced and
made publicly available numerous measurement instruments, including tools,
datasets, metrics, benchmarks, annotation instructions, and other techniques.
However, the research community lacks clarity about whether and to what extent
these instruments meet the needs of practitioners tasked with developing and
deploying LLM-based systems in the real world, and about how these instruments could
be improved. Via a series of semi-structured interviews with practitioners in a
variety of roles in different organizations, we identify four types of
challenges that prevent practitioners from effectively using publicly available
instruments for measuring representational harms caused by LLM-based systems:
(1) challenges related to using publicly available measurement instruments; (2)
challenges related to doing measurement in practice; (3) challenges arising
from measurement tasks involving LLM-based systems; and (4) challenges specific
to measuring representational harms. Our goal is to advance the development of
instruments for measuring representational harms that are well-suited to
practitioner needs, thus better facilitating the responsible development and
deployment of LLM-based systems.
DOI: 10.48550/arxiv.2411.15662