Case Law Grounding: Using Precedents to Align Decision-Making for Humans and AI
Format: Article
Language: English
Abstract: Communities and groups often need to make decisions grounded by social norms and preferences, such as when moderating content or providing judgments for aligning AI systems. Prevailing approaches to provide this grounding have primarily centered around constructing high-level guidelines and criteria, similar to legal "constitutions". However, it can be challenging to specify social norms and preferences consistently and accurately through constitutions alone. In this work, we take inspiration from legal systems and introduce "case law grounding" (CLG), a novel approach for grounding decision-making that uses past cases and decisions (precedents) to ground future decisions in a way that can be utilized by human-led processes or implemented through prompting large language models (LLMs). We evaluate how accurately CLG grounds decisions with five groups and communities spread across two decision-task domains, comparing against a traditional constitutional grounding approach, and find that in 4 out of 5 groups, decisions produced with CLG were significantly more accurately aligned to ground truth: 16.0–23.3 percentage points higher accuracy using the human-led process, and 20.8–32.9 percentage points higher when prompting LLMs. We also evaluate the impact of different configurations of CLG, such as the case retrieval window size and whether to enforce binding decisions based on selected precedents, showing support for using binding decisions and preferring larger retrieval windows. Finally, we discuss the limitations of our case-based approach as well as how it may be best used to augment existing constitutional approaches when it comes to aligning human and AI decisions.
DOI: 10.48550/arxiv.2310.07019