(A)I Am Not a Lawyer, But...: Engaging Legal Experts towards Responsible LLM Policies for Legal Advice
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large language models (LLMs) are increasingly capable of providing users with advice in a wide range of professional domains, including legal advice. However, relying on LLMs for legal queries raises concerns due to the significant expertise required and the potential real-world consequences of the advice. To explore *when* and *why* LLMs should or should not provide advice to users, we conducted workshops with 20 legal experts using methods inspired by case-based reasoning. Realistic queries ("cases") provided to the experts allowed them to examine granular, situation-specific concerns and overarching technical and legal constraints, producing a concrete set of contextual considerations for LLM developers. By synthesizing the factors that affected the appropriateness of LLM responses, we present a four-dimension framework: (1) User attributes and behaviors, (2) Nature of queries, (3) AI capabilities, and (4) Social impacts. We share the experts' recommendations for LLM response strategies, which center on helping users identify the "right questions to ask" and relevant information rather than providing definitive legal judgments. Our findings reveal novel legal considerations, such as unauthorized practice of law, confidentiality, and liability for inaccurate advice, that have been overlooked in the literature. The case-based deliberation method enabled us to elicit fine-grained, practice-informed insights that surpass those from de-contextualized surveys or speculative principles. These findings underscore the applicability of our method for translating domain-specific professional knowledge and practices into policies that can guide LLM behavior in a more responsible direction.
DOI: 10.48550/arxiv.2402.01864