Adaptable Moral Stances of Large Language Models on Sexist Content: Implications for Society and Gender Discourse
Format: Article
Language: English
Abstract: This work provides an explanatory view of how LLMs apply moral reasoning to both criticize and defend sexist language. We assessed eight large language models, all of which demonstrated the capability to provide explanations grounded in varying moral perspectives both for critiquing and for endorsing views that reflect sexist assumptions. With both human and automatic evaluation, we show that all eight models produce comprehensible and contextually relevant text, which helps in understanding diverse views on how sexism is perceived. Furthermore, by analyzing the moral foundations cited by LLMs in their arguments, we uncover the diverse ideological perspectives in the models' outputs, with some models aligning more closely with progressive or conservative views on gender roles and sexism. Based on our observations, we caution against the potential misuse of LLMs to justify sexist language. We also highlight that LLMs can serve as tools for understanding the roots of sexist beliefs and for designing well-informed interventions. Given this dual capacity, it is crucial to monitor LLMs and design safety mechanisms for their use in applications involving sensitive societal topics such as sexism.
DOI: 10.48550/arxiv.2410.00175