Large Language Models Reflect the Ideology of their Creators
Saved in:

| Main authors: | , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Online access: | Order full text |
Abstract:

Large language models (LLMs) are trained on vast amounts of data to generate natural language, enabling them to perform tasks like text summarization and question answering. These models have become popular in artificial intelligence (AI) assistants like ChatGPT and already play an influential role in how humans access information. However, the behavior of LLMs varies depending on their design, training, and use.

In this paper, we uncover notable diversity in the ideological stance exhibited across different LLMs and the languages in which they are accessed. We do this by prompting a diverse panel of popular LLMs to describe a large number of prominent and controversial personalities from recent world history, both in English and in Chinese. By identifying and analyzing the moral assessments reflected in the generated descriptions, we find consistent normative differences between how the same LLM responds in Chinese compared to English. Similarly, we identify normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. Furthermore, popularly hypothesized disparities in political goals among Western models are reflected in significant normative differences related to inclusion, social inequality, and political scandals.

Our results show that the ideological stance of an LLM often reflects the worldview of its creators. This raises important concerns around technological and regulatory efforts with the stated aim of making LLMs ideologically 'unbiased', and it poses risks for political instrumentalization.
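The study design summarized in the abstract could be sketched roughly as follows. This is a minimal illustration only, assuming a hypothetical `query_model` API and a toy keyword-based scorer; the model names, figures, prompts, and scoring method are all placeholder assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of the study design in the abstract: prompt a panel of
# LLMs to describe the same personalities in English and Chinese, then score
# the moral stance of each generated description. Everything concrete below
# (models, figures, prompts, lexicons) is an illustrative assumption.

from statistics import mean

MODELS = ["western-model-a", "western-model-b", "non-western-model-a"]  # hypothetical panel
FIGURES = ["Person A", "Person B"]  # stand-ins for the historical personalities

# The same request posed in both languages, as described in the abstract.
PROMPTS = {
    "en": "Tell me about {name}.",
    "zh": "请介绍一下{name}。",
}

# Toy lexicons for illustration only; the paper derives moral assessments
# from the generated descriptions with a more careful analysis.
POSITIVE = {"visionary", "reformer", "celebrated", "hero"}
NEGATIVE = {"controversial", "authoritarian", "corrupt", "tyrant"}


def query_model(model: str, prompt: str) -> str:
    """Placeholder for an API call to one LLM in the panel (assumption)."""
    raise NotImplementedError


def moral_score(text: str) -> float:
    """Map a description to a crude moral-assessment score in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)


def collect_scores() -> dict:
    """Mean moral score per (model, language), for comparing normative stances."""
    scores = {}
    for model in MODELS:
        for lang, template in PROMPTS.items():
            per_figure = [
                moral_score(query_model(model, template.format(name=name)))
                for name in FIGURES
            ]
            scores[(model, lang)] = mean(per_figure)
    return scores
```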