ChatGPT vs Social Surveys: Probing the Objective and Subjective Human Society

Bibliographic Details
Main Authors: Zhou, Muzhi; Yu, Lu; Geng, Xiaomin; Luo, Lan
Format: Article
Language: eng
Description
Abstract: The extent to which Large Language Models (LLMs) can simulate the data-generating process of social surveys remains unclear. Existing research has not thoroughly assessed potential biases in the sociodemographic population represented within the language model's framework, and the subjective worlds of LLMs often show inconsistencies between their responses and those of groups of human respondents. In this paper, we used ChatGPT-3.5 to simulate the sampling process and generated six socioeconomic characteristics of the 2020 US population. We also analyzed responses to questions about income inequality and gender roles to explore GPT's subjective attitudes. Using repeated random sampling, we constructed a sampling distribution to identify the parameters of the GPT-generated population and compared these with Census data. Our findings show some alignment of gender and age means with the actual 2020 US population, but clear mismatches in the distributions of racial and educational groups. There were also significant differences between the distribution of GPT's responses and human self-reported attitudes. While GPT's point estimates for income-related attitudes occasionally align with the human population mean, its responses follow a normal distribution that diverges from the human response distribution. On gender relations, GPT's answers cluster in the most frequently chosen category, a deterministic pattern. We conclude by emphasizing the distinct design philosophies of LLMs and social surveys: LLMs aim to predict the single most suitable answer, whereas social surveys seek to reveal the heterogeneity among social groups.
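
The repeated-random-sampling design described in the abstract can be illustrated with a short Python sketch. This is not the authors' code: query_gpt_persona is a hypothetical placeholder for an actual ChatGPT-3.5 call, and CENSUS_MEAN_AGE is an assumed illustrative benchmark, not a figure from the paper. The sketch only shows the statistical skeleton the abstract describes: draw many simulated samples of LLM-generated respondents, compute a statistic per sample, and compare the resulting sampling distribution against a Census value.

    import random
    import statistics

    CENSUS_MEAN_AGE = 38.8  # assumed illustrative benchmark, not from the paper

    def query_gpt_persona() -> dict:
        # Hypothetical stand-in for prompting the LLM to "sample" one
        # respondent; a real run would send a prompt and parse the reply.
        return {"age": random.gauss(38.0, 14.0)}  # placeholder distribution

    def sampling_distribution(n_samples: int = 200, sample_size: int = 100) -> list:
        # Repeat the sampling process, recording the mean age of each sample.
        means = []
        for _ in range(n_samples):
            ages = [query_gpt_persona()["age"] for _ in range(sample_size)]
            means.append(statistics.fmean(ages))
        return means

    means = sampling_distribution()
    est = statistics.fmean(means)
    se = statistics.stdev(means)
    print(f"GPT-population mean age: {est:.2f} (SE {se:.2f})")
    print(f"Census benchmark: {CENSUS_MEAN_AGE} (z = {(est - CENSUS_MEAN_AGE) / se:.2f})")

Under this setup the standard deviation of the per-sample means acts as a standard error, so a large z-score flags a GPT-generated parameter that is unlikely to match the Census benchmark.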
DOI: 10.48550/arxiv.2409.02601