The impact of ChatGPT on human data collection: A case study involving typicality norming data
Published in: Behavior Research Methods, 2023-10, Vol. 56 (5), p. 4974-4981
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Tools like ChatGPT, which allow people to unlock the potential of large language models (LLMs), have taken the world by storm. ChatGPT’s ability to produce written output of remarkable quality has inspired, or forced, academics to consider its consequences for both research and education. In particular, the question of what constitutes authorship, and how to evaluate (scientific) contributions, has received a lot of attention. However, its impact on (online) human data collection has mostly flown under the radar. The current paper examines how ChatGPT can be (mis)used in the context of generating norming data. We found that ChatGPT is able to produce sensible output, resembling that of human participants, for a typicality rating task. Moreover, the test–retest reliability of ChatGPT’s ratings was similar to that of human participants tested 1 day apart. We discuss the relevance of these findings in the context of (online) human data collection, focusing both on opportunities (e.g., (risk-)free pilot data) and challenges (e.g., data fabrication).
ISSN: 1554-351X, 1554-3528
DOI: 10.3758/s13428-023-02235-w
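
The abstract describes comparing the test–retest reliability of ChatGPT's typicality ratings with that of human participants rated one day apart. As a rough illustration of what such a reliability check involves (not the authors' actual procedure, items, or data), the sketch below correlates two hypothetical rating sessions for the same category exemplars; the exemplars and rating values are invented, and scipy's Pearson correlation is just one reasonable choice of reliability estimator.

```python
# A minimal sketch, not the authors' materials or analysis code: estimate
# test-retest reliability as the Pearson correlation between two sessions
# of typicality ratings for the same items. The category, exemplars, and
# rating values below are invented purely for illustration.
from scipy.stats import pearsonr

# Mean typicality ratings (1 = very atypical, 7 = very typical) for
# exemplars of the category "bird", collected in two sessions one day apart.
exemplars = ["robin", "sparrow", "eagle", "duck", "ostrich", "penguin"]
session_1 = [6.8, 6.6, 5.9, 5.1, 2.4, 1.9]
session_2 = [6.7, 6.4, 6.1, 4.9, 2.6, 2.1]

r, p = pearsonr(session_1, session_2)
print(f"test-retest reliability: r = {r:.2f}, p = {p:.3f}")
```

The same correlation could be computed for a set of ratings produced by an LLM queried twice with the same prompt, which is the kind of comparison the abstract refers to.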