Benchmarking ChatGPT for prototyping theories: Experimental studies using the technology acceptance model



Bibliographic Details
Published in: BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2023-12, Vol. 3 (4), p. 100153, Article 100153
Main Authors: Goh, Tiong-Thye, Dai, Xin, Yang, Yanwu
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary:
•We explore the paradigm of leveraging ChatGPT as a benchmark tool for theory prototyping in business research.
•We conducted two experimental studies using the classical technology acceptance model (TAM) to assess ChatGPT's capability of comprehending theoretical concepts, discriminating between constructs, and generating meaningful responses.
•Results showed that ChatGPT can generate responses aligned with the TAM theory and constructs.
This paper explores the paradigm of leveraging ChatGPT as a benchmark tool for theory prototyping in conceptual research. Specifically, we conducted two experimental studies using the classical technology acceptance model (TAM) to demonstrate and evaluate ChatGPT's capability of comprehending theoretical concepts, discriminating between constructs, and generating meaningful responses. Results of the two studies indicate that ChatGPT can generate responses aligned with the TAM theory and constructs. Key metrics of the measurement model, including factor loadings, internal consistency reliability, and convergent reliability, surpass the minimum thresholds, thus confirming the validity of TAM constructs. Moreover, supported hypotheses provide evidence for the nomological validity of TAM constructs. However, both studies show a high Heterotrait–Monotrait ratio of correlations (HTMT) among TAM constructs, raising a concern about discriminant validity. Furthermore, high rates of duplicated responses were identified, and potential biases regarding gender, usage experience, perceived usefulness, and behavioural intention were revealed in ChatGPT-generated samples. These findings call for additional effort in LLM research to address performance metrics related to duplicated responses, the strength of discriminant validity, the impact of prompt design, and the generalizability of findings across contexts.
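The discriminant-validity concern in the abstract rests on the Heterotrait–Monotrait ratio (HTMT): the mean correlation between items of two different constructs divided by the geometric mean of the mean within-construct item correlations. As an illustration of how this metric is computed (not the authors' code; the `htmt` function name, the item indices, and the example correlation matrix are hypothetical), a minimal sketch:

```python
import numpy as np

def htmt(corr, items_i, items_j):
    """Heterotrait-Monotrait ratio of correlations for two constructs.

    corr    : full item-by-item correlation matrix
    items_i : indices of the items measuring construct i
    items_j : indices of the items measuring construct j
    """
    corr = np.asarray(corr)
    # Mean heterotrait correlation: items of construct i vs. items of construct j.
    hetero = corr[np.ix_(items_i, items_j)].mean()

    # Mean monotrait correlation: off-diagonal entries within one construct's block.
    def mono(idx):
        block = corr[np.ix_(idx, idx)]
        return block[~np.eye(len(idx), dtype=bool)].mean()

    return hetero / np.sqrt(mono(items_i) * mono(items_j))

# Hypothetical example: two constructs with two items each, within-construct
# correlations of 0.80 and cross-construct correlations of 0.72.
corr = [[1.00, 0.80, 0.72, 0.72],
        [0.80, 1.00, 0.72, 0.72],
        [0.72, 0.72, 1.00, 0.80],
        [0.72, 0.72, 0.80, 1.00]]
print(htmt(corr, [0, 1], [2, 3]))  # 0.9
```

Values above the commonly cited 0.85 or 0.90 cut-offs, as reported in both studies here, suggest the constructs may not be empirically distinct.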
ISSN: 2772-4859
DOI:10.1016/j.tbench.2024.100153