Analyzing Large Language Model Chatbots: An Experimental Approach Using a Probability Test
Format: | Article |
Language: | eng |
Online access: | Order full text |
Abstract: | This study consists of qualitative empirical research, conducted through
exploratory tests with two different Large Language Model (LLM) chatbots:
ChatGPT and Gemini. The methodological procedure involved exploratory tests
based on prompts designed with a probability question. The "Linda Problem",
widely recognized in cognitive psychology, was used as a basis to create the
tests, along with the development of a new problem specifically for this
experiment, the "Mary Problem". The object of analysis is the dataset with the
outputs provided by each chatbot interaction. The purpose of the analysis is to
verify whether the chatbots mainly employ logical reasoning consistent with
probability theory or are more often swayed by the stereotypical textual
descriptions in the prompts. The findings provide insights into how each
chatbot handles logic and textual constructions, suggesting that, while the
analyzed chatbots perform satisfactorily on a well-known probabilistic
problem, their performance drops markedly on new tests that require direct
application of probabilistic logic. |
DOI: | 10.48550/arxiv.2407.12862 |
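The Linda Problem mentioned in the abstract is the classic probe for the conjunction fallacy: rating the conjunction "A and B" as more probable than "A" alone, which probability theory forbids since P(A ∩ B) ≤ P(A). As a minimal sketch (not code from the paper), a chatbot's probability estimates for such a test could be checked for coherence like this; the function name and example values are illustrative assumptions:

```python
def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """Return True if the estimates break P(A and B) <= P(A),
    i.e. the response exhibits the conjunction fallacy."""
    return p_a_and_b > p_a

# Hypothetical stereotype-driven answer: "bank teller AND feminist"
# rated 0.6, "bank teller" alone rated 0.3 -> incoherent.
print(violates_conjunction_rule(p_a=0.3, p_a_and_b=0.6))  # True

# Coherent answer: the conjunction is no more probable than the conjunct.
print(violates_conjunction_rule(p_a=0.3, p_a_and_b=0.1))  # False
```

A check like this is one way the study's question — logical reasoning versus stereotype-driven text completion — could be scored mechanically over a dataset of chatbot outputs.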