ChatGPT's Astonishing Fabrications About Percy Ludgate

Bibliographic Details
Published in: IEEE Annals of the History of Computing, 2023-04, Vol. 45, No. 2, pp. 71–72
Authors: Randell, Brian; Coghlan, Brian; Hemmendinger, David
Format: Article
Language: English
Abstract: Since its release in November 2022, OpenAI's artificial intelligence (AI) chatbot ChatGPT has aroused great interest because of its impressive ability to provide well-formulated and detailed natural language responses to queries about a huge variety of topics. These responses are based on an immense set of training data, obtained from the Internet in 2021, and on information gained from interactions with its users. However, ChatGPT's users soon found that the answers they received to their queries were not always trustworthy. Indeed, OpenAI itself lists as one of ChatGPT's limitations that it “sometimes writes plausible-sounding but incorrect or nonsensical answers, [i.e.,] confident responses that cannot be grounded in any of its training data.” [4] (The term “hallucination” has come into use in the AI community for such responses, which are not unique to ChatGPT [5].)
ISSN: 1058-6180; 1934-1547
DOI: 10.1109/MAHC.2023.3272989