Assessing ChatGPT Responses to Common Patient Questions Regarding Total Hip Arthroplasty


Bibliographic Details
Published in: Journal of Bone and Joint Surgery. American Volume, 2023-10, Vol. 105 (19), p. 1519-1526
Main authors: Mika, Aleksander P., Martin, J. Ryan, Engstrom, Stephen M., Polkowski, Gregory G., Wilson, Jacob M.
Format: Article
Language: English
Online access: Full text
Description

Abstract: The contemporary patient has access to numerous resources on common orthopaedic procedures before ever presenting for a clinical evaluation. Recently, artificial intelligence (AI)-driven chatbots have become mainstream, allowing patients to engage with interfaces that supply convincing, human-like responses to prompts. ChatGPT (OpenAI), a recently developed AI-based chat technology, is one such application that has garnered rapid growth in popularity. Given the likelihood that patients may soon call on this technology for preoperative education, we sought to determine whether ChatGPT could appropriately answer frequently asked questions regarding total hip arthroplasty (THA). Ten frequently asked questions regarding THA were posed to the chatbot during a single conversation thread, with no follow-up questions or repetition. Each response was analyzed for accuracy with use of an evidence-based approach. Responses were rated as "excellent response not requiring clarification," "satisfactory requiring minimal clarification," "satisfactory requiring moderate clarification," or "unsatisfactory requiring substantial clarification." Of the responses given by the chatbot, only 1 received an "unsatisfactory" rating; 2 did not require any correction, and the majority required either minimal (4 of 10) or moderate (3 of 10) clarification. Although several responses required nuanced clarification, the chatbot's responses were generally unbiased and evidence-based, even for controversial topics. The chatbot effectively provided evidence-based responses to questions commonly asked by patients prior to THA and presented the information in a way that most patients would be able to understand. This resource may serve as a valuable clinical tool for patient education and understanding prior to orthopaedic consultation in the future.
ISSN: 0021-9355, 1535-1386
DOI: 10.2106/JBJS.23.00209