Benchmarking four large language models’ performance of addressing Chinese patients' inquiries about dry eye disease: A two-phase study

Full Description

Bibliographic Details
Published in: Heliyon 2024-07, Vol. 10 (14), p. e34391, Article e34391
Main Authors: Shi, Runhan; Liu, Steven; Xu, Xinwei; Ye, Zhengqiang; Yang, Jin; Le, Qihua; Qiu, Jini; Tian, Lijia; Wei, Anji; Shan, Kun; Zhao, Chen; Sun, Xinghuai; Zhou, Xingtao; Hong, Jiaxu
Format: Article
Language: English
Online Access: Full text
Description
Summary: To evaluate the performance of four large language models (LLMs)—GPT-4, PaLM 2, Qwen, and Baichuan 2—in generating responses to inquiries from Chinese patients about dry eye disease (DED). Two-phase study, comprising a cross-sectional test in the first phase and a real-world clinical assessment in the second phase. Eight board-certified ophthalmologists and 46 patients with DED participated. The chatbots' responses to Chinese patients' inquiries about DED were evaluated. In the first phase, six senior ophthalmologists subjectively rated the chatbots' responses on a 5-point Likert scale across five domains: correctness, completeness, readability, helpfulness, and safety. Objective readability analysis was performed using a Chinese readability analysis platform. In the second phase, 46 representative patients with DED asked questions of the two language models (GPT-4 and Baichuan 2) that performed best in the first phase and then rated the answers for satisfaction and readability. Two senior ophthalmologists then assessed the responses across the five domains. Outcomes were the subjective scores for the five domains and the objective readability scores in the first phase, and the patient satisfaction, readability scores, and subjective scores for the five domains in the second phase. In the first phase, GPT-4 exhibited superior performance across the five domains (correctness: 4.47; completeness: 4.39; readability: 4.47; helpfulness: 4.49; safety: 4.47, p 
ISSN: 2405-8440
DOI: 10.1016/j.heliyon.2024.e34391