Structured clinical reasoning prompt enhances LLM's diagnostic capabilities in Diagnosis Please quiz cases


Detailed Description

Bibliographic Details
Published in: Japanese Journal of Radiology, 2024-12
Authors: Sonoda, Yuki; Kurokawa, Ryo; Hagiwara, Akifumi; Asari, Yusuke; Fukushima, Takahiro; Kanzawa, Jun; Gonoi, Wataru; Abe, Osamu
Format: Article
Language: English
Online access: Full text
Description
Abstract: Large Language Models (LLMs) show promise in medical diagnosis, but their performance varies with prompting. Recent studies suggest that modifying prompts may enhance diagnostic capabilities. This study aimed to test whether a prompting approach that aligns with general clinical reasoning methodology—specifically, using a standardized template to first organize clinical information into predefined categories (patient information, history, symptoms, examinations, etc.) before making diagnoses, instead of one-step processing—can enhance the LLM's medical diagnostic capabilities. Three hundred twenty-two quiz questions from Radiology's Diagnosis Please cases (1998-2023) were used. We employed Claude 3.5 Sonnet, a state-of-the-art LLM, to compare three approaches: (1) Baseline: a conventional zero-shot chain-of-thought prompt; (2) Two-step approach: the LLM first systematically organizes clinical information into two distinct categories (patient history and imaging findings), then separately analyzes this organized information to provide diagnoses; and (3) Summary-only approach: using only the LLM-generated summary for diagnoses. The two-step approach significantly outperformed both the baseline and summary-only approaches in diagnostic accuracy, as determined by McNemar's test. Primary diagnostic accuracy was 60.6% for the two-step approach, compared to 56.5% for baseline (p = 0.042) and 56.3% for summary-only (p = 0.035). For the top three diagnoses, accuracy was 70.5%, 66.5%, and 65.5%, respectively (p = 0.005 vs. baseline, p = 0.008 vs. summary-only). No significant differences were observed between the baseline and summary-only approaches. Our results indicate that a structured clinical reasoning approach enhances the LLM's diagnostic accuracy. This method shows potential as a valuable tool for deriving diagnoses from free-text clinical information. The approach aligns well with established clinical reasoning processes, suggesting its potential applicability in real-world clinical settings.
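The pairwise comparisons in the abstract use McNemar's test, which compares two methods evaluated on the same set of cases by looking only at the discordant pairs (cases one method got right and the other got wrong). A minimal sketch of the exact binomial form of the test; the discordant-pair counts in the usage comment are hypothetical, since the abstract reports only accuracies and p-values:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact (binomial) McNemar test p-value for paired binary outcomes.

    b: cases correct under method A but not method B.
    c: cases correct under method B but not method A.
    Concordant cases (both correct or both wrong) do not affect the test.
    Under H0, each discordant case favors either method with probability 0.5.
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # Two-sided p-value: double the smaller binomial tail, capped at 1.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical example: 10 cases favor method A, 2 favor method B.
p = mcnemar_exact(10, 2)  # ~0.039, significant at the 0.05 level
```

The test is symmetric in its two arguments, so which method is labeled A or B does not change the p-value.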
ISSN: 1867-108X
DOI:10.1007/s11604-024-01712-2