Editorial Commentary: Large Language Models Like ChatGPT Show Promise, but Clinical Use of Artificial Intelligence Requires Physician Partnership
Published in: Arthroscopy, 2024-08
Main authors: ,
Format: Article
Language: English
Online access: Full text
Abstract: Forcing ChatGPT and other large language models to perform roles reserved for physicians and other health care professionals—namely evaluation, management, and triage—poses a threat from regulatory, risk management, and professional perspectives. The clinical practice of medicine would benefit tremendously from automated administrative support with systems-based transparency and fluidity—not substitution for clinical diagnostics and decision making. ChatGPT and other large language models are not intended or authorized for clinical use, let alone to be tested or rubber stamped for this application. The best clinical use cases of artificial intelligence require physician partnership to enable personal care, minimize administrative burden, maximize efficiency, and minimize risk—without substitution of core physician tasks.
ISSN: 0749-8063, 1526-3231
DOI: 10.1016/j.arthro.2024.08.029