Medical large language models are vulnerable to data-poisoning attacks

The adoption of large language models (LLMs) in healthcare demands a careful analysis of their potential to spread false medical knowledge. Because LLMs ingest massive volumes of data from the open Internet during training, they are potentially exposed to unverified medical knowledge that may include...

Bibliographic Details
Published in: Nature Medicine, 2025-01
Main authors: Alber, Daniel Alexander, Yang, Zihao, Alyakin, Anton, Yang, Eunice, Rai, Sumedha, Valliani, Aly A, Zhang, Jeff, Rosenbaum, Gabriel R, Amend-Thomas, Ashley K, Kurland, David B, Kremer, Caroline M, Eremiev, Alexander, Negash, Bruck, Wiggan, Daniel D, Nakatsuka, Michelle A, Sangwon, Karl L, Neifert, Sean N, Khan, Hammad A, Save, Akshay Vinod, Palla, Adhith, Grin, Eric A, Hedman, Monika, Nasir-Moin, Mustafa, Liu, Xujin Chris, Jiang, Lavender Yao, Mankowski, Michal A, Segev, Dorry L, Aphinyanaphongs, Yindalon, Riina, Howard A, Golfinos, John G, Orringer, Daniel A, Kondziolka, Douglas, Oermann, Eric Karl
Format: Article
Language: English
Online access: Full text