Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research


Detailed description

Bibliographic details
Published in: Nature Machine Intelligence, 2024-11, Vol. 6 (12), p. 1435-1442
Authors: Trotsyuk, Artem A., Waeiss, Quinn, Bhatia, Raina Talwar, Aponte, Brandon J., Heffernan, Isabella M. L., Madgavkar, Devika, Felder, Ryan Marshall, Lehmann, Lisa Soleymani, Palmer, Megan J., Greely, Hank, Wald, Russell, Goetz, Lea, Trengove, Markus, Vandersluis, Robert, Lin, Herbert, Cho, Mildred K., Altman, Russ B., Endy, Drew, Relman, David A., Levi, Margaret, Satz, Debra, Magnus, David
Format: Article
Language: English
Online access: Full text
Description
Abstract: The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increases in inequity and abuses of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence. The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions.
ISSN: 2522-5839
DOI: 10.1038/s42256-024-00926-3