Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research
The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence.
Saved in:
Published in: | Nature machine intelligence 2024-11, Vol.6 (12), p.1435-1442 |
---|---|
Main authors: | Trotsyuk, Artem A.; Waeiss, Quinn; Bhatia, Raina Talwar; Aponte, Brandon J.; Heffernan, Isabella M. L.; Madgavkar, Devika; Felder, Ryan Marshall; Lehmann, Lisa Soleymani; Palmer, Megan J.; Greely, Hank; Wald, Russell; Goetz, Lea; Trengove, Markus; Vandersluis, Robert; Lin, Herbert; Cho, Mildred K.; Altman, Russ B.; Endy, Drew; Relman, David A.; Levi, Margaret; Satz, Debra; Magnus, David |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 1442 |
---|---|
container_issue | 12 |
container_start_page | 1435 |
container_title | Nature machine intelligence |
container_volume | 6 |
creator | Trotsyuk, Artem A.; Waeiss, Quinn; Bhatia, Raina Talwar; Aponte, Brandon J.; Heffernan, Isabella M. L.; Madgavkar, Devika; Felder, Ryan Marshall; Lehmann, Lisa Soleymani; Palmer, Megan J.; Greely, Hank; Wald, Russell; Goetz, Lea; Trengove, Markus; Vandersluis, Robert; Lin, Herbert; Cho, Mildred K.; Altman, Russ B.; Endy, Drew; Relman, David A.; Levi, Margaret; Satz, Debra; Magnus, David |
description | The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence.
The wide adoption of AI in biomedical research raises concerns about misuse risks. Trotsyuk, Waeiss et al. propose a framework that provides a starting point for researchers to consider how risks specific to their work could be mitigated, using existing ethical frameworks, regulatory measures and off-the-shelf AI solutions. |
doi_str_mv | 10.1038/s42256-024-00926-3 |
format | Article |
fullrecord | Publisher: Nature Publishing Group UK (London). Publication date: 26 November 2024. Peer reviewed. Sources: Nature Journals Online; SpringerLink Journals - AutoHoldings. ORCID iDs: 0000-0002-4880-3043; 0000-0003-1306-4946; 0000-0003-3859-2905; 0000-0001-8331-1354 |
fulltext | fulltext |
identifier | ISSN: 2522-5839 |
ispartof | Nature machine intelligence, 2024-11, Vol.6 (12), p.1435-1442 |
issn | 2522-5839 (ISSN); 2522-5839 (EISSN) |
language | eng |
recordid | cdi_proquest_journals_3145910042 |
source | Nature Journals Online; SpringerLink Journals - AutoHoldings |
subjects | 706/648/179; 706/648/453; Ambient intelligence; Artificial intelligence; Bioengineering; Biomedical data; Biomedical research; Drug development; Engineering; Ethics; Perspective; R&D; Research & development; Researchers; Synthetic data; Systems development |
title | Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T20%3A57%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Toward%20a%20framework%20for%20risk%20mitigation%20of%20potential%20misuse%20of%20artificial%20intelligence%20in%20biomedical%20research&rft.jtitle=Nature%20machine%20intelligence&rft.au=Trotsyuk,%20Artem%20A.&rft.date=2024-11-26&rft.volume=6&rft.issue=12&rft.spage=1435&rft.epage=1442&rft.pages=1435-1442&rft.issn=2522-5839&rft.eissn=2522-5839&rft_id=info:doi/10.1038/s42256-024-00926-3&rft_dat=%3Cproquest_cross%3E3145910042%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3145910042&rft_id=info:pmid/&rfr_iscdi=true |