Adversarial attacks on medical machine learning

Bibliographic details
Published in: Science (American Association for the Advancement of Science), 2019-03, Vol. 363 (6433), p. 1287-1289
Authors: Finlayson, Samuel G.; Bowers, John D.; Ito, Joichi; Zittrain, Jonathan L.; Beam, Andrew L.; Kohane, Isaac S.
Format: Article
Language: English
Online access: Full text
container_end_page 1289
container_issue 6433
container_start_page 1287
container_title Science (American Association for the Advancement of Science)
container_volume 363
creator Finlayson, Samuel G.; Bowers, John D.; Ito, Joichi; Zittrain, Jonathan L.; Beam, Andrew L.; Kohane, Isaac S.
description Emerging vulnerabilities demand new conversations. With public and academic attention increasingly focused on the new role of machine learning in the health information economy, an unusual and no-longer-esoteric category of vulnerabilities in machine-learning systems could prove important. These vulnerabilities allow a small, carefully designed change in how inputs are presented to a system to completely alter its output, causing it to confidently arrive at manifestly wrong conclusions. These advanced techniques to subvert otherwise-reliable machine-learning systems—so-called adversarial attacks—have, to date, been of interest primarily to computer science researchers (1). However, the landscape of often-competing interests within health care, and billions of dollars at stake in systems' outputs, implies considerable problems. We outline motivations that various players in the health care system may have to use adversarial attacks and begin a discussion of what to do about them. Far from discouraging continued innovation with medical machine learning, we call for active engagement of medical, technical, legal, and ethical experts in pursuit of efficient, broadly available, and effective health care that machine learning will enable.
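The mechanism the description refers to — a small, carefully designed change to the input that flips the output — can be sketched in miniature. The following is a hypothetical FGSM-style (fast gradient sign method) perturbation against a toy logistic classifier; the weights and input are synthetic and stand in for no real clinical system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a deployed classifier: a fixed logistic model over
# 100 input features. Weights and the input below are synthetic.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                       # hypothetical model weights
x = (-0.1 * w / np.linalg.norm(w)
     + 0.01 * rng.normal(size=100))            # benign input, true label y = 0

p_clean = sigmoid(w @ x)                       # model predicts class 0

# FGSM-style attack: nudge every feature by epsilon in the direction that
# increases the loss. For a logistic model with label y, the gradient of
# the cross-entropy loss with respect to the input is (sigmoid(w @ x) - y) * w.
y = 0.0
grad_x = (sigmoid(w @ x) - y) * w
epsilon = 0.05                                 # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)                     # prediction flips toward class 1
```

The sign step moves every feature by at most epsilon, yet each move is aligned with the corresponding weight, so the tiny per-feature effects accumulate across the input and shift the model's score far enough to reverse the decision.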
doi_str_mv 10.1126/science.aaw4399
format Article
publisher United States: American Association for the Advancement of Science
pmid 30898923
fulltext fulltext
identifier ISSN: 0036-8075
ispartof Science (American Association for the Advancement of Science), 2019-03, Vol.363 (6433), p.1287-1289
issn 0036-8075
1095-9203
language eng
recordid cdi_proquest_miscellaneous_2196521794
source American Association for the Advancement of Science; MEDLINE
subjects Artificial intelligence
Fraud
Health care
Humans
Innovations
Insurance Claim Review
Learning algorithms
Machine Learning
Medical innovations
POLICY FORUM
title Adversarial attacks on medical machine learning