Opening the black box of AI‐Medicine

One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regard as a “black box.” Machine learning (ML) can assist in reading radiological, endoscopic, and histological images, suggesting diagnoses, predicting disease outcomes, and even recommending therapeutic and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinicians' doubts, patients' lack of confidence in AI‐powered technologies also hampers development: while patients may accept that human errors occur, they are expected to show little tolerance for machine error. To implement AI in medicine successfully, the interpretability of ML algorithms needs to improve. Opening the black box of AI medicine calls for a stepwise approach, in which small steps of biological explanation and clinical experience built into ML algorithms help to establish trust and acceptance. AI software developers will have to demonstrate clearly that, when ML technologies are integrated into the clinical decision‐making process, they actually help to improve clinical outcomes. Enhancing the interpretability of ML algorithms is a crucial step in adopting AI in medicine.

Detailed description

Saved in:
Bibliographic details
Published in: Journal of gastroenterology and hepatology, 2021-03, Vol. 36 (3), p. 581-584
Main authors: Poon, Aaron I F; Sung, Joseph J Y
Format: Article
Language: English
Subjects: Algorithms; Artificial intelligence; black box; Decision making; gastroenterology; Human error; Learning algorithms; Machine learning; Medicine
Online access: Full text
DOI: 10.1111/jgh.15384
fullrecord <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_miscellaneous_2501259558</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2501259558</sourcerecordid><originalsourceid>FETCH-LOGICAL-c4544-d9e9366db0b4cda615ddbe527d9b075d780cbb408a972b4036388f560f6373143</originalsourceid><addsrcrecordid>eNp10M9OwkAQBvCN0QiiB1_ANDExeijMdv9090iIAgbDRc-bbncLxdJiF6LcfASf0SdxsejBxLnM5ZdvJh9C5xi62E9vMZt3MSOCHqA2phRCHFN-iNogMAslwbKFTpxbAACFmB2jFiExSA6yja6mK1vm5SxYz22giyR9DnT1FlRZ0B9_vn88WJOneWlP0VGWFM6e7XcHPd3dPg5G4WQ6HA_6kzCljNLQSCsJ50aDpqlJOGbGaMui2EjtL5tYQKo1BZHIOPKbcCJExjhknMQEU9JB103uqq5eNtat1TJ3qS2KpLTVxqmIAY6YZEx4evmHLqpNXfrvdgqw4FhIr24aldaVc7XN1KrOl0m9VRjUrjzly1Pf5Xl7sU_c6KU1v_KnLQ96DXjNC7v9P0ndD0dN5BdN4XVq</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2500186189</pqid></control><display><type>article</type><title>Opening the black box of AI‐Medicine</title><source>Wiley Online Library Journals Frontfile Complete</source><creator>Poon, Aaron I F ; Sung, Joseph J Y</creator><creatorcontrib>Poon, Aaron I F ; Sung, Joseph J Y</creatorcontrib><description>One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regarded as a “black box.” Machine Learning (ML) can assist in reading radiological, endoscopic and histological pictures, suggesting diagnosis and predict disease outcome, and even recommending therapy and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinician's doubt, patients lacking confidence with AI‐powered technologies also hamper development. While they may accept the reality that human errors can occur, little tolerance of machine error is anticipated. In order to implement AI medicine successfully, interpretability of ML algorithm needs to improve. Opening the black box in AI medicine needs to take a stepwise approach. Small steps of biological explanation and clinical experience in ML algorithm can help to build trust and acceptance. AI software developers will have to clearly demonstrate that when the ML technologies are integrated into the clinical decision‐making process, they can actually help to improve clinical outcome. 
Enhancing interpretability of ML algorithm is a crucial step in adopting AI in medicine.</description><identifier>ISSN: 0815-9319</identifier><identifier>EISSN: 1440-1746</identifier><identifier>DOI: 10.1111/jgh.15384</identifier><identifier>PMID: 33709609</identifier><language>eng</language><publisher>Australia: Wiley Subscription Services, Inc</publisher><subject>Algorithms ; Artificial intelligence ; black box ; Decision making ; gastroenterology ; Human error ; Learning algorithms ; Machine learning ; Medicine</subject><ispartof>Journal of gastroenterology and hepatology, 2021-03, Vol.36 (3), p.581-584</ispartof><rights>2021 Journal of Gastroenterology and Hepatology Foundation and John Wiley &amp; Sons Australia, Ltd</rights><rights>2021 Journal of Gastroenterology and Hepatology Foundation and John Wiley &amp; Sons Australia, Ltd.</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c4544-d9e9366db0b4cda615ddbe527d9b075d780cbb408a972b4036388f560f6373143</citedby><cites>FETCH-LOGICAL-c4544-d9e9366db0b4cda615ddbe527d9b075d780cbb408a972b4036388f560f6373143</cites></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktopdf>$$Uhttps://onlinelibrary.wiley.com/doi/pdf/10.1111%2Fjgh.15384$$EPDF$$P50$$Gwiley$$H</linktopdf><linktohtml>$$Uhttps://onlinelibrary.wiley.com/doi/full/10.1111%2Fjgh.15384$$EHTML$$P50$$Gwiley$$H</linktohtml><link.rule.ids>314,776,780,1411,27901,27902,45550,45551</link.rule.ids><backlink>$$Uhttps://www.ncbi.nlm.nih.gov/pubmed/33709609$$D View this record in MEDLINE/PubMed$$Hfree_for_read</backlink></links><search><creatorcontrib>Poon, Aaron I F</creatorcontrib><creatorcontrib>Sung, Joseph J Y</creatorcontrib><title>Opening the black box of AI‐Medicine</title><title>Journal of gastroenterology and hepatology</title><addtitle>J Gastroenterol Hepatol</addtitle><description>One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regarded as a “black box.” Machine Learning (ML) can assist in reading radiological, endoscopic and histological pictures, suggesting diagnosis and predict disease outcome, and even recommending therapy and surgical decisions. However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinician's doubt, patients lacking confidence with AI‐powered technologies also hamper development. While they may accept the reality that human errors can occur, little tolerance of machine error is anticipated. In order to implement AI medicine successfully, interpretability of ML algorithm needs to improve. Opening the black box in AI medicine needs to take a stepwise approach. Small steps of biological explanation and clinical experience in ML algorithm can help to build trust and acceptance. AI software developers will have to clearly demonstrate that when the ML technologies are integrated into the clinical decision‐making process, they can actually help to improve clinical outcome. 
Enhancing interpretability of ML algorithm is a crucial step in adopting AI in medicine.</description><subject>Algorithms</subject><subject>Artificial intelligence</subject><subject>black box</subject><subject>Decision making</subject><subject>gastroenterology</subject><subject>Human error</subject><subject>Learning algorithms</subject><subject>Machine learning</subject><subject>Medicine</subject><issn>0815-9319</issn><issn>1440-1746</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2021</creationdate><recordtype>article</recordtype><recordid>eNp10M9OwkAQBvCN0QiiB1_ANDExeijMdv9090iIAgbDRc-bbncLxdJiF6LcfASf0SdxsejBxLnM5ZdvJh9C5xi62E9vMZt3MSOCHqA2phRCHFN-iNogMAslwbKFTpxbAACFmB2jFiExSA6yja6mK1vm5SxYz22giyR9DnT1FlRZ0B9_vn88WJOneWlP0VGWFM6e7XcHPd3dPg5G4WQ6HA_6kzCljNLQSCsJ50aDpqlJOGbGaMui2EjtL5tYQKo1BZHIOPKbcCJExjhknMQEU9JB103uqq5eNtat1TJ3qS2KpLTVxqmIAY6YZEx4evmHLqpNXfrvdgqw4FhIr24aldaVc7XN1KrOl0m9VRjUrjzly1Pf5Xl7sU_c6KU1v_KnLQ96DXjNC7v9P0ndD0dN5BdN4XVq</recordid><startdate>202103</startdate><enddate>202103</enddate><creator>Poon, Aaron I F</creator><creator>Sung, Joseph J Y</creator><general>Wiley Subscription Services, Inc</general><scope>NPM</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7T5</scope><scope>7U9</scope><scope>H94</scope><scope>7X8</scope></search><sort><creationdate>202103</creationdate><title>Opening the black box of AI‐Medicine</title><author>Poon, Aaron I F ; Sung, Joseph J Y</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c4544-d9e9366db0b4cda615ddbe527d9b075d780cbb408a972b4036388f560f6373143</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2021</creationdate><topic>Algorithms</topic><topic>Artificial intelligence</topic><topic>black box</topic><topic>Decision making</topic><topic>gastroenterology</topic><topic>Human error</topic><topic>Learning algorithms</topic><topic>Machine learning</topic><topic>Medicine</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Poon, Aaron I F</creatorcontrib><creatorcontrib>Sung, Joseph J Y</creatorcontrib><collection>PubMed</collection><collection>CrossRef</collection><collection>Immunology Abstracts</collection><collection>Virology and AIDS Abstracts</collection><collection>AIDS and Cancer Research Abstracts</collection><collection>MEDLINE - Academic</collection><jtitle>Journal of gastroenterology and hepatology</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Poon, Aaron I F</au><au>Sung, Joseph J Y</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Opening the black box of AI‐Medicine</atitle><jtitle>Journal of gastroenterology and hepatology</jtitle><addtitle>J Gastroenterol Hepatol</addtitle><date>2021-03</date><risdate>2021</risdate><volume>36</volume><issue>3</issue><spage>581</spage><epage>584</epage><pages>581-584</pages><issn>0815-9319</issn><eissn>1440-1746</eissn><abstract>One of the biggest challenges of utilizing artificial intelligence (AI) in medicine is that physicians are reluctant to trust and adopt something that they do not fully understand and regarded as a “black box.” Machine Learning (ML) can assist in reading radiological, endoscopic and histological pictures, suggesting diagnosis and predict disease outcome, and even recommending therapy and surgical decisions. 
However, clinical adoption of these AI tools has been slow because of a lack of trust. Besides clinician's doubt, patients lacking confidence with AI‐powered technologies also hamper development. While they may accept the reality that human errors can occur, little tolerance of machine error is anticipated. In order to implement AI medicine successfully, interpretability of ML algorithm needs to improve. Opening the black box in AI medicine needs to take a stepwise approach. Small steps of biological explanation and clinical experience in ML algorithm can help to build trust and acceptance. AI software developers will have to clearly demonstrate that when the ML technologies are integrated into the clinical decision‐making process, they can actually help to improve clinical outcome. Enhancing interpretability of ML algorithm is a crucial step in adopting AI in medicine.</abstract><cop>Australia</cop><pub>Wiley Subscription Services, Inc</pub><pmid>33709609</pmid><doi>10.1111/jgh.15384</doi><tpages>4</tpages><oa>free_for_read</oa></addata></record>
ISSN: 0815-9319
EISSN: 1440-1746
Record ID: cdi_proquest_miscellaneous_2501259558
Source: Wiley Online Library Journals Frontfile Complete