Medical Artificial Intelligence and Human Values
Key Points: As large language models and other artificial intelligence models are used more in medicine, ethical dilemmas can arise depending on how the model was trained. A user must understand how human decisions and values can shape model outputs. Medical decision analysis offers lessons on measuring human values. A large language model will respond differently depending on the exact way a query is worded and how the model was directed by its makers and users. Caution is advised when considering the use of model output in decision making.
Saved in:
Published in: | The New England journal of medicine 2024-05, Vol.390 (20), p.1895-1904 |
---|---|
Main authors: | Yu, Kun-Hsing; Healey, Elizabeth; Leong, Tze-Yun; Kohane, Isaac S.; Manrai, Arjun K. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 1904 |
---|---|
container_issue | 20 |
container_start_page | 1895 |
container_title | The New England journal of medicine |
container_volume | 390 |
creator | Yu, Kun-Hsing; Healey, Elizabeth; Leong, Tze-Yun; Kohane, Isaac S.; Manrai, Arjun K. |
description | Key Points (Medical Artificial Intelligence and Human Values): As large language models and other artificial intelligence models are used more in medicine, ethical dilemmas can arise depending on how the model was trained. A user must understand how human decisions and values can shape model outputs. Medical decision analysis offers lessons on measuring human values. A large language model will respond differently depending on the exact way a query is worded and how the model was directed by its makers and users. Caution is advised when considering the use of model output in decision making. |
doi_str_mv | 10.1056/NEJMra2214183 |
format | Article |
fullrecord | (raw source XML record omitted) |
publisher | Massachusetts Medical Society (United States) |
contributor | Drazen, Jeffrey M. |
pmid | 38810186 |
eissn | 1533-4406 |
orcid | 0000-0001-9892-8218; 0000-0001-9657-9800 |
fulltext_pdf | https://www.nejm.org/doi/pdf/10.1056/NEJMra2214183 |
fulltext_html | https://www.nejm.org/doi/full/10.1056/NEJMra2214183 |
rights | Copyright © 2024 Massachusetts Medical Society. All rights reserved. |
lds50 | peer_reviewed |
fulltext | fulltext |
identifier | ISSN: 0028-4793 |
ispartof | The New England journal of medicine, 2024-05, Vol.390 (20), p.1895-1904 |
issn | 0028-4793; 1533-4406 |
language | eng |
recordid | cdi_proquest_miscellaneous_3062531768 |
source | MEDLINE; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; New England Journal of Medicine |
subjects | Accuracy; and Education; and Education General; Artificial intelligence; Artificial Intelligence - ethics; Artificial Intelligence - standards; Bias; Chronic Kidney Disease; Clinical Decision-Making - ethics; Clinical Reasoning; Creatinine; Decision making; Emergency Medicine; Emergency Medicine General; Growth and Development; Growth hormones; Health Care Delivery; Health IT; Health Policy; Humans; Kidney Transplantation; Language; Medical Ethics; Medical Practice; Nephrology; Nephrology General; Pediatrics; Pediatrics General; Pharmaceutical industry; Prescription drugs; Quality of Care; Risk; Social Values; Surgery; Surgery General; Training; Transplantation |
title | Medical Artificial Intelligence and Human Values |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T05%3A10%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Medical%20Artificial%20Intelligence%20and%20Human%20Values&rft.jtitle=The%20New%20England%20journal%20of%20medicine&rft.au=Yu,%20Kun-Hsing&rft.date=2024-05-30&rft.volume=390&rft.issue=20&rft.spage=1895&rft.epage=1904&rft.pages=1895-1904&rft.issn=0028-4793&rft.eissn=1533-4406&rft_id=info:doi/10.1056/NEJMra2214183&rft_dat=%3Cproquest_cross%3E3061984025%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3061984025&rft_id=info:pmid/38810186&rfr_iscdi=true |