Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions
•Implicit bias is well documented in medicine, specifically surgery.•Implicit bias, specifically gender inequities, have been previously seen in letters of recommendation.•Artificial intelligence large language models, such as chatGPT, echo existing gender inequities when asked to write letters of r...
Gespeichert in:
Veröffentlicht in: | Journal of surgical education 2024-11, Vol.81 (11), p.1553-1557 |
---|---|
Hauptverfasser: | , , , , |
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
container_end_page | 1557 |
---|---|
container_issue | 11 |
container_start_page | 1553 |
container_title | Journal of surgical education |
container_volume | 81 |
creator | Desai, Pooja Wang, Hao Davis, Lindy Ullmann, Timothy M. DiBrito, Sandra R. |
description | •Implicit bias is well documented in medicine, specifically surgery.•Implicit bias, specifically gender inequities, have been previously seen in letters of recommendation.•Artificial intelligence large language models, such as chatGPT, echo existing gender inequities when asked to write letters of recommendation for academic promotion in surgery.
Gender inequities persist in academic surgery with implicit bias impacting hiring and promotion at all levels. We hypothesized that creating letters of recommendation for both female and male candidates for academic promotion in surgery using an AI platform, ChatGPT, would elucidate the entrained gender biases already present in the promotion process.
Using ChatGPT, we generated 6 letters of recommendation for “a phenomenal surgeon applying for job promotion to associate professor position”, specifying “female” or “male” before surgeon in the prompt. We compared 3 “female” letters to 3 “male” letters for differences in length, language, and tone.
The letters written for females averaged 298 words compared to 314 for males. Female letters more frequently referred to “compassion”, “empathy”, and “inclusivity”; whereas male letters referred to “respect”, “reputation”, and “skill”.
These findings highlight the gender bias present in promotion letters generated by ChatGPT, reiterating existing literature regarding real letters of recommendation in academic surgery. Our study suggests that surgeons should use AI tools, such as ChatGPT, with caution when writing LORs for academic surgery faculty promotion. |
doi_str_mv | 10.1016/j.jsurg.2024.07.023 |
format | Article |
fullrecord | <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_proquest_miscellaneous_3100914155</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><els_id>S1931720424003465</els_id><sourcerecordid>3100914155</sourcerecordid><originalsourceid>FETCH-LOGICAL-c239t-9538e47fdd6fcc9e126588b36d8ab209d28d7cc5bc2ad2a7c7e9f411392f66053</originalsourceid><addsrcrecordid>eNp9kFtLwzAUgIMobk5_gSB59KU1l7ZpBB906BwMHDh9DVlyqhlruyWtsH9v66aPPp3D4Tu3D6FLSmJKaHazileh9R8xIyyJiYgJ40doSHORRyJJ2XGXS04jwUgyQGchrAhJE8nkKRpwyTjjhA_R-4PTAc_Bb6BpdQMB94VbPP7UzWS-wDPQvgp4ApUFj6cVbFvXuA5zFb432kLpDH7tzgC_w3Nfl3Xj6iqco5NCrwNcHOIIvT09LsbP0exlMh3fzyLDuGwimfIcElFYmxXGSKAsS_N8yTOb6yUj0rLcCmPSpWHaMi2MAFkklHb3F1lGUj5C1_u5G19vWwiNKl0wsF7rCuo2KE4JkTShaY_yPWp8HYKHQm28K7XfKUpUL1St1I9Q1QtVRKhOaNd1dVjQLkuwfz2_Bjvgbg9A9-aXA6-CcVAZsM6DaZSt3b8LvgFTlYfU</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>3100914155</pqid></control><display><type>article</type><title>Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions</title><source>MEDLINE</source><source>ScienceDirect Journals (5 years ago - present)</source><creator>Desai, Pooja ; Wang, Hao ; Davis, Lindy ; Ullmann, Timothy M. ; DiBrito, Sandra R.</creator><creatorcontrib>Desai, Pooja ; Wang, Hao ; Davis, Lindy ; Ullmann, Timothy M. ; DiBrito, Sandra R.</creatorcontrib><description>•Implicit bias is well documented in medicine, specifically surgery.•Implicit bias, specifically gender inequities, have been previously seen in letters of recommendation.•Artificial intelligence large language models, such as chatGPT, echo existing gender inequities when asked to write letters of recommendation for academic promotion in surgery.
Gender inequities persist in academic surgery with implicit bias impacting hiring and promotion at all levels. We hypothesized that creating letters of recommendation for both female and male candidates for academic promotion in surgery using an AI platform, ChatGPT, would elucidate the entrained gender biases already present in the promotion process.
Using ChatGPT, we generated 6 letters of recommendation for “a phenomenal surgeon applying for job promotion to associate professor position”, specifying “female” or “male” before surgeon in the prompt. We compared 3 “female” letters to 3 “male” letters for differences in length, language, and tone.
The letters written for females averaged 298 words compared to 314 for males. Female letters more frequently referred to “compassion”, “empathy”, and “inclusivity”; whereas male letters referred to “respect”, “reputation”, and “skill”.
These findings highlight the gender bias present in promotion letters generated by ChatGPT, reiterating existing literature regarding real letters of recommendation in academic surgery. Our study suggests that surgeons should use AI tools, such as ChatGPT, with caution when writing LORs for academic surgery faculty promotion.</description><identifier>ISSN: 1931-7204</identifier><identifier>ISSN: 1878-7452</identifier><identifier>EISSN: 1878-7452</identifier><identifier>DOI: 10.1016/j.jsurg.2024.07.023</identifier><identifier>PMID: 39232303</identifier><language>eng</language><publisher>United States: Elsevier Inc</publisher><subject>academic promotions ; artificial intelligence ; Career Mobility ; ChatGPT ; Correspondence as Topic ; Faculty, Medical ; Female ; Gender disparities in medicine ; General Surgery - education ; Humans ; implicit bias ; letters of recommendation ; Male ; Personnel Selection ; Sexism</subject><ispartof>Journal of surgical education, 2024-11, Vol.81 (11), p.1553-1557</ispartof><rights>2024 Association of Program Directors in Surgery</rights><rights>Copyright © 2024 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>false</woscitedreferencessubscribed><cites>FETCH-LOGICAL-c239t-9538e47fdd6fcc9e126588b36d8ab209d28d7cc5bc2ad2a7c7e9f411392f66053</cites><orcidid>0000-0003-4252-7065 ; 0000-0002-5708-2022 ; 0000-0002-6509-789X ; 0000-0002-3327-2294 ; 0009-0008-0937-7978</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://dx.doi.org/10.1016/j.jsurg.2024.07.023$$EHTML$$P50$$Gelsevier$$H</linktohtml><link.rule.ids>314,780,784,3548,27923,27924,45994</link.rule.ids><backlink>$$Uhttps://www.ncbi.nlm.nih.gov/pubmed/39232303$$D View this record in MEDLINE/PubMed$$Hfree_for_read</backlink></links><search><creatorcontrib>Desai, Pooja</creatorcontrib><creatorcontrib>Wang, Hao</creatorcontrib><creatorcontrib>Davis, Lindy</creatorcontrib><creatorcontrib>Ullmann, Timothy M.</creatorcontrib><creatorcontrib>DiBrito, Sandra R.</creatorcontrib><title>Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions</title><title>Journal of surgical education</title><addtitle>J Surg Educ</addtitle><description>•Implicit bias is well documented in medicine, specifically surgery.•Implicit bias, specifically gender inequities, have been previously seen in letters of recommendation.•Artificial intelligence large language models, such as chatGPT, echo existing gender inequities when asked to write letters of recommendation for academic promotion in surgery.
Gender inequities persist in academic surgery with implicit bias impacting hiring and promotion at all levels. We hypothesized that creating letters of recommendation for both female and male candidates for academic promotion in surgery using an AI platform, ChatGPT, would elucidate the entrained gender biases already present in the promotion process.
Using ChatGPT, we generated 6 letters of recommendation for “a phenomenal surgeon applying for job promotion to associate professor position”, specifying “female” or “male” before surgeon in the prompt. We compared 3 “female” letters to 3 “male” letters for differences in length, language, and tone.
The letters written for females averaged 298 words compared to 314 for males. Female letters more frequently referred to “compassion”, “empathy”, and “inclusivity”; whereas male letters referred to “respect”, “reputation”, and “skill”.
These findings highlight the gender bias present in promotion letters generated by ChatGPT, reiterating existing literature regarding real letters of recommendation in academic surgery. Our study suggests that surgeons should use AI tools, such as ChatGPT, with caution when writing LORs for academic surgery faculty promotion.</description><subject>academic promotions</subject><subject>artificial intelligence</subject><subject>Career Mobility</subject><subject>ChatGPT</subject><subject>Correspondence as Topic</subject><subject>Faculty, Medical</subject><subject>Female</subject><subject>Gender disparities in medicine</subject><subject>General Surgery - education</subject><subject>Humans</subject><subject>implicit bias</subject><subject>letters of recommendation</subject><subject>Male</subject><subject>Personnel Selection</subject><subject>Sexism</subject><issn>1931-7204</issn><issn>1878-7452</issn><issn>1878-7452</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>EIF</sourceid><recordid>eNp9kFtLwzAUgIMobk5_gSB59KU1l7ZpBB906BwMHDh9DVlyqhlruyWtsH9v66aPPp3D4Tu3D6FLSmJKaHazileh9R8xIyyJiYgJ40doSHORRyJJ2XGXS04jwUgyQGchrAhJE8nkKRpwyTjjhA_R-4PTAc_Bb6BpdQMB94VbPP7UzWS-wDPQvgp4ApUFj6cVbFvXuA5zFb432kLpDH7tzgC_w3Nfl3Xj6iqco5NCrwNcHOIIvT09LsbP0exlMh3fzyLDuGwimfIcElFYmxXGSKAsS_N8yTOb6yUj0rLcCmPSpWHaMi2MAFkklHb3F1lGUj5C1_u5G19vWwiNKl0wsF7rCuo2KE4JkTShaY_yPWp8HYKHQm28K7XfKUpUL1St1I9Q1QtVRKhOaNd1dVjQLkuwfz2_Bjvgbg9A9-aXA6-CcVAZsM6DaZSt3b8LvgFTlYfU</recordid><startdate>202411</startdate><enddate>202411</enddate><creator>Desai, Pooja</creator><creator>Wang, Hao</creator><creator>Davis, Lindy</creator><creator>Ullmann, Timothy M.</creator><creator>DiBrito, Sandra R.</creator><general>Elsevier Inc</general><scope>CGR</scope><scope>CUY</scope><scope>CVF</scope><scope>ECM</scope><scope>EIF</scope><scope>NPM</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7X8</scope><orcidid>https://orcid.org/0000-0003-4252-7065</orcidid><orcidid>https://orcid.org/0000-0002-5708-2022</orcidid><orcidid>https://orcid.org/0000-0002-6509-789X</orcidid><orcidid>https://orcid.org/0000-0002-3327-2294</orcidid><orcidid>https://orcid.org/0009-0008-0937-7978</orcidid></search><sort><creationdate>202411</creationdate><title>Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions</title><author>Desai, Pooja ; Wang, Hao ; Davis, Lindy ; Ullmann, Timothy M. ; DiBrito, Sandra R.</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c239t-9538e47fdd6fcc9e126588b36d8ab209d28d7cc5bc2ad2a7c7e9f411392f66053</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>academic promotions</topic><topic>artificial intelligence</topic><topic>Career Mobility</topic><topic>ChatGPT</topic><topic>Correspondence as Topic</topic><topic>Faculty, Medical</topic><topic>Female</topic><topic>Gender disparities in medicine</topic><topic>General Surgery - education</topic><topic>Humans</topic><topic>implicit bias</topic><topic>letters of recommendation</topic><topic>Male</topic><topic>Personnel Selection</topic><topic>Sexism</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Desai, Pooja</creatorcontrib><creatorcontrib>Wang, Hao</creatorcontrib><creatorcontrib>Davis, Lindy</creatorcontrib><creatorcontrib>Ullmann, Timothy M.</creatorcontrib><creatorcontrib>DiBrito, Sandra R.</creatorcontrib><collection>Medline</collection><collection>MEDLINE</collection><collection>MEDLINE (Ovid)</collection><collection>MEDLINE</collection><collection>MEDLINE</collection><collection>PubMed</collection><collection>CrossRef</collection><collection>MEDLINE - Academic</collection><jtitle>Journal of surgical education</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Desai, Pooja</au><au>Wang, Hao</au><au>Davis, Lindy</au><au>Ullmann, Timothy M.</au><au>DiBrito, Sandra R.</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions</atitle><jtitle>Journal of surgical education</jtitle><addtitle>J Surg Educ</addtitle><date>2024-11</date><risdate>2024</risdate><volume>81</volume><issue>11</issue><spage>1553</spage><epage>1557</epage><pages>1553-1557</pages><issn>1931-7204</issn><issn>1878-7452</issn><eissn>1878-7452</eissn><abstract>•Implicit bias is well documented in medicine, specifically surgery.•Implicit bias, specifically gender inequities, have been previously seen in letters of recommendation.•Artificial intelligence large language models, such as chatGPT, echo existing gender inequities when asked to write letters of recommendation for academic promotion in surgery.
Gender inequities persist in academic surgery with implicit bias impacting hiring and promotion at all levels. We hypothesized that creating letters of recommendation for both female and male candidates for academic promotion in surgery using an AI platform, ChatGPT, would elucidate the entrained gender biases already present in the promotion process.
Using ChatGPT, we generated 6 letters of recommendation for “a phenomenal surgeon applying for job promotion to associate professor position”, specifying “female” or “male” before surgeon in the prompt. We compared 3 “female” letters to 3 “male” letters for differences in length, language, and tone.
The letters written for females averaged 298 words compared to 314 for males. Female letters more frequently referred to “compassion”, “empathy”, and “inclusivity”; whereas male letters referred to “respect”, “reputation”, and “skill”.
These findings highlight the gender bias present in promotion letters generated by ChatGPT, reiterating existing literature regarding real letters of recommendation in academic surgery. Our study suggests that surgeons should use AI tools, such as ChatGPT, with caution when writing LORs for academic surgery faculty promotion.</abstract><cop>United States</cop><pub>Elsevier Inc</pub><pmid>39232303</pmid><doi>10.1016/j.jsurg.2024.07.023</doi><tpages>5</tpages><orcidid>https://orcid.org/0000-0003-4252-7065</orcidid><orcidid>https://orcid.org/0000-0002-5708-2022</orcidid><orcidid>https://orcid.org/0000-0002-6509-789X</orcidid><orcidid>https://orcid.org/0000-0002-3327-2294</orcidid><orcidid>https://orcid.org/0009-0008-0937-7978</orcidid></addata></record> |
fulltext | fulltext |
identifier | ISSN: 1931-7204 |
ispartof | Journal of surgical education, 2024-11, Vol.81 (11), p.1553-1557 |
issn | 1931-7204 1878-7452 1878-7452 |
language | eng |
recordid | cdi_proquest_miscellaneous_3100914155 |
source | MEDLINE; ScienceDirect Journals (5 years ago - present) |
subjects | academic promotions artificial intelligence Career Mobility ChatGPT Correspondence as Topic Faculty, Medical Female Gender disparities in medicine General Surgery - education Humans implicit bias letters of recommendation Male Personnel Selection Sexism |
title | Bias Perpetuates Bias: ChatGPT Learns Gender Inequities in Academic Surgery Promotions |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-12T12%3A34%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Bias%20Perpetuates%20Bias:%20ChatGPT%20Learns%20Gender%20Inequities%20in%20Academic%20Surgery%20Promotions&rft.jtitle=Journal%20of%20surgical%20education&rft.au=Desai,%20Pooja&rft.date=2024-11&rft.volume=81&rft.issue=11&rft.spage=1553&rft.epage=1557&rft.pages=1553-1557&rft.issn=1931-7204&rft.eissn=1878-7452&rft_id=info:doi/10.1016/j.jsurg.2024.07.023&rft_dat=%3Cproquest_cross%3E3100914155%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3100914155&rft_id=info:pmid/39232303&rft_els_id=S1931720424003465&rfr_iscdi=true |