Towards Fairness in Classifying Medical Conversations into SOAP Sections

As machine learning algorithms are more widely deployed in healthcare, the question of algorithmic fairness becomes more critical to examine. Our work seeks to identify and understand disparities in a deployed model that classifies doctor-patient conversations into sections of a medical SOAP note. We employ several metrics to measure disparities in the classifier performance, and find small differences in a portion of the disadvantaged groups. A deeper analysis of the language in these conversations and further stratifying the groups suggests these differences are related to and often attributable to the type of medical appointment (e.g., psychiatric vs. internist). Our findings stress the importance of understanding the disparities that may exist in the data itself and how that affects a model's ability to equally distribute benefits.
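
The abstract describes measuring disparities in classifier performance across groups and then stratifying those groups by appointment type. The paper's own code is not part of this record; the listing below is a minimal illustrative sketch of that kind of analysis, assuming scikit-learn and pandas, with hypothetical column names ("gold_section", "predicted_section", "appointment_type") and a hypothetical demographic grouping column.

# Illustrative sketch only (not the paper's code): per-group macro-F1
# gaps for a SOAP-section classifier. Assumes a pandas DataFrame with
# columns "gold_section" (true SOAP section), "predicted_section"
# (model output), a demographic group column, and "appointment_type"
# (e.g., psychiatric, internist).
import pandas as pd
from sklearn.metrics import f1_score


def group_f1_gaps(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Macro-F1 per group and its gap relative to the overall macro-F1."""
    overall = f1_score(df["gold_section"], df["predicted_section"], average="macro")
    rows = []
    for group, sub in df.groupby(group_col):
        f1 = f1_score(sub["gold_section"], sub["predicted_section"], average="macro")
        rows.append({group_col: group, "n": len(sub),
                     "macro_f1": f1, "gap_vs_overall": f1 - overall})
    return pd.DataFrame(rows).sort_values("gap_vs_overall")


def gaps_by_appointment_type(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Repeat the per-group comparison within each appointment type, to see
    whether apparent group gaps track the type of visit instead."""
    parts = [
        group_f1_gaps(sub, group_col).assign(appointment_type=atype)
        for atype, sub in df.groupby("appointment_type")
    ]
    return pd.concat(parts, ignore_index=True)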

Bibliographic Details
Main authors: Ferracane, Elisa; Konam, Sandeep
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Computers and Society
Online access: Full text at https://arxiv.org/abs/2012.07749
DOI: 10.48550/arxiv.2012.07749
Date: 2020-12-02
Source: arXiv.org