Melting contestation: insurance fairness and machine learning
Saved in:

| Published in: | Ethics and information technology, 2023-12, Vol. 25 (4), p. 49, Article 49 |
|---|---|
| Main authors: | Barry, Laurence; Charpentier, Arthur |
| Format: | Article |
| Language: | English |
| Subjects: | Algorithms; Bias; Big Data; Computer Science; Ethics; Innovation/Technology Management; Insurance; Library Science; Machine learning; Management of Computing and Information Systems; Original Paper; Stereotypes; User Interfaces and Human Computer Interaction |
| Online access: | Full text |
| Field | Value |
|---|---|
| container_issue | 4 |
| container_start_page | 49 |
| container_title | Ethics and information technology |
| container_volume | 25 |
| creator | Barry, Laurence; Charpentier, Arthur |
| description | With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that they were organized along three types of bias: pure stereotypes, non-causal correlations, and causal effects that a society chooses to protect against. These are thus the main sources of dispute. The lens of this typology then allows us to look anew at the potential biases in insurance pricing implied by big data and machine learning, showing that, despite utopian claims, social stereotypes continue to plague data and thus threaten to unconsciously reproduce these discriminations in insurance. To counter these effects, algorithmic fairness attempts to define mathematical indicators of non-bias. We argue that this may prove insufficient, since it assumes the existence of specific protected groups, which could only be made visible through public debate and contestation. These are less likely if the right to explanation is realized through personalized algorithms, which could reinforce the individualized perception of the social that blocks rather than encourages collective mobilization. |
| doi_str_mv | 10.1007/s10676-023-09720-y |
| format | Article |
| publisher | Dordrecht: Springer Netherlands |
| orcid | 0000-0002-4771-2588; 0000-0003-3654-6286 |
| fulltext | fulltext |
| identifier | ISSN: 1388-1957 |
| ispartof | Ethics and information technology, 2023-12, Vol. 25 (4), p. 49, Article 49 |
| issn | 1388-1957; 1572-8439 |
| language | eng |
| recordid | cdi_proquest_journals_2866558586 |
| source | SpringerLink Journals - AutoHoldings |
| subjects | Algorithms; Bias; Big Data; Computer Science; Ethics; Innovation/Technology Management; Insurance; Library Science; Machine learning; Management of Computing and Information Systems; Original Paper; Stereotypes; User Interfaces and Human Computer Interaction |
| title | Melting contestation: insurance fairness and machine learning |
| url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-11T10%3A27%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Melting%20contestation:%20insurance%20fairness%20and%20machine%20learning&rft.jtitle=Ethics%20and%20information%20technology&rft.au=Barry,%20Laurence&rft.date=2023-12-01&rft.volume=25&rft.issue=4&rft.spage=49&rft.pages=49-&rft.artnum=49&rft.issn=1388-1957&rft.eissn=1572-8439&rft_id=info:doi/10.1007/s10676-023-09720-y&rft_dat=%3Cproquest_cross%3E2866558586%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2866558586&rft_id=info:pmid/&rfr_iscdi=true |
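For readers unfamiliar with the "mathematical indicators of non-bias" mentioned in the abstract above, the sketch below shows one common group-fairness indicator, demographic parity. It is purely illustrative and not taken from the paper: the function name, the toy data, and the binary "favorable decision" coding are all hypothetical. Note that the metric presupposes labeled protected groups, which is precisely the assumption the authors question.

```python
# Illustrative sketch only; the paper defines no code. Demographic parity is
# one standard "mathematical indicator of non-bias" of the kind the abstract
# alludes to. All names and numbers here are hypothetical.
from typing import Sequence


def demographic_parity_gap(predictions: Sequence[int], groups: Sequence[str],
                           group_a: str = "A", group_b: str = "B") -> float:
    """Difference in the rate of favorable predictions (coded 1) between
    two protected groups. A gap of 0 means demographic parity holds."""
    def favorable_rate(g: str) -> float:
        preds_in_group = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds_in_group) / len(preds_in_group)
    return favorable_rate(group_a) - favorable_rate(group_b)


# Hypothetical insurance-pricing decisions: 1 = offered the standard tariff.
preds = [1, 1, 0, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.5 = 0.25
```

The indicator can only be computed once the `groups` labels exist, which illustrates the paper's point: deciding which groups count as protected is a prior, political question that the mathematics does not answer.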