Image fairness in deep learning: problems, models, and challenges
In recent years, it has been revealed that machine learning models can produce discriminatory predictions. Hence, fairness protection has come to play a pivotal role in machine learning. In the past, most studies on fairness protection have used traditional machine learning methods to enforce fairness. However, these studies focus on low dimensional inputs, such as numerical inputs, whereas more recent deep learning technologies have encouraged fairness protection with image inputs through deep model methods. These approaches involve various object functions and structural designs that break the spurious correlations between targets and sensitive features. With these connections broken, we are left with fairer predictions. To better understand the proposed methods and encourage further development in the field, this paper summarizes fairness protection methods in terms of three aspects: the problem settings, the models, and the challenges. Through this survey, we hope to reveal research trends in the field, discover the fundamentals of enforcing fairness, and summarize the main challenges to producing fairer models.
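The abstract's goal of breaking spurious correlations between targets and sensitive features is commonly evaluated with group fairness metrics. As a minimal illustration (not the paper's own method; function and variable names are hypothetical), the demographic parity gap measures how differently a classifier treats two groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred    : binary predictions (0/1) from a classifier
    sensitive : binary sensitive attribute (e.g. a protected-group flag)
    A gap of 0 means the classifier satisfies demographic parity.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate in group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# A predictor that ignores the sensitive attribute can achieve a zero gap:
preds = [1, 0, 1, 0]
groups = [0, 0, 1, 1]
print(demographic_parity_gap(preds, groups))  # -> 0.0
```

Fairness-protection objectives of the kind the survey covers typically add a penalty or adversarial head that drives such a gap toward zero during training.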
Saved in:
Published in: Neural computing & applications, 2022-08, Vol. 34 (15), p. 12875-12893
Main authors: Tian, Huan; Zhu, Tianqing; Liu, Wei; Zhou, Wanlei
Format: Article
Language: English
Subjects:
Online access: Full text
Creators: Tian, Huan; Zhu, Tianqing; Liu, Wei; Zhou, Wanlei
Description: In recent years, it has been revealed that machine learning models can produce discriminatory predictions. Hence, fairness protection has come to play a pivotal role in machine learning. In the past, most studies on fairness protection have used traditional machine learning methods to enforce fairness. However, these studies focus on low dimensional inputs, such as numerical inputs, whereas more recent deep learning technologies have encouraged fairness protection with image inputs through deep model methods. These approaches involve various object functions and structural designs that break the spurious correlations between targets and sensitive features. With these connections broken, we are left with fairer predictions. To better understand the proposed methods and encourage further development in the field, this paper summarizes fairness protection methods in terms of three aspects: the problem settings, the models, and the challenges. Through this survey, we hope to reveal research trends in the field, discover the fundamentals of enforcing fairness, and summarize the main challenges to producing fairer models.
DOI: 10.1007/s00521-022-07136-1
Publisher: Springer London
Rights: The Author(s) 2022. Published under a Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/).
ISSN: 0941-0643; EISSN: 1433-3058
Source: Springer Online Journals Complete
Subjects: Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Deep learning; Image Processing and Computer Vision; Machine learning; Original Article; Probability and Statistics in Computer Science