Balancing Learning Model Privacy, Fairness, and Accuracy With Early Stopping Criteria

As deep learning models mature, one of the most prescient questions we face is: what is the ideal tradeoff between accuracy, fairness, and privacy (AFP)? Unfortunately, both the privacy and the fairness of a model come at the cost of its accuracy. Hence, an efficient and effective means of fine-tuning the balance between this trinity of needs is critical. Motivated by some curious observations in privacy-accuracy tradeoffs with differentially private stochastic gradient descent (DP-SGD), where fair models sometimes result, we conjecture that fairness might be better managed as an indirect byproduct of this process. Hence, we conduct a series of analyses, both theoretical and empirical, on the impacts of implementing DP-SGD in deep neural network models through gradient clipping and noise addition. The results show that, in deep learning, the number of training epochs is central to striking a balance between AFP because DP-SGD makes the training less stable, providing the possibility of model updates at a low discrimination level without much loss in accuracy. Based on this observation, we designed two different early stopping criteria to help analysts choose the optimal epoch at which to stop training a model so as to achieve their ideal tradeoff. Extensive experiments show that our methods can achieve an ideal balance between AFP.
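
The abstract centers on two mechanisms: DP-SGD, which privatizes training by clipping each example's gradient and adding calibrated Gaussian noise, and an early-stopping rule that picks the epoch where accuracy, fairness, and privacy balance out. The sketch below illustrates those mechanics on a toy logistic-regression task; the synthetic data, the demographic-parity gap, and the stopping thresholds are illustrative assumptions, not the two criteria the paper actually proposes.

```python
# Minimal DP-SGD sketch with an illustrative early-stopping check.
# Hypothetical setup: logistic regression on synthetic data; the stopping
# rule below is a simplified stand-in, NOT the paper's proposed criteria.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 1000 samples, 5 features, binary label, binary group.
n, d = 1000, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)
group = (X[:, 1] > 0).astype(int)  # hypothetical sensitive attribute

w = np.zeros(d)
lr, clip_norm, noise_mult, batch_size, epochs = 0.1, 1.0, 1.1, 100, 30

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def accuracy(w):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

def dp_gap(w):
    # Demographic-parity gap: difference in positive-prediction rates by group.
    pred = sigmoid(X @ w) > 0.5
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

for epoch in range(epochs):
    for start in range(0, n, batch_size):
        Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        # Per-example gradients of the logistic loss.
        grads = (sigmoid(Xb @ w) - yb)[:, None] * Xb            # shape (B, d)
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)
        # Sum, add calibrated Gaussian noise, then average over the batch.
        noisy = grads.sum(0) + rng.normal(scale=noise_mult * clip_norm, size=d)
        w -= lr * noisy / len(yb)
    acc, gap = accuracy(w), dp_gap(w)
    # Illustrative stopping rule (hypothetical thresholds): stop at the first
    # epoch where accuracy is acceptable and the discrimination level is low.
    if acc > 0.85 and gap < 0.05:
        print(f"stop at epoch {epoch}: acc={acc:.3f}, dp_gap={gap:.3f}")
        break
```

The key DP-SGD detail is that clipping bounds each example's contribution to the update, which in turn calibrates the noise scale; the per-epoch check reflects the paper's observation that the stopping epoch is the lever for the AFP tradeoff.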

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-09, Vol. 34 (9), pp. 5557-5569
Authors: Zhang, Tao; Zhu, Tianqing; Gao, Kun; Zhou, Wanlei; Yu, Philip S.
Format: Article
Language: English
Subjects: Accuracy; Analytical models; Artificial neural networks; Costs; Criteria; Deep learning; differential privacy (DP); early stopping criteria; Empirical analysis; Machine learning; machine learning fairness; Neural networks; Privacy; Stability criteria; stochastic gradient descent; Stochastic processes; Stochasticity; Tradeoffs; Training
Online access: Order full text
Source: IEEE Electronic Library (IEL)
DOI: 10.1109/TNNLS.2021.3129592
ISSN: 2162-237X
EISSN: 2162-2388
PMID: 34878980