Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm

The Partial Area Under the ROC Curve (PAUC), typically including One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC), measures the average performance of a binary classifier within a specific false positive rate and/or true positive rate interval, and is a widely adopted measure when decision constraints must be considered. Consequently, PAUC optimization has naturally attracted increasing attention in the machine learning community within the last few years. Nonetheless, most existing methods can only optimize PAUC approximately, leading to inevitable biases that are not controllable. Fortunately, a recent work presents an unbiased formulation of the PAUC optimization problem via distributional robust optimization. However, it is based on the pair-wise formulation of AUC, which suffers from limited scalability w.r.t. sample size and a slow convergence rate, especially for TPAUC. To address this issue, we present a simpler reformulation of the problem in an asymptotically unbiased and instance-wise manner. For both OPAUC and TPAUC, we arrive at a nonconvex strongly concave minimax regularized problem of instance-wise functions. On top of this, we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time complexity of $O(\epsilon^{-1/3})$ to reach an $\epsilon$-stationary point. Furthermore, we find that the minimax reformulation also facilitates the theoretical analysis of generalization error as a byproduct. Compared with the existing results, we present new error bounds that are much easier to prove and can deal with hypotheses with real-valued outputs. Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method.
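The instance-wise reformulation described in the abstract rests on the fact that certain pairwise AUC surrogates decompose into per-instance statistics. As a minimal illustration (a squared-loss surrogate for the full AUC with made-up scores, not the paper's exact partial-AUC objective): the average pairwise loss over all positive-negative score pairs equals an expression built only from per-class means and population variances, so it can be evaluated in linear rather than quadratic time in the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.5, size=200)   # scores of positive samples (synthetic)
neg = rng.normal(0.0, 0.5, size=300)   # scores of negative samples (synthetic)

# Pairwise squared surrogate: averages over all n+ * n- score pairs.
pairwise = np.mean((1.0 - (pos[:, None] - neg[None, :])) ** 2)

# Instance-wise equivalent: needs only per-class means and (population)
# variances, i.e. a single O(n+ + n-) pass over the data.
instancewise = pos.var() + neg.var() + (1.0 - (pos.mean() - neg.mean())) ** 2

print(abs(pairwise - instancewise) < 1e-10)  # the two computations coincide
```

Expanding $(1 - (x - y))^2$ and averaging over independent draws of $x$ and $y$ gives exactly the variance-plus-squared-margin form used above, which is why the quadratic double sum collapses to linear-time statistics.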

Detailed Description

Saved in:
Bibliographic Details
Main authors: Shao, Huiyang, Xu, Qianqian, Yang, Zhiyong, Bao, Shilong, Huang, Qingming
Format: Article
Language: eng
Subjects:
Online access: Order full text
container_end_page
container_issue
container_start_page
container_title
container_volume
creator Shao, Huiyang
Xu, Qianqian
Yang, Zhiyong
Bao, Shilong
Huang, Qingming
description The Partial Area Under the ROC Curve (PAUC), typically including One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC), measures the average performance of a binary classifier within a specific false positive rate and/or true positive rate interval, and is a widely adopted measure when decision constraints must be considered. Consequently, PAUC optimization has naturally attracted increasing attention in the machine learning community within the last few years. Nonetheless, most existing methods can only optimize PAUC approximately, leading to inevitable biases that are not controllable. Fortunately, a recent work presents an unbiased formulation of the PAUC optimization problem via distributional robust optimization. However, it is based on the pair-wise formulation of AUC, which suffers from limited scalability w.r.t. sample size and a slow convergence rate, especially for TPAUC. To address this issue, we present a simpler reformulation of the problem in an asymptotically unbiased and instance-wise manner. For both OPAUC and TPAUC, we arrive at a nonconvex strongly concave minimax regularized problem of instance-wise functions. On top of this, we employ an efficient solver that enjoys a linear per-iteration computational complexity w.r.t. the sample size and a time complexity of $O(\epsilon^{-1/3})$ to reach an $\epsilon$-stationary point. Furthermore, we find that the minimax reformulation also facilitates the theoretical analysis of generalization error as a byproduct. Compared with the existing results, we present new error bounds that are much easier to prove and can deal with hypotheses with real-valued outputs. Finally, extensive experiments on several benchmark datasets demonstrate the effectiveness of our method.
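The nonconvex strongly concave minimax structure mentioned in the description is typically attacked with alternating stochastic descent-ascent updates. The sketch below is a generic single-loop gradient descent-ascent on a made-up logistic objective with a quadratic regularizer on a scalar dual variable `a` (so the inner problem is strongly concave); it is not the paper's solver, which attains the stated $O(\epsilon^{-1/3})$ rate, but it shows the alternating update pattern and the per-iteration cost linear in the minibatch size.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic linearly separable data for the toy objective
#   min_w max_a  a * mean_i loss_i(w) - 0.5 * a**2,
# whose inner maximization is strongly concave in a (a* = mean loss).
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(float) * 2 - 1

def losses(w):
    # Per-sample logistic loss log(1 + exp(-y * <w, x>)).
    return np.log1p(np.exp(-y * (X @ w)))

def grad_w(w, a, idx):
    # Minibatch gradient of a * mean loss w.r.t. w.
    m = -y[idx] / (1.0 + np.exp(y[idx] * (X[idx] @ w)))
    return a * (X[idx] * m[:, None]).mean(axis=0)

w, a = np.zeros(5), 0.0
for _ in range(2000):
    idx = rng.integers(0, 500, size=32)        # minibatch of indices
    a += 0.1 * (losses(w)[idx].mean() - a)     # ascent step on the dual a
    w -= 0.05 * grad_w(w, a, idx)              # descent step on the primal w

# Training should reduce the mean loss below its value at w = 0 (log 2).
print(losses(w).mean() < losses(np.zeros(5)).mean())
```

The single-loop pattern (one ascent step, one descent step per iteration) is what makes the per-iteration cost depend only on the minibatch, in contrast to pairwise formulations whose gradients couple every positive with every negative sample.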
doi_str_mv 10.48550/arxiv.2210.03967
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2210.03967
ispartof
issn
language eng
recordid cdi_arxiv_primary_2210_03967
source arXiv.org
subjects Computer Science - Learning
title Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T11%3A08%3A16IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Asymptotically%20Unbiased%20Instance-wise%20Regularized%20Partial%20AUC%20Optimization:%20Theory%20and%20Algorithm&rft.au=Shao,%20Huiyang&rft.date=2022-10-08&rft_id=info:doi/10.48550/arxiv.2210.03967&rft_dat=%3Carxiv_GOX%3E2210_03967%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true