Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence
NDCG, namely Normalized Discounted Cumulative Gain, is a widely used ranking metric in information retrieval and machine learning. However, efficient and provable stochastic methods for maximizing NDCG are still lacking, especially for deep models. In this paper, we propose a principled approach to...
Saved in:
Published in: | arXiv.org 2023-02 |
---|---|
Main authors: | Zi-Hao Qiu; Hu, Quanqi; Zhong, Yongjian; Zhang, Lijun; Yang, Tianbao |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Zi-Hao Qiu; Hu, Quanqi; Zhong, Yongjian; Zhang, Lijun; Yang, Tianbao |
description | NDCG, namely Normalized Discounted Cumulative Gain, is a widely used ranking metric in information retrieval and machine learning. However, efficient and provable stochastic methods for maximizing NDCG are still lacking, especially for deep models. In this paper, we propose a principled approach to optimize NDCG and its top-\(K\) variant. First, we formulate a novel compositional optimization problem for optimizing the NDCG surrogate, and a novel bilevel compositional optimization problem for optimizing the top-\(K\) NDCG surrogate. Then, we develop efficient stochastic algorithms with provable convergence guarantees for the non-convex objectives. Different from existing NDCG optimization methods, the per-iteration complexity of our algorithms scales with the mini-batch size instead of the number of total items. To improve the effectiveness for deep learning, we further propose practical strategies by using initial warm-up and stop gradient operator. Experimental results on multiple datasets demonstrate that our methods outperform prior ranking approaches in terms of NDCG. To the best of our knowledge, this is the first time that stochastic algorithms are proposed to optimize NDCG with a provable convergence guarantee. Our proposed methods are implemented in the LibAUC library at https://libauc.org/. |
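For readers of this record, the following is the standard definition of the metric named in the abstract, NDCG at a cutoff \(K\); the notation below (\(q\), \(\pi_q\), \(y\)) is generic textbook notation and is not taken from the paper, whose contribution is a differentiable surrogate of this quantity together with provably convergent stochastic algorithms for optimizing it.

\[
\mathrm{DCG}_K(q) \;=\; \sum_{i=1}^{K} \frac{2^{y_{\pi_q(i)}} - 1}{\log_2(i+1)},
\qquad
\mathrm{NDCG}_K(q) \;=\; \frac{\mathrm{DCG}_K(q)}{\mathrm{IDCG}_K(q)},
\]

where \(\pi_q(i)\) is the item the model ranks at position \(i\) for query \(q\), \(y_{\pi_q(i)}\) is that item's relevance label, and \(\mathrm{IDCG}_K(q)\) is the \(\mathrm{DCG}_K\) value of the ideal (relevance-sorted) ranking, so that \(\mathrm{NDCG}_K(q) \in [0,1]\).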
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-02 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2633110537 |
source | Free E-Journals |
subjects | Algorithms; Convergence; Deep learning; Information retrieval; Iterative methods; Machine learning; Optimization; Ranking |
title | Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T00%3A51%3A15IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Large-scale%20Stochastic%20Optimization%20of%20NDCG%20Surrogates%20for%20Deep%20Learning%20with%20Provable%20Convergence&rft.jtitle=arXiv.org&rft.au=Zi-Hao%20Qiu&rft.date=2023-02-02&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2633110537%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2633110537&rft_id=info:pmid/&rfr_iscdi=true |