Learning bivariate scoring functions for ranking
State-of-the-art Learning-to-Rank algorithms, e.g., λMART, rely on univariate scoring functions to score a list of items. Univariate scoring functions score each item independently, i.e., without considering the other items in the list. Nevertheless, ranking deals with producing an effective ordering of the items, and comparisons between items help achieve this task.
Saved in:
Published in: | Discover Computing 2024-09, Vol.27 (1), p.33, Article 33 |
---|---|
Main authors: | Nardini, Franco Maria; Trani, Roberto; Venturini, Rossano |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
---|---|
creator | Nardini, Franco Maria; Trani, Roberto; Venturini, Rossano |
description | State-of-the-art Learning-to-Rank algorithms, e.g., λMART, rely on univariate scoring functions to score a list of items. Univariate scoring functions score each item independently, i.e., without considering the other items in the list. Nevertheless, ranking deals with producing an effective ordering of the items, and comparisons between items help achieve this task. Bivariate scoring functions allow the model to exploit dependencies between the items in the list, as they work by scoring pairs of items. In this paper, we exploit item dependencies in a novel framework—we call it the Lambda Bivariate (LB) framework—that allows learning effective bivariate scoring functions for ranking using gradient boosting trees. We discuss the three main ingredients of LB: (i) the invariance-to-permutations property, (ii) the function aggregating the scores of all pairs into per-item scores, and (iii) the optimization process to learn bivariate scoring functions for ranking using any differentiable loss function. We apply LB to the λRank loss and show that it results in learning a bivariate version of λMART—we call it Bi-λMART—that significantly outperforms all neural-network-based and tree-based state-of-the-art algorithms for Learning-to-Rank. To show the generality of LB with respect to other loss functions, we also discuss its application to the Softmax loss. |
doi_str_mv | 10.1007/s10791-024-09444-7 |
format | Article |
published | 2024-09-27 |
publisher | Dordrecht: Springer Netherlands |
rights | The Author(s) 2024 |
fulltext | fulltext |
identifier | ISSN: 2948-2992 |
ispartof | Discover Computing, 2024-09, Vol.27 (1), p.33, Article 33 |
issn | 2948-2992 1386-4564 2948-2992 1573-7659 |
language | eng |
recordid | cdi_proquest_journals_3110560825 |
source | Springer Nature - Complete Springer Journals; Alma/SFX Local Collection |
subjects | Algorithms; Bivariate analysis; Computer Science; Data Mining and Knowledge Discovery; Data Structures and Information Theory; Information Storage and Retrieval; Machine learning; Natural Language Processing (NLP); Neural networks; Pattern Recognition; Permutations; Ranking |
title | Learning bivariate scoring functions for ranking |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T03%3A19%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20bivariate%20scoring%20functions%20for%20ranking&rft.jtitle=Discover%20Computing&rft.au=Nardini,%20Franco%20Maria&rft.date=2024-09-27&rft.volume=27&rft.issue=1&rft.spage=33&rft.pages=33-&rft.artnum=33&rft.issn=2948-2992&rft.eissn=2948-2992&rft_id=info:doi/10.1007/s10791-024-09444-7&rft_dat=%3Cproquest_cross%3E3110560825%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3110560825&rft_id=info:pmid/&rfr_iscdi=true |
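The aggregation step described in the abstract (ingredient ii of the LB framework: turning the scores of all item pairs into per-item scores) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the `pair_scorer` callable stands in for the learned gradient-boosted model, and the mean aggregation is an assumption. Because the aggregation is symmetric over the other items, each item's score does not depend on the order in which the list is presented, which is the invariance-to-permutations property the abstract mentions.

```python
import numpy as np

def bivariate_scores(items, pair_scorer, aggregate=np.mean):
    """Score each item by aggregating the scores of all pairs it
    participates in. `pair_scorer` maps two feature vectors to a
    scalar (a hypothetical stand-in for a learned bivariate model)."""
    n = len(items)
    per_item = np.zeros(n)
    for i in range(n):
        # Compare item i against every other item in the list,
        # then collapse the pair scores with a symmetric aggregator.
        pair_vals = [pair_scorer(items[i], items[j]) for j in range(n) if j != i]
        per_item[i] = aggregate(pair_vals)
    return per_item

# Toy pair scorer: difference of a single feature (illustrative only).
items = [np.array([3.0]), np.array([1.0]), np.array([2.0])]
scorer = lambda a, b: float(a[0] - b[0])
scores = bivariate_scores(items, scorer)   # [1.5, -1.5, 0.0]
ranking = np.argsort(-scores)              # descending scores give the ranking
```

With this toy scorer, the item with the largest feature value dominates every pairwise comparison and is ranked first; permuting the input list permutes the scores accordingly but never changes an item's own score.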