Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction
In machine learning systems, privileged features refer to the features that are available during offline training but inaccessible for online serving. Previous studies have recognized the importance of privileged features and explored ways to tackle online-offline discrepancies. A typical practice is privileged features distillation (PFD)…
Saved in:
Main authors: | Gui, Xiaoqiang; Cheng, Yueyao; Sheng, Xiang-Rong; Zhao, Yunfeng; Yu, Guoxian; Han, Shuguang; Jiang, Yuning; Xu, Jian; Zheng, Bo |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Information Retrieval |
Online access: | Order full text |
creator | Gui, Xiaoqiang; Cheng, Yueyao; Sheng, Xiang-Rong; Zhao, Yunfeng; Yu, Guoxian; Han, Shuguang; Jiang, Yuning; Xu, Jian; Zheng, Bo |
description | In machine learning systems, privileged features refer to the features that
are available during offline training but inaccessible for online serving.
Previous studies have recognized the importance of privileged features and
explored ways to tackle online-offline discrepancies. A typical practice is
privileged features distillation (PFD): train a teacher model using all
features (including privileged ones) and then distill the knowledge from the
teacher model using a student model (excluding the privileged features), which
is then employed for online serving. In practice, the pointwise cross-entropy
loss is often adopted for PFD. However, this loss is insufficient to distill
the ranking ability for CTR prediction. First, it does not consider the
non-i.i.d. characteristic of the data distribution, i.e., other items on the
same page significantly impact the click probability of the candidate item.
Second, it fails to consider the relative item order ranked by the teacher
model's predictions, which is essential to distill the ranking ability. To
address these issues, we first extend the pointwise-based PFD to the
listwise-based PFD. We then define the calibration-compatible property of
distillation loss and show that commonly used listwise losses do not satisfy
this property when employed as distillation loss, thus compromising the model's
calibration ability, which is another important measure for CTR prediction. To
tackle this dilemma, we propose Calibration-compatible LIstwise Distillation
(CLID), which employs a carefully designed listwise distillation loss to achieve
better ranking ability than pointwise-based PFD while preserving the
model's calibration ability. We theoretically prove that it is
calibration-compatible. Extensive experiments on public datasets and a
production dataset collected from the display advertising system of Alibaba
further demonstrate the effectiveness of CLID. |
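To make the distillation setup in the abstract concrete, the following is a minimal, self-contained PyTorch sketch of the pointwise PFD baseline that the abstract contrasts CLID against: a teacher trained with privileged features, and a student trained on regular features only that fits both the click labels and the teacher's predicted click probabilities. Every name here (MLP, pointwise_pfd_loss, alpha, the feature dimensions) is an illustrative assumption; the paper's actual models and its calibration-compatible listwise loss are not reproduced.

```python
# Illustrative sketch of pointwise privileged features distillation (PFD),
# NOT the paper's CLID loss. Names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Toy CTR model: concatenated features -> click logit."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # per-item logits


def pointwise_pfd_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Hard-label cross-entropy plus a pointwise cross-entropy term that
    pulls the student toward the teacher's predicted click probabilities."""
    hard = F.binary_cross_entropy_with_logits(student_logits, labels)
    soft = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits).detach()
    )
    return (1 - alpha) * hard + alpha * soft


# Usage on one page of candidate items (all shapes are illustrative):
regular = torch.randn(8, 16)       # features available at serving time
privileged = torch.randn(8, 4)     # offline-only (privileged) features
labels = torch.randint(0, 2, (8,)).float()

teacher = MLP(16 + 4)              # trained with regular + privileged features
student = MLP(16)                  # serving model, regular features only

t_logits = teacher(torch.cat([regular, privileged], dim=-1))
s_logits = student(regular)
loss = pointwise_pfd_loss(s_logits, t_logits, labels)
loss.backward()                    # only the student receives gradients
```

A listwise variant would instead compare the student's and teacher's score distributions over the items shown on the same page (for example, via a softmax over the per-page logits). The abstract's point is that common listwise losses, used directly for distillation, can distort the model's calibrated click probabilities; that is the gap CLID is designed to close.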
doi_str_mv | 10.48550/arxiv.2312.08727 |
format | Article |
fullrecord | (machine-readable Primo record; duplicates the title, creators, and abstract above) |
creationdate | 2023-12-14 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2312.08727 |
language | eng |
recordid | cdi_arxiv_primary_2312_08727 |
source | arXiv.org |
subjects | Computer Science - Information Retrieval |
title | Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T21%3A39%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Calibration-compatible%20Listwise%20Distillation%20of%20Privileged%20Features%20for%20CTR%20Prediction&rft.au=Gui,%20Xiaoqiang&rft.date=2023-12-14&rft_id=info:doi/10.48550/arxiv.2312.08727&rft_dat=%3Carxiv_GOX%3E2312_08727%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |