COLTR: Semi-supervised Learning to Rank with Co-training and Over-parameterization for Web Search

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2023-12, Vol. 35 (12), p. 1-14
Authors: Li, Yuchen, Xiong, Haoyi, Wang, Qingzhong, Kong, Linghe, Liu, Hao, Li, Haifang, Bian, Jiang, Wang, Shuaiqiang, Chen, Guihai, Dou, Dejing, Yin, Dawei
Format: Article
Language: English
Abstract: While learning to rank (LTR) has been widely used in web search to prioritize the most relevant webpages among the retrieved content for input queries, traditional LTR models fail to deliver decent performance for two main reasons: 1) the lack of well-annotated query-webpage pairs with ranking scores covering search queries of varying popularity, and 2) ill-trained models based on a limited number of training samples, with poor generalization performance. To improve the performance of LTR models, tremendous efforts have been made on the above two aspects, such as enlarging training sets with pseudo-labels of ranking scores via self-training, or refining the features used for LTR through feature extraction and dimension reduction. Though LTR performance has been marginally increased, we believe these methods could be further improved in the newly-fashioned "interpolating regime". Specifically, instead of lowering the number of features used for LTR models, our work proposes to transform the original data with random Fourier features, so as to over-parameterize the downstream LTR models (e.g., GBRank or LightGBM) with features of ultra-high dimensionality and achieve superb generalization performance. Furthermore, rather than self-training with pseudo-labels produced by the same LTR model in a "self-tuned" fashion, the proposed method incorporates the diversity of prediction results between the listwise and pointwise LTR models while co-training both models with a cyclic labeling-prediction pipeline in a "ping-pong" manner. We deploy the proposed Co-trained and Over-parameterized LTR system, COLTR, at Baidu Search and evaluate COLTR against a large number of baseline methods. The results show that COLTR achieves \Delta NDCG_{4} = 3.64\%\sim4.92\% improvements over the baselines under various ratios of labeled samples. We also conduct a 7-day A/B test on realistic web traffic of Baidu Search, where we still observe a significant improvement of around \Delta NDCG_{4} = 0.17\%\sim0.92\% in real-world applications. COLTR performs consistently in both online and offline experiments.
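
As a concrete illustration of the over-parameterization step described in the abstract, the sketch below maps the original LTR features into an ultra-high-dimensional space with random Fourier features (here via scikit-learn's RBFSampler) before fitting a LightGBM ranker. This is only a minimal sketch under assumed data shapes and hyperparameters (feature count, gamma, n_components, relevance-grade range), not the authors' implementation.

```python
# Minimal sketch: over-parameterize LTR features with random Fourier features (RFF)
# before fitting a downstream ranker. Shapes and hyperparameters are illustrative.
import numpy as np
from sklearn.kernel_approximation import RBFSampler  # RFF approximation of an RBF kernel
from lightgbm import LGBMRanker

rng = np.random.RandomState(0)
X = rng.rand(1000, 50)                # 1000 query-webpage pairs, 50 original LTR features
y = rng.randint(0, 5, size=1000)      # graded relevance labels (0-4), assumed range
group = [100] * 10                    # 10 queries with 100 candidate webpages each

# Map the 50-dimensional features to a much higher dimensionality (here 4096),
# pushing the downstream model toward the "interpolating regime".
rff = RBFSampler(gamma=0.1, n_components=4096, random_state=0)
X_rff = rff.fit_transform(X)

ranker = LGBMRanker(objective="lambdarank", n_estimators=200)
ranker.fit(X_rff, y, group=group)
scores = ranker.predict(X_rff)        # ranking scores in the over-parameterized space
```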
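
Likewise, the "ping-pong" co-training between a pointwise and a listwise LTR model can be sketched as a cyclic labeling-prediction loop in which each model pseudo-labels the unlabeled query-webpage pairs for the other. The model choices (LGBMRegressor/LGBMRanker), grade range, round count, and the use of raw listwise scores as regression targets are assumptions made here for illustration; the paper's actual pipeline may differ.

```python
# Minimal sketch of "ping-pong" co-training between a pointwise and a listwise ranker.
# Each round, one model pseudo-labels the unlabeled pairs to enlarge the other's
# training set. Model choices, grade range, and round count are illustrative.
import numpy as np
from lightgbm import LGBMRegressor, LGBMRanker

def cotrain(X_lab, y_lab, group_lab, X_unlab, group_unlab, rounds=3):
    pointwise = LGBMRegressor(n_estimators=100)                        # pointwise scorer
    listwise = LGBMRanker(objective="lambdarank", n_estimators=100)    # listwise ranker

    X_p, y_p = X_lab, y_lab  # pointwise training pool starts from labeled data only

    for _ in range(rounds):
        # Pointwise model pseudo-labels the unlabeled pairs for the listwise model;
        # predictions are rounded/clipped to integer grades (assumed range 0-4).
        pointwise.fit(X_p, y_p)
        pseudo_grades = np.clip(np.rint(pointwise.predict(X_unlab)), 0, 4).astype(int)
        X_l = np.vstack([X_lab, X_unlab])
        y_l = np.concatenate([y_lab, pseudo_grades])
        g_l = list(group_lab) + list(group_unlab)

        # Listwise model pseudo-labels the unlabeled pairs for the pointwise model
        # ("ping-pong"); raw listwise scores are used as regression targets here,
        # a simplification for the sketch (in practice they would be calibrated).
        listwise.fit(X_l, y_l, group=g_l)
        pseudo_scores = listwise.predict(X_unlab)
        X_p = np.vstack([X_lab, X_unlab])
        y_p = np.concatenate([y_lab, pseudo_scores])

    return pointwise, listwise
```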
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2023.3270750