End-to-end training of Multimodal Model and ranking Model

Bibliographic details

Published in: arXiv.org, 2024-04
Authors: Deng, Xiuqi; Xu, Lu; Li, Xiyao; Yu, Jinkai; Xue, Erpeng; Wang, Zhongyuan; Zhang, Di; Liu, Zhaojie; Zhou, Guorui; Yang, Song; Mou, Na; Shen, Jiang; Li, Han
Format: Article
Language: English
Online access: Full text
Description: Traditional recommender systems rely heavily on ID features, which often encounter challenges related to cold-start and generalization. Modeling pre-extracted content features can mitigate these issues, but is still a suboptimal solution due to the discrepancies between training tasks and model parameters. End-to-end training presents a promising solution for these problems, yet most existing works focus mainly on retrieval models, leaving multimodal techniques under-utilized.
In this paper, we propose an industrial multimodal recommendation framework named EM3: End-to-end training of Multimodal Model and ranking Model, which fully utilizes multimodal information and allows personalized ranking tasks to directly train the core modules in the multimodal model to obtain more task-oriented content features, without overburdening resource consumption. First, we propose Fusion-Q-Former, which consists of transformers and a set of trainable queries, to fuse different modalities and generate fixed-length and robust multimodal embeddings. Second, in our sequential modeling of user content interest, we utilize the Low-Rank Adaptation (LoRA) technique to alleviate the conflict between huge resource consumption and long sequence length. Third, we propose a novel Content-ID-Contrastive learning task to complement the advantages of content and ID by aligning them with each other, obtaining more task-oriented content embeddings and more generalized ID embeddings. In experiments, we implement EM3 on different ranking models in two scenarios, achieving significant improvements in both offline evaluation and online A/B tests, verifying the generalizability of our method. Ablation studies and visualizations are also presented. Furthermore, we conduct experiments on two public datasets to show that our proposed method outperforms state-of-the-art methods.
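The record carries only the abstract, not the paper's code. As a loose illustrative sketch of the two core ideas described there — a fixed set of trainable queries cross-attending over variable-length modality tokens to produce a fixed-length fused embedding, and a symmetric in-batch contrastive (InfoNCE-style) loss aligning content and ID embeddings — the following NumPy mock-up may help; all names, dimensions, and the single-layer simplification here are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def fusion_q_former(modality_tokens, queries, W_q, W_k, W_v):
    """One cross-attention layer: a fixed set of trainable queries attends
    over the concatenated modality tokens, so the output length depends
    only on the number of queries, not on the token count per modality."""
    tokens = np.concatenate(modality_tokens, axis=0)         # (T, d)
    Q, K, V = queries @ W_q, tokens @ W_k, tokens @ W_v
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (n_q, T)
    return attn @ V                                          # (n_q, d)


def cic_loss(content, ids, tau=0.1):
    """Symmetric in-batch contrastive (InfoNCE-style) loss aligning each
    item's content embedding with its own ID embedding."""
    c = content / np.linalg.norm(content, axis=1, keepdims=True)
    z = ids / np.linalg.norm(ids, axis=1, keepdims=True)
    logits = c @ z.T / tau                                   # (B, B)

    def log_softmax(x):
        m = x.max(axis=1, keepdims=True)
        return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

    diag = np.arange(len(c))
    return -(log_softmax(logits)[diag, diag].mean()
             + log_softmax(logits.T)[diag, diag].mean()) / 2


d, n_q = 16, 4
queries = rng.normal(size=(n_q, d))           # trainable in the real model
W_q, W_k, W_v = (rng.normal(size=(d, d), scale=d ** -0.5) for _ in range(3))

# One item contributing e.g. 9 visual tokens and 5 text tokens:
fused = fusion_q_former([rng.normal(size=(9, d)), rng.normal(size=(5, d))],
                        queries, W_q, W_k, W_v)
print(fused.shape)                            # fixed-length: (4, 16)

# Align a batch of 8 content embeddings with their ID embeddings:
loss = cic_loss(rng.normal(size=(8, d)), rng.normal(size=(8, d)))
print(loss > 0)                               # True: the positive pair's
                                              # softmax probability is < 1
```

Note the design point the abstract emphasizes: because the query count is fixed, the fused embedding has the same shape no matter how many tokens each modality emits, which is what makes it usable as an ordinary feature inside a ranking model.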
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Ablation; Cognitive tasks; Consumption; Modelling; Ranking; Recommender systems