Universal Multi-modal Multi-domain Pre-trained Recommendation
There is a rapidly growing research interest in modeling user preferences by pre-training on multi-domain interactions for recommender systems. However, existing pre-trained multi-domain recommenders mostly select item texts as bridges across domains, and only explore user behaviors in...
Saved in:
Main authors: | Sun, Wenqi; Xie, Ruobing; Bian, Shuqing; Zhao, Wayne Xin; Zhou, Jie |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Information Retrieval |
Online access: | Order full text |
---|---|
creator | Sun, Wenqi; Xie, Ruobing; Bian, Shuqing; Zhao, Wayne Xin; Zhou, Jie |
description | There is a rapidly growing research interest in modeling user
preferences by pre-training on multi-domain interactions for recommender
systems. However, existing pre-trained multi-domain recommenders mostly select
item texts as bridges across domains and only explore user behaviors in the
target domains. Hence, they ignore other informative multi-modal item contents
(e.g., visual information) and lack a thorough consideration of user behaviors
from all interacted domains. To address these issues, in this paper we propose
to pre-train universal multi-modal item content representations for
multi-domain recommendation, called UniM^2Rec, which can smoothly learn the
multi-modal item content representations and the multi-modal user preferences
from all domains. With the pre-trained multi-domain recommendation model,
UniM^2Rec can be efficiently and effectively transferred to new target domains
in practice. Extensive experiments on five real-world datasets in target
domains demonstrate the superiority of the proposed method over existing
competitive methods, especially in real-world recommendation scenarios where
item contents are often seriously missing or noisy. (See the illustrative code
sketch after this record.) |
doi_str_mv | 10.48550/arxiv.2311.01831 |
format | Article |
identifier | DOI: 10.48550/arxiv.2311.01831 |
language | eng |
recordid | cdi_arxiv_primary_2311_01831 |
source | arXiv.org |
subjects | Computer Science - Information Retrieval |
title | Universal Multi-modal Multi-domain Pre-trained Recommendation |
url | https://arxiv.org/abs/2311.01831 |
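The abstract above outlines the core technique: project each item's textual and visual content into one universal representation space so that user behaviors from all domains can be modeled uniformly, even when some item contents are missing or noisy. The PyTorch sketch below illustrates that idea under stated assumptions; it is not the authors' UniM^2Rec implementation, and every name and dimension in it (`MultiModalItemEncoder`, the 768-d text and 512-d image inputs, the learned fallback vectors) is a hypothetical choice for illustration.

```python
# Illustrative sketch only -- NOT the UniM^2Rec implementation.
# Assumptions: each item carries a text embedding (e.g., 768-d from a text
# encoder) and an image embedding (e.g., 512-d from a vision encoder), and
# either modality may be missing, the real-world problem the abstract
# highlights.
import torch
import torch.nn as nn


class MultiModalItemEncoder(nn.Module):
    """Project text and image features into one universal item space."""

    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Learned fallback vectors stand in for missing modalities.
        self.missing_text = nn.Parameter(torch.zeros(hidden_dim))
        self.missing_image = nn.Parameter(torch.zeros(hidden_dim))
        self.fuse = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.GELU(),
            nn.LayerNorm(hidden_dim),
        )

    def forward(self, text_emb, image_emb, text_mask, image_mask):
        # text_mask / image_mask: (batch,) booleans, True if modality present.
        t = self.text_proj(text_emb)
        v = self.image_proj(image_emb)
        # Swap in the learned fallback wherever a modality is absent.
        t = torch.where(text_mask.unsqueeze(-1), t, self.missing_text.expand_as(t))
        v = torch.where(image_mask.unsqueeze(-1), v, self.missing_image.expand_as(v))
        return self.fuse(torch.cat([t, v], dim=-1))


# Toy usage: a batch of 4 items, two of which lack an image.
enc = MultiModalItemEncoder()
text = torch.randn(4, 768)
image = torch.randn(4, 512)
has_text = torch.tensor([True, True, True, True])
has_image = torch.tensor([True, False, True, False])
item_repr = enc(text, image, has_text, has_image)
print(item_repr.shape)  # torch.Size([4, 256])
```

Masking with learned fallback vectors is one simple way to keep the encoder well-defined when an item lacks a modality; the resulting item representations could then feed any sequential user-behavior model across domains, which is the transfer setting the abstract describes.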