Multi-Task Recommendations with Reinforcement Learning

In recent years, Multi-task Learning (MTL) has yielded immense success in Recommender System (RS) applications. However, current MTL-based recommendation models tend to disregard the session-wise patterns of user-item interactions, because they are predominantly built on item-wise datasets. Moreover, balancing multiple objectives has long been a challenge in this field, which existing works typically sidestep via linear estimations. To address these issues, we propose a Reinforcement Learning (RL) enhanced MTL framework, RMTL, which combines the losses of different recommendation tasks using dynamic weights. Specifically, RMTL addresses the aforementioned issues by (i) constructing an MTL environment from session-wise interactions, (ii) training a multi-task actor-critic network structure that is compatible with most existing MTL-based recommendation models, and (iii) optimizing and fine-tuning the MTL loss function using the weights generated by the critic networks. Experiments on two real-world public datasets demonstrate the effectiveness of RMTL, which achieves higher AUC than state-of-the-art MTL-based recommendation models. Additionally, we evaluate and validate RMTL's compatibility and transferability across various MTL models.
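
To make the abstract's core mechanism concrete, the following is a minimal PyTorch-style sketch of combining per-task losses with dynamic weights produced by critic networks. It illustrates the idea only and is not the authors' RMTL implementation: the Critic architecture, the session-state representation, and the (1 - weight) combination rule are all assumptions, since the abstract does not specify them.

import torch
import torch.nn as nn

class Critic(nn.Module):
    # Hypothetical critic: maps a session-state vector to a weight in (0, 1).
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)

def weighted_mtl_loss(task_losses, critics, state):
    # Combine per-task losses (e.g. click and conversion) using weights
    # derived from the critics. The exact rule in RMTL is not given in
    # the abstract; (1 - critic value), detached so the weights act as
    # constants for the task networks, is one plausible choice.
    total = torch.zeros(())
    for loss, critic in zip(task_losses, critics):
        w = critic(state).mean().detach()
        total = total + (1.0 - w) * loss
    return total

A trainer would pass the batch's per-task loss tensors and a shared session-state tensor to weighted_mtl_loss, then backpropagate through the returned total as usual.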

Bibliographic Details
Published in: arXiv.org, 2023-03
Main authors: Liu, Ziru; Tian, Jiejie; Cai, Qingpeng; Zhao, Xiangyu; Gao, Jingtong; Liu, Shuchang; Chen, Dayou; He, Tonghao; Zheng, Dong; Jiang, Peng; Gai, Kun
Format: Article
Language: English
Subjects: Computer Science - Information Retrieval; Computer Science - Learning; Datasets; Recommender systems
Online access: Full text (https://doi.org/10.48550/arXiv.2302.03328)
DOI: 10.48550/arxiv.2302.03328
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals