Language-Enhanced Session-Based Recommendation with Decoupled Contrastive Learning

Session-based recommendation techniques aim to capture dynamic user behavior by analyzing past interactions. However, existing methods rely heavily on historical item ID sequences to extract user preferences, leading to challenges such as popularity bias and cold-start problems. In this paper, we propose a hybrid multimodal approach for session-based recommendation to address these challenges. Our approach combines different modalities, including textual content and item IDs, leveraging the complementary nature of these modalities using CatBoost. To learn universal item representations, we design a language representation-based item retrieval architecture that extracts features from the textual content using pre-trained language models. Furthermore, we introduce a novel Decoupled Contrastive Learning method to enhance the effectiveness of the language representation. This technique decouples the sequence representation and item representation spaces, facilitating bidirectional alignment through dual-queue contrastive learning. At the same time, the momentum queue provides a large number of negative samples, further improving the effectiveness of contrastive learning. Our approach yielded competitive results, securing a 5th place ranking in KDD CUP 2023 Task 1. We have released the source code and pre-trained models associated with this work.
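The dual-queue, momentum-based contrastive alignment described in the abstract can be illustrated with a minimal sketch. This is an illustration only, assuming PyTorch, placeholder linear encoders, and hypothetical dimensions and hyperparameters; it is not the authors' released implementation.

    # Minimal sketch of dual-queue decoupled contrastive learning, loosely following
    # the abstract: separate sequence/item representation spaces, momentum encoders,
    # and two FIFO queues supplying negatives for a bidirectional InfoNCE loss.
    import torch
    import torch.nn.functional as F

    class DualQueueContrast(torch.nn.Module):
        def __init__(self, dim=128, queue_size=4096, temperature=0.07, momentum=0.999):
            super().__init__()
            self.t = temperature
            self.m = momentum
            # Decoupled encoders: one for session sequences, one for item (text) features.
            self.seq_encoder = torch.nn.Linear(dim, dim)    # placeholder sequence encoder
            self.item_encoder = torch.nn.Linear(dim, dim)   # placeholder item/text encoder
            # Momentum copies used only to produce keys for the queues.
            self.seq_encoder_m = torch.nn.Linear(dim, dim)
            self.item_encoder_m = torch.nn.Linear(dim, dim)
            self._copy_weights(self.seq_encoder, self.seq_encoder_m)
            self._copy_weights(self.item_encoder, self.item_encoder_m)
            # Two queues: item representations and sequence representations.
            self.register_buffer("item_queue", F.normalize(torch.randn(queue_size, dim), dim=1))
            self.register_buffer("seq_queue", F.normalize(torch.randn(queue_size, dim), dim=1))

        @staticmethod
        def _copy_weights(src, dst):
            for p_s, p_d in zip(src.parameters(), dst.parameters()):
                p_d.data.copy_(p_s.data)
                p_d.requires_grad = False

        @torch.no_grad()
        def _momentum_update(self):
            # Exponential moving average of the online encoders.
            for enc, enc_m in ((self.seq_encoder, self.seq_encoder_m),
                               (self.item_encoder, self.item_encoder_m)):
                for p, p_m in zip(enc.parameters(), enc_m.parameters()):
                    p_m.data.mul_(self.m).add_(p.data, alpha=1.0 - self.m)

        @torch.no_grad()
        def _enqueue(self, queue, keys):
            # FIFO update: drop the oldest entries, append the newest momentum keys.
            return torch.cat([queue[keys.size(0):], keys], dim=0)

        def _info_nce(self, q, pos, queue):
            # Positive similarity against the aligned pair, negatives from the queue.
            l_pos = (q * pos).sum(dim=1, keepdim=True)
            l_neg = q @ queue.t()
            logits = torch.cat([l_pos, l_neg], dim=1) / self.t
            labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
            return F.cross_entropy(logits, labels)

        def forward(self, seq_feats, item_feats):
            # Online representations (gradients flow through these).
            s = F.normalize(self.seq_encoder(seq_feats), dim=1)
            v = F.normalize(self.item_encoder(item_feats), dim=1)
            with torch.no_grad():
                self._momentum_update()
                s_m = F.normalize(self.seq_encoder_m(seq_feats), dim=1)
                v_m = F.normalize(self.item_encoder_m(item_feats), dim=1)
            # Bidirectional alignment: sequence -> item queue and item -> sequence queue.
            loss = self._info_nce(s, v_m, self.item_queue) + self._info_nce(v, s_m, self.seq_queue)
            self.item_queue = self._enqueue(self.item_queue, v_m)
            self.seq_queue = self._enqueue(self.seq_queue, s_m)
            return loss

    if __name__ == "__main__":
        model = DualQueueContrast()
        seq = torch.randn(32, 128)   # e.g. pooled session-sequence features (hypothetical)
        item = torch.randn(32, 128)  # e.g. pre-trained language-model item-text embeddings
        print(model(seq, item).item())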

Bibliographic Details
Main Authors: Zhang, Zhipeng; Tong, Piao; Ma, Yingwei; Liu, Qiao; Liu, Xujiang; Luo, Xu
Format: Article
Language: English
Subjects: Computer Science - Information Retrieval
Online Access: Order full text
DOI: 10.48550/arxiv.2307.10650
Record ID: cdi_arxiv_primary_2307_10650
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T09%3A01%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Language-Enhanced%20Session-Based%20Recommendation%20with%20Decoupled%20Contrastive%20Learning&rft.au=Zhang,%20Zhipeng&rft.date=2023-07-20&rft_id=info:doi/10.48550/arxiv.2307.10650&rft_dat=%3Carxiv_GOX%3E2307_10650%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true