A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation
The training paradigm integrating large language models (LLM) is gradually reshaping sequential recommender systems (SRS) and has shown promising results. However, most existing LLM-enhanced methods rely on rich textual information on the item side and instance-level supervised fine-tuning (SFT) to...
Main authors: | Liu, Dugang; Xian, Shenxian; Lin, Xiaolin; Zhang, Xiaolian; Zhu, Hong; Fang, Yuan; Chen, Zhen; Ming, Zhong |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Information Retrieval |
Online access: | Order full text |
creator | Liu, Dugang; Xian, Shenxian; Lin, Xiaolin; Zhang, Xiaolian; Zhu, Hong; Fang, Yuan; Chen, Zhen; Ming, Zhong |
description | The training paradigm that integrates large language models (LLMs) is
gradually reshaping sequential recommender systems (SRS) and has shown promising results.
However, most existing LLM-enhanced methods rely on rich textual information on
the item side and on instance-level supervised fine-tuning (SFT) to inject
collaborative information into the LLM, which is inefficient and of limited use in
many applications. To alleviate these problems, this paper proposes a
practice-friendly LLM-enhanced paradigm with preference parsing (P2Rec) for
SRS. Specifically, in the information reconstruction stage, we design a new
user-level SFT task for collaborative information injection with the assistance
of a pre-trained SRS model, which is more efficient and compatible with limited
textual information. Our goal is to let the LLM learn to reconstruct a corresponding
prior preference distribution from each user's interaction sequence, where the LLM
needs to effectively parse the latent category of each item and the
relationships between different items to accomplish this task. In the
information augmentation stage, we feed each item into the LLM to obtain a set of
enhanced embeddings that combine collaborative information with the LLM's inference
capabilities. These embeddings can then be used to help train various downstream
SRS models. Finally, we verify the effectiveness and efficiency of our P2Rec
on three SRS benchmark datasets. |
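
The abstract above outlines a two-stage design: a user-level SFT task in which the LLM reconstructs a prior preference distribution derived from a pre-trained SRS model, followed by an augmentation stage that turns LLM outputs into enhanced item embeddings. The snippet below is only a minimal, illustrative sketch of that idea based on the abstract; the module names, tensor shapes, and the category-averaging step are assumptions for demonstration, not the paper's actual implementation.

```python
# Illustrative sketch of the two-stage idea described in the abstract.
# All names, shapes, and the category-averaging target are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ITEMS, NUM_CATEGORIES, DIM = 1000, 20, 64

# Stand-ins for the pre-trained SRS model: a frozen item-embedding table and a
# latent-category head that assigns each item a soft category distribution.
srs_item_emb = nn.Embedding(NUM_ITEMS, DIM)
category_head = nn.Linear(DIM, NUM_CATEGORIES)


def prior_preference_distribution(user_seq: torch.Tensor) -> torch.Tensor:
    """Stage 1 (information reconstruction) target: average the soft category
    assignments of the items a user interacted with, giving a prior preference
    distribution that the LLM is fine-tuned (user-level SFT) to reconstruct."""
    with torch.no_grad():
        item_vecs = srs_item_emb(user_seq)                 # (seq_len, DIM)
        cat_logits = category_head(item_vecs)              # (seq_len, NUM_CATEGORIES)
        return F.softmax(cat_logits, dim=-1).mean(dim=0)   # (NUM_CATEGORIES,)


# Stand-in for the fine-tuned LLM: any module that maps an item id to a vector.
llm_backbone = nn.Sequential(nn.Embedding(NUM_ITEMS, DIM), nn.Linear(DIM, DIM))


def enhanced_item_embedding(item_ids: torch.Tensor) -> torch.Tensor:
    """Stage 2 (information augmentation): feed each item through the LLM and
    keep the resulting vector as an enhanced embedding for later SRS training."""
    return llm_backbone(item_ids)


if __name__ == "__main__":
    seq = torch.tensor([3, 17, 42, 7])                        # one user's interaction sequence
    print(prior_preference_distribution(seq).shape)           # torch.Size([20])
    print(enhanced_item_embedding(torch.tensor([42])).shape)  # torch.Size([1, 64])
```

In this sketch the pre-trained SRS model is approximated by a frozen embedding table with a latent-category head, and the fine-tuned LLM by a generic encoder; in practice both would be replaced by the actual models described in the paper.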
doi_str_mv | 10.48550/arxiv.2406.00333 |
format | Article |
identifier | DOI: 10.48550/arxiv.2406.00333 |
language | eng |
recordid | cdi_arxiv_primary_2406_00333 |
source | arXiv.org |
subjects | Computer Science - Information Retrieval |
title | A Practice-Friendly LLM-Enhanced Paradigm with Preference Parsing for Sequential Recommendation |