Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations

Recommendation systems help users find matched items based on their previous behaviors. Personalized recommendation becomes challenging in the absence of historical user-item interactions, a practical problem for startups known as the system cold-start recommendation. While existing research addresses cold-start issues for either users or items, we still lack solutions for system cold-start scenarios. To tackle the problem, we propose PromptRec, a simple but effective approach based on in-context learning of language models, where we transform the recommendation task into a sentiment analysis task on natural language containing user and item profiles. However, this naive approach relies heavily on the strong in-context learning ability that emerges in large language models, which could suffer from significant latency in online recommendation. To address this challenge, we propose to enhance small language models for recommender systems with a data-centric pipeline, which consists of: (1) constructing a refined corpus for model pre-training; (2) constructing a decomposed prompt template via prompt pre-training. These correspond to the development of training data and inference data, respectively. The pipeline is supported by a theoretical framework that formalizes the connection between in-context recommendation and language modeling. To evaluate our approach, we introduce a cold-start recommendation benchmark; the results demonstrate that the enhanced small language models achieve cold-start recommendation performance comparable to that of large models with only 17% of the inference time. To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem. We believe our findings will provide valuable insights for future work. The benchmark and implementations are available at https://github.com/JacksonWuxs/PromptRec.
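The core idea described above, turning recommendation into sentiment classification over a natural-language prompt built from user and item profiles, can be sketched as follows. The profile fields, prompt wording, and scoring interface here are illustrative assumptions, not taken from the paper; in practice `positive_prob` would be backed by a (small) pretrained language model, for example by comparing the next-token probabilities of verbalizers such as "good" and "bad".

```python
# Illustrative sketch of recommendation-as-sentiment-classification,
# in the spirit of the PromptRec idea. Field names and prompt wording
# are hypothetical, not the paper's actual template.
from typing import Callable, Dict, List, Tuple

def build_prompt(user: Dict[str, str], item: Dict[str, str]) -> str:
    """Render user and item profiles as a natural-language prompt
    ending just before the sentiment verbalizer slot."""
    return (
        f"{user['name']} is a {user['age']}-year-old {user['occupation']}. "
        f"The item is {item['title']}, a {item['category']}. "
        f"Overall, {user['name']} feels the item is"
    )

def rank_items(
    user: Dict[str, str],
    items: List[Dict[str, str]],
    positive_prob: Callable[[str], float],
) -> List[Tuple[str, float]]:
    """Score each item by the language model's probability of a positive
    verbalizer token (e.g. "good" vs. "bad") and rank in descending order."""
    scored = [
        (item["title"], positive_prob(build_prompt(user, item)))
        for item in items
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

With a real model, `positive_prob` would tokenize the prompt, run a forward pass, and normalize the logits of the positive and negative verbalizer tokens; the ranking logic itself is model-agnostic.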


Bibliographic details
Published in: arXiv.org, 2024-03
Main authors: Wu, Xuansheng; Zhou, Huachi; Shi, Yucheng; Yao, Wenlin; Huang, Xiao; Liu, Ninghao
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Benchmarks; Cold; Cold starts; Customization; Data mining; Recommender systems; Sentiment analysis