An α-β-Divergence-Generalized Recommender for Highly Accurate Predictions of Missing User Preferences

To quantify user-item preferences, a recommender system (RS) commonly adopts a high-dimensional and sparse (HiDS) matrix. Such a matrix can be represented by a non-negative latent factor analysis model relying on a single latent factor (LF)-dependent, non-negative, and multiplicative update algorithm. However, existing models' representative abilities are limited due to their specialized learning objectives. To address this issue, this study proposes an α-β-divergence-generalized model that enjoys fast convergence. Its ideas are threefold: 1) generalizing the learning objective with α-β-divergence to achieve a highly accurate representation of HiDS data; 2) incorporating a generalized momentum method into parameter learning for fast convergence; and 3) implementing self-adaptation of controllable hyperparameters for excellent practicability. Empirical studies on six HiDS matrices from real RSs demonstrate that, compared with state-of-the-art LF models, the proposed one achieves significant accuracy and efficiency gains in estimating the huge volume of missing data in an HiDS matrix.
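For context, the α-β-divergence named in the abstract is usually written in the parameterization popularized by Cichocki, Cruces, and Amari; the paper's exact objective, regularization, and handling of the special cases may differ, so the following is only a reference form:

    D_{\alpha\beta}(P \,\|\, Q)
      = -\frac{1}{\alpha\beta} \sum_{(u,i) \in \Lambda}
        \Bigl( p_{u,i}^{\alpha} q_{u,i}^{\beta}
               - \tfrac{\alpha}{\alpha+\beta}\, p_{u,i}^{\alpha+\beta}
               - \tfrac{\beta}{\alpha+\beta}\, q_{u,i}^{\alpha+\beta} \Bigr),
      \qquad \alpha,\ \beta,\ \alpha+\beta \neq 0,

where p_{u,i} are the observed entries of the HiDS matrix, q_{u,i} are their non-negative low-rank estimates, and Λ is the set of observed (user, item) pairs. Choosing α = β = 1 recovers half the squared Euclidean distance, and the limit α = 1, β → 0 recovers the generalized Kullback-Leibler divergence, which is why a single objective from this family can subsume the specialized objectives of earlier non-negative LF models.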

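The abstract also refers to a single-LF-dependent, non-negative, multiplicative update accelerated by a generalized momentum method. The sketch below is a minimal illustration of that style of training for the familiar Euclidean case (α = β = 1), restricted to observed entries; the function name, rank, momentum coefficient, and the exact way momentum enters the update are assumptions made for illustration, not the paper's algorithm.

    # Illustrative sketch (assumed details, not the paper's algorithm):
    # non-negative latent factor analysis of a sparse ratings matrix via
    # multiplicative updates on observed entries, with momentum on each increment.
    import numpy as np

    def nlfa_momentum(rows, cols, vals, n_users, n_items,
                      rank=20, gamma=0.5, epochs=50, eps=1e-12, seed=0):
        rows = np.asarray(rows)
        cols = np.asarray(cols)
        vals = np.asarray(vals, dtype=float)
        rng = np.random.default_rng(seed)
        U = rng.random((n_users, rank)) + eps   # non-negative user factors
        V = rng.random((n_items, rank)) + eps   # non-negative item factors
        dU = np.zeros_like(U)                   # previous increments (momentum state)
        dV = np.zeros_like(V)
        for _ in range(epochs):
            # Predictions on the observed (user, item) pairs only.
            pred = np.sum(U[rows] * V[cols], axis=1)
            # Numerator/denominator of the Euclidean multiplicative update,
            # accumulated over the observed entries.
            num_U = np.zeros_like(U); den_U = np.zeros_like(U)
            num_V = np.zeros_like(V); den_V = np.zeros_like(V)
            np.add.at(num_U, rows, vals[:, None] * V[cols])
            np.add.at(den_U, rows, pred[:, None] * V[cols])
            np.add.at(num_V, cols, vals[:, None] * U[rows])
            np.add.at(den_V, cols, pred[:, None] * U[rows])
            # Multiplicative increment written additively, plus a momentum term;
            # truncation at eps keeps both factor matrices non-negative.
            dU = U * (num_U / (den_U + eps) - 1.0) + gamma * dU
            dV = V * (num_V / (den_V + eps) - 1.0) + gamma * dV
            U = np.maximum(U + dU, eps)
            V = np.maximum(V + dV, eps)
        return U, V

Given the (row, column, value) triples of a ratings matrix, a missing preference of user u for item i would then be estimated as the inner product U[u] @ V[i].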
Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2022-08, Vol. 52 (8), p. 8006-8018
Main authors: Shang, Mingsheng; Yuan, Ye; Luo, Xin; Zhou, MengChu
Format: Article
Language: English
Online access: Order full text
container_end_page 8018
container_issue 8
container_start_page 8006
container_title IEEE transactions on cybernetics
container_volume 52
creator Shang, Mingsheng
Yuan, Ye
Luo, Xin
Zhou, MengChu
description To quantify user-item preferences, a recommender system (RS) commonly adopts a high-dimensional and sparse (HiDS) matrix. Such a matrix can be represented by a non-negative latent factor analysis model relying on a single latent factor (LF)-dependent, non-negative, and multiplicative update algorithm. However, existing models' representative abilities are limited due to their specialized learning objectives. To address this issue, this study proposes an α-β-divergence-generalized model that enjoys fast convergence. Its ideas are threefold: 1) generalizing the learning objective with α-β-divergence to achieve a highly accurate representation of HiDS data; 2) incorporating a generalized momentum method into parameter learning for fast convergence; and 3) implementing self-adaptation of controllable hyperparameters for excellent practicability. Empirical studies on six HiDS matrices from real RSs demonstrate that, compared with state-of-the-art LF models, the proposed one achieves significant accuracy and efficiency gains in estimating the huge volume of missing data in an HiDS matrix.
doi_str_mv 10.1109/TCYB.2020.3026425
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 2168-2267; EISSN: 2168-2275; PMID: 33600329
ispartof IEEE transactions on cybernetics, 2022-08, Vol.52 (8), p.8006-8018
issn 2168-2267
2168-2275
language eng
recordid cdi_proquest_journals_2691874268
source IEEE Electronic Library (IEL)
subjects Algorithms
big data
Computational modeling
Convergence
convergence analysis
Data models
Empirical analysis
Euclidean distance
Factor analysis
high-dimensional and sparse (HiDS) data
Learning
Linear programming
machine learning
Missing data
missing data estimation
momentum
non-negative latent factor analysis (NLFA)
Predictive models
recommender system (RS)
Recommender systems
Sparse matrices
α-β-divergence
title An α-β-Divergence-Generalized Recommender for Highly Accurate Predictions of Missing User Preferences