Proximal maximum margin matrix factorization for collaborative filtering
Highlights:
• We propose an alternative and new MMMF scheme for discrete-valued rating matrices.
• Our work draws motivation from the recent advent of proximal support vector machines.
• The proposed method overcomes the problem of overfitting.
• We validate our hypothesis by conducting experiments on real and synthetic datasets.
Published in: | Pattern Recognition Letters, 2017-01, Vol. 86, pp. 62–67 |
Main authors: | Kumar, Vikas; Pujari, Arun K.; Sahu, Sandeep Kumar; Kagita, Venkateswara Rao; Padmanabhan, Vineet |
Format: | Article |
Language: | English |
Subjects: | Classification; Collaborative filtering; Discriminant analysis; Factorization; Filtering software; Filtration; Hyperplanes; Matrix; Matrix completion; Matrix factorization; Motivation; Recommender systems; Support vector machines |
Online access: | Full text |
Abstract:
Maximum Margin Matrix Factorization (MMMF) has been a successful learning method in collaborative filtering research. For a partially observed ordinal rating matrix, the focus is on determining low-norm latent factor matrices U (of users) and V (of items) that simultaneously approximate the observed entries under some loss measure and predict the unobserved entries. When the rating matrix contains only two levels (±1), the rows of V can be viewed as points in k-dimensional space and the rows of U as decision hyperplanes in this space separating the +1 entries from the −1 entries. The concept of optimizing a loss function to determine the separating hyperplane is prevalent in support vector machine (SVM) research, and when the hinge/smooth hinge loss is used, the hyperplanes act as maximum-margin separators. In MMMF, a rating matrix with multiple discrete values is handled by extending the hinge loss function to multiple levels. MMMF is an efficient technique for collaborative filtering, but it has several shortcomings. A prominent one is overfitting: if learning is prolonged to decrease the training error, the generalization error grows. In this paper, we propose an alternative and new maximum margin factorization scheme for discrete-valued rating matrices that overcomes the problem of overfitting. Our work draws motivation from recent work on proximal support vector machines (PSVMs), wherein two parallel hyperplanes are used for binary classification and points are classified by assigning them to the class corresponding to the closer of the two parallel hyperplanes. In other words, proximity to a decision hyperplane is used as the classification criterion. We show that a similar concept can be used to factorize the rating matrix if the loss function is suitably defined. The present scheme of matrix factorization has advantages over MMMF (similar to the advantages of PSVM over standard SVM). We validate our hypothesis by carrying out experiments on real and synthetic datasets.
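For orientation, the binary-rating MMMF objective the abstract alludes to is usually written with the smooth hinge, following Rennie and Srebro's fast MMMF formulation. The notation below (Ω for the set of observed entries, λ for the regularization weight) is ours; the paper's own multi-level ordinal extension with per-user thresholds is given in the full text.

```latex
% Smooth hinge loss and the binary (+/-1) MMMF objective;
% \Omega = set of observed entries, \lambda = regularization weight.
h(z) =
\begin{cases}
\tfrac{1}{2} - z,        & z \le 0, \\
\tfrac{1}{2}(1 - z)^{2}, & 0 < z < 1, \\
0,                       & z \ge 1,
\end{cases}
\qquad
J(U, V) = \sum_{(i,j) \in \Omega} h\bigl(Y_{ij}\, U_i V_j^{\top}\bigr)
        + \frac{\lambda}{2}\bigl(\lVert U \rVert_F^{2} + \lVert V \rVert_F^{2}\bigr).
```

And here is a minimal Python sketch of the proximal idea the abstract describes, assuming a PSVM-style squared penalty that pulls each observed prediction toward the parallel plane at its label (+1 or −1) rather than merely past a margin. The function names, the squared loss, and the plain gradient step are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def proximal_mmmf_loss(U, V, Y, lam):
    """Hypothetical proximal-style objective on the observed entries of Y.

    Y : m x n array with entries in {-1, 0, +1}; 0 marks an unobserved cell.
    U : m x k user factors; V : n x k item factors; lam : regularization weight.
    Each observed prediction <U_i, V_j> is pulled toward the plane at Y_ij
    (i.e. toward +1 or -1), PSVM-style, via a squared penalty.
    """
    X = U @ V.T                       # real-valued predictions, m x n
    mask = Y != 0                     # observed positions
    resid = 1.0 - Y[mask] * X[mask]   # deviation from the class plane
    return np.sum(resid**2) + 0.5 * lam * (np.sum(U**2) + np.sum(V**2))

def gradient_step(U, V, Y, lam, lr=1e-3):
    """One plain gradient-descent step on the objective above."""
    X = U @ V.T
    mask = (Y != 0).astype(float)
    G = -2.0 * mask * Y * (1.0 - Y * X)   # dLoss/dX, zero where unobserved
    return U - lr * (G @ V + lam * U), V - lr * (G.T @ U + lam * V)

# Toy usage: factorize a sparse +/-1 matrix and watch the loss fall.
rng = np.random.default_rng(0)
Y = rng.choice([-1, 0, 1], size=(50, 40), p=[0.1, 0.8, 0.1])
U = 0.1 * rng.standard_normal((50, 5))
V = 0.1 * rng.standard_normal((40, 5))
for _ in range(200):
    U, V = gradient_step(U, V, Y, lam=0.5, lr=1e-3)
print(proximal_mmmf_loss(U, V, Y, lam=0.5))
```

Note the contrast with the hinge: the two-sided squared penalty keeps a nonzero training signal even for entries already on the correct side of the plane, which is one way to see how proximity, rather than margin violation alone, drives the fit.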
DOI: | 10.1016/j.patrec.2016.12.016 |
Publisher: | Amsterdam: Elsevier B.V. |
ISSN: | 0167-8655 |
EISSN: | 1872-7344 |
Source: | Elsevier ScienceDirect Journals Complete |
URL: | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-13T07%3A51%3A29IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Proximal%20maximum%20margin%20matrix%20factorization%20for%20collaborative%20filtering&rft.jtitle=Pattern%20recognition%20letters&rft.au=Kumar,%20Vikas&rft.date=2017-01-15&rft.volume=86&rft.spage=62&rft.epage=67&rft.pages=62-67&rft.issn=0167-8655&rft.eissn=1872-7344&rft_id=info:doi/10.1016/j.patrec.2016.12.016&rft_dat=%3Cproquest_cross%3E2089725695%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2089725695&rft_id=info:pmid/&rft_els_id=S0167865516303713&rfr_iscdi=true |